Category Archives: SharePoint Integration

How to: Create a provider-hosted app for SharePoint to access SAP data via SAP Gateway for Microsoft

You can create an app for SharePoint that reads and writes SAP data, and optionally reads and writes SharePoint data, by using SAP Gateway for Microsoft and the Azure AD Authentication Library for .NET. This article provides the procedures for how you can design the app for SharePoint to get authorized access to SAP.



The following are prerequisites to the procedures in this article:


Code sample: SharePoint 2013: Using the SAP Gateway to Microsoft in an app for SharePoint

OAuth 2.0 in Azure AD enables applications to access multiple resources hosted by Microsoft Azure, and SAP Gateway for Microsoft is one of them. With OAuth 2.0, applications, in addition to users, are security principals. Application principals require authentication and authorization to protected resources in addition to (and sometimes instead of) users. The process involves an OAuth "flow" in which the application, which can be an app for SharePoint, obtains an access token (and refresh token) that is accepted by all of the Microsoft Azure-hosted services and applications that are configured to use Azure AD as an OAuth 2.0 authorization server. The process is very similar to the way that the remote components of a provider-hosted app for SharePoint get authorization to SharePoint, as described in Creating apps for SharePoint that use low-trust authorization and its child articles. However, that low-trust authorization system uses Microsoft Azure Access Control Service (ACS), rather than Azure AD, as the trusted token issuer.

Tip
If your app for SharePoint accesses SharePoint in addition to accessing SAP Gateway for Microsoft, then it will need to use both systems: Azure AD to get an access token to SAP Gateway for Microsoft and the ACS authorization system to get an access token to SharePoint. The tokens from the two sources are not interchangeable. For more information, see Optionally, add SharePoint access to the ASP.NET application.

For a detailed description and diagram of the OAuth flow used by OAuth 2.0 in Azure AD, see Authorization Code Grant Flow. (For a similar description, and a diagram, of the flow for accessing SharePoint, see the steps in the Context Token flow.)

Create the Visual Studio solution

  1. Create an App for SharePoint project in Visual Studio with the following steps. (The continuing example in this article assumes you are using C#; but you can start an app for SharePoint project in the Visual Basic section of the new project templates as well.)
    1. In the New app for SharePoint wizard, name the project and click OK. For the continuing example, use SAP2SharePoint.
    2. Specify the domain URL of your Office 365 Developer Site (including a final forward slash) as the debugging site; for example, https://<O365_domain>.sharepoint.com/. Specify Provider-hosted as the app type. Click Next.
    3. Choose a project type. For the continuing example, choose ASP.NET Web Forms Application. (You can also make ASP.NET MVC applications that access SAP Gateway for Microsoft.) Click Next.
    4. Choose Azure ACS as the authentication system. (Your app for SharePoint will use this system if it accesses SharePoint. It does not use this system when it accesses SAP Gateway for Microsoft.) Click Finish.
  2. After the project is created, you are prompted to log in to the Office 365 account. Use the credentials of an account administrator; for example, Bob@<O365_domain>.onmicrosoft.com.
  3. There are two projects in the Visual Studio solution: the app for SharePoint project proper and an ASP.NET Web Forms project. Add the Active Directory Authentication Library (ADAL) package to the ASP.NET project with these steps:
    1. Right-click the References folder in the ASP.NET project (named SAP2SharePointWeb in the continuing example) and select Manage NuGet Packages.
    2. In the dialog that opens, select Online on the left. Enter Microsoft.IdentityModel.Clients.ActiveDirectory in the search box.
    3. When the ADAL library appears in the search results, click the Install button beside it, and accept the license when prompted.
  4. Add the Json.net package to the ASP.NET project with these steps:
    1. Enter Json.net in the search box. If this produces too many hits, try searching on Newtonsoft.json.
    2. When Json.net appears in the search results, click the Install button beside it.
  5. Click Close.
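
If you prefer the Package Manager Console to the Manage NuGet Packages dialog, the same two packages can be installed with the following commands (these are the standard NuGet package IDs; run them with the ASP.NET project selected as the default project):

    Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory
    Install-Package Newtonsoft.Json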

Register your web application with Azure AD

  1. Log in to the Azure Management portal with your Azure administrator account.
    Note
    For security purposes, we recommend against using an administrator account when developing apps.
  2. Choose Active Directory on the left side.
  3. Click on your directory.
  4. Choose APPLICATIONS (on the top navigation bar).
  5. Choose Add on the toolbar at the bottom of the screen.
  6. On the dialog that opens, choose Add an application my organization is developing.
  7. On the ADD APPLICATION dialog, give the application a name. For the continuing example, use ContosoAutomobileCollection.
  8. Choose Web Application And/Or Web API as the application type, and then click the right arrow button.
  9. On the second page of the dialog, use the SSL debugging URL from the ASP.NET project in the Visual Studio solution as the SIGN-ON URL. You can find the URL using the following steps. (You need to register the app initially with the debugging URL so that you can run the Visual Studio debugger (F5). When your app is ready for staging, you will re-register it with its staging Azure Web Site URL. See Modify the app and stage it to Azure and Office 365.)
    1. Highlight the ASP.NET project in Solution Explorer.
    2. In the Properties window, copy the value of the SSL URL property. An example is https://localhost:44300/.
    3. Paste it into the SIGN-ON URL on the ADD APPLICATION dialog.
  10. For the APP ID URI, give the application a unique URI, such as the application name appended to the end of the SSL URL; for example https://localhost:44300/ContosoAutomobileCollection.
  11. Click the checkmark button. The Azure dashboard for the application opens with a success message.
  12. Choose CONFIGURE on the top of the page.
  13. Scroll to the CLIENT ID and make a copy of it. You will need it for a later procedure.
  14. In the keys section, create a key. It won’t appear initially. Click SAVE at the bottom of the page and the key will be visible. Make a copy of it. You will need it for a later procedure.
  15. Scroll to permissions to other applications and select your SAP Gateway for Microsoft service application.
  16. Open the Delegated Permissions drop down list and enable the boxes for the permissions to the SAP Gateway for Microsoft service that your app for SharePoint will need.
  17. Click SAVE at the bottom of the screen.

Configure the application to communicate with Azure AD

  1. In Visual Studio, open the web.config file in the ASP.NET project.
  2. In the <appSettings> section, the Office Developer Tools for Visual Studio have added elements for the ClientID and ClientSecret of the app for SharePoint. (These are used in the Azure ACS authorization system if the ASP.NET application accesses SharePoint. You can ignore them for the continuing example, but do not delete them. They are required in provider-hosted apps for SharePoint even if the app is not accessing SharePoint data. Their values will change each time you press F5 in Visual Studio.) Add the following two elements to the section. These are used by the application to authenticate to Azure AD. (Remember that applications, as well as users, are security principals in OAuth-based authentication and authorization systems.)
    <add key="ida:ClientID" value="" />
    <add key="ida:ClientKey" value="" />
    
  3. Insert the client ID that you saved from your Azure AD directory in the earlier procedure as the value of the ida:ClientID key. Leave the casing and punctuation exactly as you copied it and be careful not to include a space character at the beginning or end of the value. For the ida:ClientKey key use the key that you saved from the directory. Again, be careful not to introduce any space characters or change the value in any way. The <appSettings> section should now look something like the following. (The ClientId value may have a GUID or an empty string.)
    <appSettings>
      <add key="ClientId" value="" />
      <add key="ClientSecret" value="LypZu2yVajlHfPLRn5J2hBrwCk5aBOHxE4PtKCjIQkk=" />
      <add key="ida:ClientID" value="4da99afe-08b5-4bce-bc66-5356482ec2df" />
      <add key="ida:ClientKey" value="URwh/oiPay/b5jJWYHgkVdoE/x7gq3zZdtcl/cG14ss=" />
    </appSettings>
    
    Note
    Your application is known to Azure AD by the “localhost” URL you used to register it. The client ID and client key are associated with that identity. When you are ready to stage your application to an Azure Web Site, you will re-register it with a new URL.
  4. Still in the appSettings section, add an Authority key and set its value to the Office 365 domain (some_domain.onmicrosoft.com) of your organizational account. In the continuing example, the organizational account is Bob@<O365_domain>.onmicrosoft.com, so the authority is <O365_domain>.onmicrosoft.com.
    <add key="Authority" value="<O365_domain>.onmicrosoft.com" />
    
  5. Still in the appSettings section, add an AppRedirectUrl key and set its value to the page that the user’s browser should be redirected to after the ASP.NET app has obtained an authorization code from Azure AD. Usually, this is the same page that the user was on when the call to Azure AD was made. In the continuing example, use the SSL URL value with “/Pages/Default.aspx” appended to it as shown below. (This is another value that you will change for staging.)
    <add key="AppRedirectUrl" value="https://localhost:44322/Pages/Default.aspx" />
    
  6. Still in the appSettings section, add a ResourceUrl key and set its value to the APP ID URI of SAP Gateway for Microsoft (not the APP ID URI of your ASP.NET application). Obtain this value from the SAP Gateway for Microsoft administrator. The following is an example.
    <add key="ResourceUrl" value="http://<SAP_gateway_domain>.cloudapp.net/" />
    

    The <appSettings> section should now look something like this:

    <appSettings>
      <add key="ClientId" value="06af1059-8916-4851-a271-2705e8cf53c6" />
      <add key="ClientSecret" value="LypZu2yVajlHfPLRn5J2hBrwCk5aBOHxE4PtKCjIQkk=" />
      <add key="ida:ClientID" value="4da99afe-08b5-4bce-bc66-5356482ec2df" />
      <add key="ida:ClientKey" value="URwh/oiPay/b5jJWYHgkVdoE/x7gq3zZdtcl/cG14ss=" />
      <add key="Authority" value="<O365_domain>.onmicrosoft.com" />
      <add key="AppRedirectUrl" value="https://localhost:44322/Pages/Default.aspx" />
      <add key="ResourceUrl" value="http://<SAP_gateway_domain>.cloudapp.net/" />
    </appSettings>
    
  7. Save and close the web.config file.
    Tip
    Do not leave the web.config file open when you run the Visual Studio debugger (F5). The Office Developer Tools for Visual Studio change the ClientId value (not the ida:ClientID) every time you press F5. This requires you to respond to a prompt to reload the web.config file, if it is open, before debugging can execute.

Add a helper class to authenticate to Azure AD

  1. Right-click the ASP.NET project and use the Visual Studio item adding process to add a new class file to the project named AADAuthHelper.cs.
  2. Add the following using statements to the file.
    using Microsoft.IdentityModel.Clients.ActiveDirectory;
    using System.Configuration;
    using System.Web.UI;
    
    
  3. Change the access keyword from public to internal and add the static keyword to the class declaration.
    internal static class AADAuthHelper
    
  4. Add the following fields to the class. These fields store information that your ASP.NET application uses to get access tokens from AAD.
    private static readonly string _authority = ConfigurationManager.AppSettings["Authority"];
    private static readonly string _appRedirectUrl = ConfigurationManager.AppSettings["AppRedirectUrl"];
    private static readonly string _resourceUrl = ConfigurationManager.AppSettings["ResourceUrl"];     
            
    private static readonly ClientCredential _clientCredential = new ClientCredential(
                               ConfigurationManager.AppSettings["ida:ClientID"],
                               ConfigurationManager.AppSettings["ida:ClientKey"]);
    
    private static readonly AuthenticationContext _authenticationContext = 
                new AuthenticationContext("https://login.windows.net/common/" + 
                                          ConfigurationManager.AppSettings["Authority"]);
    
  5. Add the following property to the class. This property holds the URL to the Azure AD login screen.
    private static string AuthorizeUrl
    {
        get
        {
            return string.Format("https://login.windows.net/{0}/oauth2/authorize?response_type=code&redirect_uri={1}&client_id={2}&state={3}",
                _authority,
                _appRedirectUrl,
                _clientCredential.OwnerId,
                Guid.NewGuid().ToString());
        }
    }
    
    
  6. Add the following properties to the class. These cache the access and refresh tokens and check their validity.
    public static Tuple<string, DateTimeOffset> AccessToken
    {
        get {
    return HttpContext.Current.Session["AccessTokenWithExpireTime-" + _resourceUrl] 
           as Tuple<string, DateTimeOffset>;
        }
    
        set { HttpContext.Current.Session["AccessTokenWithExpireTime-" + _resourceUrl] = value; }
    }
    
    private static bool IsAccessTokenValid
    {
       get 
       { 
           return AccessToken != null &&
           !string.IsNullOrEmpty(AccessToken.Item1) &&
           AccessToken.Item2 > DateTimeOffset.UtcNow;
       }
    }
    
    private static string RefreshToken
    {
        get { return HttpContext.Current.Session["RefreshToken-" + _resourceUrl] as string; }
        set { HttpContext.Current.Session["RefreshToken-" + _resourceUrl] = value; }
    }
    
    private static bool IsRefreshTokenValid
    {
        get { return !string.IsNullOrEmpty(RefreshToken); }
    }
    
    
  7. Add the following methods to the class. These are used to check the validity of the authorization code and to obtain an access token from Azure AD by using either an authentication code or a refresh token.
    private static bool IsAuthorizationCodeNotNull(string authCode)
    {
        return !string.IsNullOrEmpty(authCode);
    }
    
    private static Tuple<Tuple<string,DateTimeOffset>,string> AcquireTokensUsingAuthCode(string authCode)
    {
        var authResult = _authenticationContext.AcquireTokenByAuthorizationCode(
                    authCode,
                    new Uri(_appRedirectUrl),
                    _clientCredential,
                    _resourceUrl);
    
        return new Tuple<Tuple<string, DateTimeOffset>, string>(
                    new Tuple<string, DateTimeOffset>(authResult.AccessToken, authResult.ExpiresOn), 
                    authResult.RefreshToken);
    }
    
    private static Tuple<string, DateTimeOffset> RenewAccessTokenUsingRefreshToken()
    {
        var authResult = _authenticationContext.AcquireTokenByRefreshToken(
                             RefreshToken,
                             _clientCredential.OwnerId,
                             _clientCredential,
                             _resourceUrl);
    
        return new Tuple<string, DateTimeOffset>(authResult.AccessToken, authResult.ExpiresOn);
    }
    
    
  8. Add the following method to the class. It is called from the ASP.NET code behind to obtain a valid access token before a call is made to get SAP data via SAP Gateway for Microsoft.
    internal static void EnsureValidAccessToken(Page page)
    {
        if (IsAccessTokenValid) 
        {
            return;
        }
        else if (IsRefreshTokenValid) 
        {
            AccessToken = RenewAccessTokenUsingRefreshToken();
            return;
        }
        else if (IsAuthorizationCodeNotNull(page.Request.QueryString["code"]))
        {
            Tuple<Tuple<string, DateTimeOffset>, string> tokens = null;
            try
            {
                tokens = AcquireTokensUsingAuthCode(page.Request.QueryString["code"]);
            }
            catch 
            {
                page.Response.Redirect(AuthorizeUrl);
            }
            AccessToken = tokens.Item1;
            RefreshToken = tokens.Item2;
            return;
        }
        else
        {
            page.Response.Redirect(AuthorizeUrl);
        }
    }
    
Tip
The AADAuthHelper class has only minimal error handling. For a robust, production-quality app for SharePoint, add more error handling as described in the MSDN topic Error Handling in OAuth 2.0.
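
As one small illustration of the extra handling that topic describes: when the user declines consent or the request is malformed, the Azure AD authorization endpoint redirects back with the standard OAuth 2.0 error and error_description query-string parameters. The following is a sketch only (the method name is hypothetical) of a check you could add to AADAuthHelper and call before attempting to redeem an authorization code, so the error can be surfaced instead of silently redirecting again:

    // Sketch: inspect the standard OAuth 2.0 error parameters on the redirect
    // back from Azure AD before trying to redeem the authorization code.
    private static bool TryGetAuthorizationError(Page page, out string errorDescription)
    {
        string error = page.Request.QueryString["error"];
        errorDescription = page.Request.QueryString["error_description"];
        return !string.IsNullOrEmpty(error);
    }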

Create data model classes

  1. Create one or more classes to model the data that your app gets from SAP. In the continuing example, there is just one data model class. Right-click the ASP.NET project and use the Visual Studio item adding process to add a new class file to the project named Automobile.cs.
  2. Add the following code to the body of the class:
    public string Price;
    public string Brand;
    public string Model;
    public int Year;
    public string Engine;
    public int MaxPower;
    public string BodyStyle;
    public string Transmission;
    

Add code behind to get data from SAP via the SAP Gateway for Microsoft

  1. Open the Default.aspx.cs file and add the following using statements.
    using System.Net;
    using Newtonsoft.Json.Linq;
    
  2. Add a const declaration to the Default class whose value is the base URL of the SAP OData endpoint that the app will be accessing. The following is an example:
    private const string SAP_ODATA_URL = @"https://<SAP_gateway_domain>.cloudapp.net:8081/perf/sap/opu/odata/sap/ZCAR_POC_SRV/";
    
  3. The Office Developer Tools for Visual Studio have added a Page_PreInit method and a Page_Load method. Comment out the code inside the Page_Load method and comment out the whole Page_PreInit method. This code is not used in this sample. (If your app for SharePoint is going to access SharePoint, you will restore this code. See Optionally, add SharePoint access to the ASP.NET application.)
  4. Add the following line to the top of the Page_Load method. This line eases debugging: your ASP.NET application communicates with SAP Gateway for Microsoft using SSL (HTTPS), but your "localhost:port" server is not configured to trust the certificate of SAP Gateway for Microsoft. Without this line of code, you would get an invalid certificate warning before Default.aspx opens. Some browsers allow you to click past this warning, but some will not let you open Default.aspx at all.
    ServicePointManager.ServerCertificateValidationCallback = (s, cert, chain, errors) => true;
    
    Important
    Delete this line when you are ready to deploy the ASP.NET application to staging. See Modify the app and stage it to Azure and Office 365.
  5. Add the following code to the Page_Load method. The string you pass to the GetSAPData method is an OData query.
    if (!IsPostBack)
    {
        GetSAPData("DataCollection?$top=3");
    }
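    // Sketch (assumption): the string is a standard OData query, so, assuming the
    // DataCollection entity set exposes the same properties as the Automobile model,
    // you could also filter and sort on the server, for example:
    // GetSAPData("DataCollection?$filter=Year gt 2010&$orderby=Year desc&$top=3");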
    
    
  6. Add the following method to the Default class. This method first ensures that the cache for the access token has a valid access token in it that has been obtained from Azure AD. It then creates an HTTP GET Request that includes the access token and sends it to the SAP OData endpoint. The result is returned as a JSON object that is converted to a .NET List object. Three properties of the items are used in an array that is bound to the DataListView.
    private void GetSAPData(string oDataQuery)
    {
        AADAuthHelper.EnsureValidAccessToken(this);
    
        using (WebClient client = new WebClient())
        {
            client.Headers[HttpRequestHeader.Accept] = "application/json";
            client.Headers[HttpRequestHeader.Authorization] = "Bearer " + AADAuthHelper.AccessToken.Item1;
            var jsonString = client.DownloadString(SAP_ODATA_URL + oDataQuery);
            var jsonValue = JObject.Parse(jsonString)["d"]["results"];
            var dataCol = jsonValue.ToObject<List<Automobile>>();
    
            var dataList = dataCol.Select((item) => {
                return item.Brand + " " + item.Model + " " + item.Price;
                }).ToArray();
    
            DataListView.DataSource = dataList;
            DataListView.DataBind();
        }
    }
    
    

Create the user interface

  1. Open the Default.aspx file and add the following markup to the form of the page:
    <div>
      <h3>Data from SAP via SAP Gateway for Microsoft</h3>
    
      <asp:ListView runat="server" ID="DataListView">
        <ItemTemplate>
          <tr runat="server">
            <td runat="server">
              <asp:Label ID="DataLabel" runat="server"
                Text="<%# Container.DataItem.ToString()%>" /><br />
            </td>
          </tr>
        </ItemTemplate>
      </asp:ListView>
    </div>
    
  2. Optionally, give the web page the “look ‘n’ feel” of a SharePoint page with the SharePoint Chrome Control and the host SharePoint website’s style sheet.

Test the app with F5 in Visual Studio

  1. Press F5 in Visual Studio.
  2. The first time that you use F5, you may be prompted to log in to the Developer Site that you are using. Use the site administrator credentials. In the continuing example, it is Bob@<O365_domain>.onmicrosoft.com.
  3. The first time that you use F5, you are prompted to grant permissions to the app. Click Trust It.
  4. After a brief delay while the access token is being obtained, the Default.aspx page opens. Verify that the SAP data appears.

Optionally, add SharePoint access to the ASP.NET application


Of course, your app for SharePoint doesn't have to expose only SAP data in a web page launched from SharePoint. It can also create, read, update, and delete (CRUD) SharePoint data. Your code behind can do this using either the SharePoint client object model (CSOM) or the REST APIs of SharePoint. The CSOM is deployed as a pair of assemblies that the Office Developer Tools for Visual Studio automatically included in the ASP.NET project and set to Copy Local in Visual Studio so that they are included in the ASP.NET application package. For information about using CSOM, start with How to: Complete basic operations using SharePoint 2013 client library code. For information about using the REST APIs, start with Understanding and Using the SharePoint 2013 REST Interface.

Regardless of whether you use CSOM or the REST APIs to access SharePoint, your ASP.NET application must get an access token to SharePoint, just as it does to SAP Gateway for Microsoft. See Understand authentication and authorization to SAP Gateway for Microsoft and SharePoint above. The procedure below provides some basic guidance about how to do this, but we recommend that you first read Creating apps for SharePoint that use low-trust authorization and its child articles (referenced earlier), along with OAuth tokens.

  1. Open the Default.aspx.cs file and uncomment the Page_PreInit method. Also uncomment the code that the Office Developer Tools for Visual Studio added to the Page_Load method.
  2. If your app for SharePoint is going to access SharePoint data, then you have to cache the SharePoint context token that is POSTed to the Default.aspx page when the app is launched in SharePoint. This is to ensure that the SharePoint context token is not lost when the browser is redirected following the Azure AD authentication. (You have several options for how to cache this context. See OAuth tokens.) The Office Developer Tools for Visual Studio add a SharePointContext.cs file to the ASP.NET project that does most of the work. To use the session cache, you simply add the following code inside the “if (!IsPostBack)” block before the code that calls out to SAP Gateway for Microsoft:
    if (HttpContext.Current.Session["SharePointContext"] == null) 
    {
         HttpContext.Current.Session["SharePointContext"]
            = SharePointContextProvider.Current.GetSharePointContext(Context);
    }
    
  3. The SharePointContext.cs file makes calls to another file that the Office Developer Tools for Visual Studio added to the project: TokenHelper.cs. This file provides most of the code needed to obtain and use access tokens for SharePoint. However, it does not provide any code for renewing an expired access token or an expired refresh token, nor does it contain any token caching code. For a production-quality app for SharePoint, you need to add such code; the caching logic in the preceding step is an example. Your code should also cache the access token and reuse it until it expires, and when the access token has expired, it should use the refresh token to get a new access token. We recommend that you read OAuth tokens. (A minimal sketch of the token-caching shape appears after this list.)
  4. Add the data calls to SharePoint using either CSOM or REST. The following example is a modification of CSOM code that Office Developer Tools for Visual Studio adds to the Page_Load method. In this example, the code has been moved to a separate method and it begins by retrieving the cached context token.
    private void GetSharePointTitle()
    {
        var spContext = HttpContext.Current.Session["SharePointContext"] as SharePointContext;
        using (var clientContext = spContext.CreateUserClientContextForSPHost())
        {
            clientContext.Load(clientContext.Web, web => web.Title);
            clientContext.ExecuteQuery();
            SharePointTitle.Text = "SharePoint web site title is: " + clientContext.Web.Title;
        }
    }
    
  5. Add UI elements to render the SharePoint data. The following shows the HTML control that is referenced in the preceding method:
    <h3>SharePoint title</h3><asp:Label ID="SharePointTitle" runat="server"></asp:Label><br />
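
As mentioned in step 3, TokenHelper.cs leaves token caching to you. The following is a minimal sketch, not production code, of the caching shape you might add for the SharePoint access token; it mirrors the session-based pattern used in AADAuthHelper and assumes you already have TokenHelper-based code that can produce a fresh token and its expiry:

    // Sketch only: cache the SharePoint access token with its expiry in session state
    // and reuse it until it expires (requires a reference to System.Web).
    private static Tuple<string, DateTime> CachedSharePointAccessToken
    {
        get { return HttpContext.Current.Session["SPAccessTokenWithExpiry"] as Tuple<string, DateTime>; }
        set { HttpContext.Current.Session["SPAccessTokenWithExpiry"] = value; }
    }

    private static bool IsCachedSharePointAccessTokenValid
    {
        get
        {
            return CachedSharePointAccessToken != null &&
                   !string.IsNullOrEmpty(CachedSharePointAccessToken.Item1) &&
                   CachedSharePointAccessToken.Item2 > DateTime.UtcNow;
        }
    }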
    
Note
While you are debugging the app for SharePoint, the Office Developer Tools for Visual Studio re-register it with Azure ACS each time you press F5 in Visual Studio. When you stage the app for SharePoint, you have to give it a long-term registration. See the section Modify the app and stage it to Azure and Office 365.

Modify the app and stage it to Azure and Office 365


When you have finished debugging the app for SharePoint using F5 in Visual Studio, you need to deploy the ASP.NET application to an actual Azure Web Site.

Create the Azure Web Site

  1. In the Microsoft Azure portal, open WEB SITES on the left navigation bar.
  2. Click NEW at the bottom of the page and on the NEW dialog select WEB SITE | QUICK CREATE.
  3. Enter a domain name and click CREATE WEB SITE. Make a copy of the new site’s URL. It will have the form my_domain.azurewebsites.net.

Modify the code and markup in the application

  1. In Visual Studio, remove the line ServicePointManager.ServerCertificateValidationCallback = (s, cert, chain, errors) => true; from the Default.aspx.cs file.
  2. Open the web.config file of the ASP.NET project and change the domain part of the value of the AppRedirectUrl key in the appSettings section to the domain of the Azure Web Site. For example, change <add key="AppRedirectUrl" value="https://localhost:44322/Pages/Default.aspx" /> to <add key="AppRedirectUrl" value="https://my_domain.azurewebsites.net/Pages/Default.aspx" />.
  3. Right-click the AppManifest.xml file in the app for SharePoint project and select View Code.
  4. In the StartPage value, replace the string ~remoteAppUrl with the full domain of the Azure Web Site including the protocol; for example https://my_domain.azurewebsites.net. The entire StartPage value should now be: https://my_domain.azurewebsites.net/Pages/Default.aspx. (Usually, the StartPage value is exactly the same as the value of the AppRedirectUrl key in the web.config file.)

Modify the AAD registration and register the app with ACS

  1. Log in to the Azure Management portal with your Azure administrator account.
  2. Choose Active Directory on the left side.
  3. Click on your directory.
  4. Choose APPLICATIONS (on the top navigation bar).
  5. Open the application you created. In the continuing example, it is ContosoAutomobileCollection.
  6. For each of the following values, change the “localhost:port” part of the value to the domain of your new Azure Web Site:
    • SIGN-ON URL
    • APP ID URI
    • REPLY URL

    For example, if the APP ID URI is https://localhost:44304/ContosoAutomobileCollection, change it to https://<my_domain>.azurewebsites.net/ContosoAutomobileCollection.

  7. Click SAVE at the bottom of the screen.
  8. Register the app with Azure ACS. This must be done even if the app does not access SharePoint and will not use tokens from ACS, because the same process also registers the app with the App Management Service of the Office 365 subscription, which is a requirement. You perform the registration on the AppRegNew.aspx page of any SharePoint website in the Office 365 subscription. For details, see Guidelines for registering apps for SharePoint 2013. As part of this process you will obtain a new client ID and client secret. Insert these values in the web.config for the ClientId (not ida:ClientID) and ClientSecret keys.
    Caution
    If for any reason you press F5 after making this change, the Office Developer Tools for Visual Studio will overwrite one or both of these values. For that reason, you should keep a record of the values obtained with AppRegNew.aspx and always verify that the values in the web.config are correct just before you publish the ASP.NET application.

Publish the ASP.NET application to Azure and install the app to SharePoint

  1. There are several ways to publish an ASP.NET application to an Azure Web Site. For more information, see How to Deploy an Azure Web Site.
  2. In Visual Studio, right-click the SharePoint app project and select Package. On the Publish your app page that opens, click Package the app. File Explorer opens to the folder with the app package.
  3. Login to Office 365 as a global administrator, and navigate to the organization app catalog site collection. (If there isn’t one, create it. See Use the App Catalog to make custom business apps available for your SharePoint Online environment.)
  4. Upload the app package to the app catalog.
  5. Navigate to the Site Contents page of any website in the subscription and click add an app.
  6. On the Your Apps page, scroll to the Apps you can add section and click the icon for your app.
  7. After the app has installed, click its icon on the Site Contents page to launch the app.

For more information about installing apps for SharePoint, see Deploying and installing apps for SharePoint: methods and options.

Deploying the app to production


When you have finished all testing you can deploy the app in production. This may require some changes.

  1. If the production domain of the ASP.NET application is different from the staging domain, you will have to change the AppRedirectUrl value in the web.config and the StartPage value in the AppManifest.xml file, and then repackage the app for SharePoint. See the procedure Modify the code and markup in the application above.
  2. The change in domain also requires that you edit the app's registration with AAD. See the procedure Modify the AAD registration and register the app with ACS above.
  3. The change in domain also requires that you re-register the app with ACS (and the subscription’s App Management Service) as described in the same procedure. (There is no way to edit an app’s registration with ACS.) However, it is not necessary to generate a new client ID or client secret on the AppRegNew.aspx page. You can copy the original values from the ClientId (not ida:ClientID) and ClientSecret keys of the web.config into the AppRegNew form. If you do generate new ones, be sure to copy the new values to the keys in web.config.

A Look At: DevOps and DNS – What Every Developer Should Know

Over the years I have had the chance to work alongside many really smart, switched-on people in the development community, and I've learnt many intermediate and advanced programming skills from them. Yet when it comes to understanding the very basics of how the internet functions using DNS, most of these very same experienced developers haven't got a clue.

I wrote this post to hopefully help pay back some of the awesome karma they have earned helping me over the years, by teaching them something in return. Let's learn about DNS.

DNS is a huge part of the inner workings of the internet. Developers spend a considerable number of man hours a year ensuring the sites they build are fast and respond well to user interaction by setting up expensive CDNs, recompressing images, minifying script files and much more – but what a lot of us don't understand is that DNS server configuration can make a big difference to the speed of your site. Hopefully, by the end of this post you'll feel empowered to get the most out of this part of your website's configuration.

What I will cover in this post:

Why does DNS matter to you?

Well it’s simple – if you are a developer it matters to you because:

  • You own a domain name, and up until now your webhost or registrar has taken care of your DNS for you – but you need to know what's going on in case something bad happens…
  • Maybe the webhost you have allows you to manage your own DNS using a web interface, but you haven't a clue what you are doing.
  • The DNS that your webhost or ISP offers you is probably not the fastest – if your website grows over time, you probably want to set up your own DNS or manage it through a dedicated service such as DNSMadeEasy, ZoneEdit or DynDNS.

First up: How the internet works (DNS)

If you already know how this works, feel free to skip ahead.


In really simple terms, when you enter a URL and hit enter, apart from magical unicorns rendering the requested page in your browser window, the interwebs works kinda like this:

  • You want to visit a domain name, so your PC first checks its internal DNS cache to see if it has looked it up recently – if so, it uses this record.
  • Your PC then asks your DNS server (probably configured by your router or ISP when you first started your PC) for the IP address of the server hosting the domain name you want to visit.
  • Your ISP’s DNS server looks up the root DNS servers for the world to find out who takes care of the DNS configuration for the domain you want to visit.
  • Your ISP's DNS server then asks this authoritative DNS server for the IP information of the domain name you want, fetches it, caches it, and then returns it to your PC.
  • Your browser connects to this IP address and asks for a web page.

There are a number of different scenarios that play a role in special circumstances with the above but I’m not really going to cover everything in this post.

What DNS does do:

  • Converts hostnames to IP addresses.
  • Stores mail delivery information for a domain.
  • Stores miscellaneous information against a domain name (TXT records).

What DNS doesn’t/cannot do

  • Redirect users to a different server/site.
  • Configure which port the client is connecting to (not entirely true; SRV records are used for protocol/port mappings for services).

Tools for the Job

One of the coolest things about the tools you'll need for this blog post is that, independent of which operating system you are using, you almost certainly have everything you need to query and test the DNS configuration of your website installed right now, without you even knowing it.

The Swiss Army Knife of DNS inspection is the command line tool NSLOOKUP. This is installed by default in nearly every OS you’ll ever need it on.

NSLOOKUP on Windows

NSLOOKUP on Unix/Linux/Mac OSX

Another cool thing: the usage is the same on most platforms as well.

To run NSLOOKUP simply open a terminal/command prompt and type

nslookup


The first thing you'll notice is that, upon launch, NSLOOKUP tells you the current DNS server that it will use for its lookups.

By default NSLOOKUP will use your current machine's DNS settings for its DNS lookups. This can sometimes give you different results from the rest of the world, as the internal DNS at your place of work or ISP may return different results so that it can route, say, your office mail to the internal mail server IP rather than the external internet/DMZ IP address.

Let's change this to use Google's global DNS server to get a better global view (what others see when they surf the web outside my network) of our DNS queries, by typing:

server 8.8.8.8

Now if I query this blog's domain name "diaryofaninja.com." (ensure you place the additional period on the end of this query to avoid any internal DNS suffixes being added), I should get back the A record for my domain. (A records are the default query type used by NSLOOKUP – I discuss DNS record types further below in this post.)


An overview of common DNS record types

Below is a simple overview of all the common types of DNS records and some example scenarios.

All records usually share the following common properties:

Value – this is usually the contents of the record. If it is an A record, this is the IP address for that A record.

TTL – this is the "Time To Live" in seconds for a DNS record and basically means that DNS clients or servers accessing the requested record should not cache it for any longer than this value. If this value is set to 3600, the returned record's value may be cached for an hour. (These values are usually the reason that IT people talk about DNS changes taking "24-48 hours", as they are usually set quite high on hostnames that are quite static, so that they offer the best performance by being kept in cache.)
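
To make that arithmetic concrete, here is a tiny, hypothetical sketch of the check a caching client performs before reusing a stored answer:

    // Sketch: an answer fetched with a TTL of ttlSeconds is stale once that many
    // seconds have passed, and must then be looked up again.
    static bool IsDnsAnswerFresh(DateTimeOffset fetchedAtUtc, int ttlSeconds)
    {
        return DateTimeOffset.UtcNow < fetchedAtUtc.AddSeconds(ttlSeconds);
    }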

SOA (Start of Authority) records

SOA records (start of authority records) are the root of your domain's registration. SOA records are created by your domain name registrar in the parent domain's DNS servers (in the case of a .com domain, the SOA record is created in the DNS servers for the .com root domain). In an SOA record the hostnames or IP addresses of your domain's DNS servers are stored. These tell the internet's root DNS servers (mother ship DNS servers) where to ask for the rest of your domain's DNS configuration (such as A, MX and TXT records). When a client (a web browser, a mail server, an FTP client etc.) wants to connect to part of your website, it asks the locally configured DNS server for the record – the server in turn looks for the SOA records for your domain so it knows which DNS server to ask about it.

Consider these records as the source of “which DNS server stores all the information about the website I want to look up”.

Hostname (A and CNAME) records

A records store information about a hostname record for your domain name. These list the IP address that a client should talk to when using a certain hostname.

If you entered an address such as http://mywebsite.examplecompany.com into your web browser, this would refer to the A record "mywebsite" on the domain "examplecompany.com".

If you have multiple A records with the same hostname, clients will receive a list of all the records. The order of this list will rotate each time you query the DNS server – this is called round-robin DNS and it's a simple way to spread load across multiple servers.

AAAA records are the same as A records, only they store the 128-bit IPv6 address of a server instead of the IPv4 address – as the world shifts to using IPv6 these records will gain more relevance, but if your webhost supports IPv6 it's worth setting these records up now, so that any visitors using IPv6 can access your website.

A CNAME record (canonical name) is basically an alias for an A record. This tells whoever is asking that the DNS information for the requested hostname is stored in another record somewhere else on the internet. This other record might not even be on the same domain name or on the same DNS server. CNAMEs are very powerful as they allow you to simplify your domain's DNS records by centralising the information somewhere else. ISPs and webhosts commonly use CNAMEs to centralise the DNS configuration storage for things like mail or web servers, by allowing you to keep all the configuration details on a parent domain name.

It is important to note that root records for a domain name (i.e. the empty A record for mydomain.com) cannot be a CNAME. The simple, hard-and-fast reason is that CNAMEs cannot live on the same node in a DNS forest as any other type of record – the very nature of a CNAME record defines that all configuration for that node is stored somewhere else, and given that you store other information at the root of your domain besides your A record (MX records for mail etc.), this would break every other record's functionality. This is mentioned specifically in the RFC for DNS, section 3.6.2.

An example of CNAME usage: most webhosting companies' web servers have a hostname such as web0234.mywebhost.com.

When setting up your website, your webhost might, for instance, make the http://www.yourwebsite.com record for your website a CNAME with the value web0234.mywebhost.com, so that when trying to access http://www.yourwebsite.com, DNS clients look up the IP address for web0234.mywebhost.com. This makes the webhost's life easier if the IP address for this web server changes, as they only have to update a single DNS record instead of updating all their customers' DNS records.

To reiterate this to make it crystal clear:
CNAMEs are not a redirection. They are a reference pointer for a hostname. All they tell DNS clients is that the configuration information for the hostname being queried is the same as can be found by querying the other hostname.

Illustration – Visiting a website

In the case that you want to visit http://www.google.com your computer does the following:

  1. Using the local machine’s DNS client your operating system talks to the locally configured DNS server for your local network/ISP’s network.
  2. This DNS server in turn looks up the DNS server for google.com by first looking up the SOA record for google.com and then connecting to the DNS server listed.
  3. Your local DNS server then asks the DNS server for google.com for the A record listed for www – the google.com DNS server will return an IP address for http://www.google.com. Your ISP or local network’s DNS server, along with returning it to you, will then cache this record for as long as the TTL (time to live) property of the record says.
  4. Your browser then connects to this returned IP address listed on port 80 and asks for the web page.

All of the above happens in milliseconds – but you can understand that if the google.com DNS server is slow in responding this negatively affects your browsing experience.

A records and CNAME records have a TTL (Time To Live) property to indicate how long they can be cached for.

Mail (MX)

MX records are the internet's way of telling mail where to be delivered. They list the hostname or IP address of the mail server that handles mail for a given domain name. If a mail server is looking to deliver mail to "examplecompany.com", it will look up the MX record for this domain.

MX records have both a TTL (Time To Live) and a Priority (a weighting to give the order in which they should be looked up).

Illustration – Sending an e-mail to a friend

In the case that you send an email to your friend at myfriend@otherexamplecompany.com your local SMTP mail server (usually at your ISP) does the following:

  1. Your mail server connects to its local network/ISP’s DNS server and asks for the MX record for otherexamplecompany.com.
  2. Your local DNS server or ISP’s DNS server looks up the SOA record for otherexamplecompany.com and then connects to the DNS server listed.
  3. It asks for the MX records for this domain and is returned a list of hostnames.
  4. It grabs the first hostname from the list (ordered ascending by Priority), runs a second query for the IP address of this mail server, and returns this IP address to your mail server.
  5. Your mail server then connects to this IP address on the SMTP TCP port 25 and delivers your mail.

Text Records (TXT)

TXT records are a powerful addition to the DNS standard that allow the storage of miscellaneous information for a hostname. Many web developers, system admins and the like use TXT records for the storage of information such as SPF records and DKIM public keys.

TXT records have a TTL (Time To Live) property to indicate how long they can be cached for.
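
For example, an SPF policy is simply the text value of a TXT record on the domain. A minimal, illustrative value that authorises only the domain's MX hosts to send its mail (and tells receivers to reject everything else) looks like this:

v=spf1 mx -all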

Name Server Records (NS)

Name Server Records are placed in your domain’s DNS when you wish to store the configuration of part of your domain’s DNS on a separate DNS server. This can be very handy if you want to give control of a subdomain to another person/entity.

For example: my site is http://www.widgetsareus.com and I manage all of the DNS for this domain, but I would like support.widgetsareus.com and any child subdomains of it to be managed by the company we outsource all of our customer support to – therefore I have set up an NS record for support.widgetsareus.com to point at our support partner's DNS servers.

Setting up a domain from scratch

If you are setting up a domain you’ve just purchased from scratch you’ll need to do the following:

Set up your website (A records)

  1. Set up a DNS server to store the configuration for yourdomain.com.
    This might be at your webhost, or might be a third party service such as DNSMadeEasy, ZoneEdit or DynDNS.
  2. Set the nameserver (SOA) records for your domain name to the above DNS server's IP address or hostname (at your domain registrar).
  3. Create a new root record that points at your webserver's IP address (this is simply an A record with an empty hostname) in your domain name's DNS forest.
  4. Create a new www A record that points at your webserver's IP address in your domain's DNS server.
  5. Set up your webserver's website to listen for the host-header of your domain name (IIS calls this a "binding").
  6. Test your DNS as below.
  7. Try to access your site in a web browser.

Testing your website’s A record

In a command prompt/terminal type NSLOOKUP

Enter “yourdomainname.com.” (including the extra period on the end) and hit enter

Check that the returned record value/IP address is that of your web server.


Remember to do the same for "www.yourdomainname.com." if you also use www. in your domain name.
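
If you'd rather script this check than run NSLOOKUP by hand, the .NET base class library can resolve the A/AAAA records using your machine's configured resolver (note that, unlike NSLOOKUP, it cannot be pointed at a specific DNS server such as 8.8.8.8). A minimal console sketch:

    using System;
    using System.Net;

    class CheckARecord
    {
        static void Main()
        {
            // Resolve the A/AAAA records for the hostname via the local resolver
            // and print them so they can be compared against your web server's IP.
            IPAddress[] addresses = Dns.GetHostAddresses("yourdomainname.com");
            foreach (IPAddress address in addresses)
            {
                Console.WriteLine(address);
            }
        }
    }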

Set up your website's mail (MX record)

  1. Set up a DNS server to store the configuration for yourdomain.com (follow steps 1 and 2 from the website section above if you haven't already).
  2. Create a new MX record that points at your mail server's IP address or hostname.
  3. Set up your mail server to listen to receive mail for yourdomain.com.
  4. Test that all the above is set up correctly using nslookup as per below.
  5. Try to send and receive email to and from your domain name.
  6. Set up SPF records, to verify your mail server's ability to send mail on behalf of your domain name.

Testing your website’s MX record

In a command prompt/terminal type NSLOOKUP

Enter "set type=mx" and hit enter. This sets the query type to MX records.

Enter “yourdomainname.com.” (including the extra period on the end) and hit enter

Check that the returned record values/IP addresses are those of your mail server.


Investigating Common Problems

How do I check what DNS server is authoritative for my domain name?

You’ve set up your websites DNS, everything is fine; then one day, everyone visiting your site is directed to a site that isn’t yours!

To check which DNS server is authoritative for your domain name, first open a command prompt or terminal.

Type “NSLOOKUP” and hit enter

Type "set type=ns" and hit enter. This sets the query type to NS (NameServer) records.

Type “yourdomainname.com.” and hit enter (make sure you put the extra dot on the end.)

Confirm that the nameservers returned are yours.


How do I check what IP address my site is currently pointing at?

In a command prompt/terminal launch NSLOOKUP

Enter “yourdomainname.com.” (including the extra period on the end) and hit enter

Check that the returned record value/IP address is that of your web server.


Remember to do the same for "www.yourdomainname.com." if you also use www. in your domain name.

What is split DNS?

Split DNS is when you run a separate DNS forest for a domain name both on your external DNS servers (for everyone else to see) and also internally for staff or local users to see.

This allows you to do things like:

  • Ensure local users talk to your mail server (or any other internal server) using the internal IP address, and internet users talk to your mail server’s external DMZ IP address.
  • Block access to certain sites by giving incorrect or different DNS results for these sites' domain names. This is often how net nanny-style filtering software works.

For some users my site seems to be served from a different address – how do I check "what the world sees" vs. "what I see"?

Many things can occur that result in some people seeing different DNS results to others:

  • Your ISP's or company's DNS server may have an older cached record than the current live record.
  • Your local computer may be caching the DNS record you are requesting
  • Your local DNS server may be fetching the records for your domain from a different authoritative DNS server than the rest of the world.

How do you investigate these things?

The easiest way to investigate these things is to query an external DNS server that you know is good for the records you want, to get a better idea of how the rest of the world sees things.

Really good servers that are easy to remember are the ones owned by Google. The primary and secondary DNS servers for Google's Public DNS system are "8.8.8.8" and "8.8.4.4" respectively.

You can use whatever DNS servers you think are more likely to see the correct values.

To do this, open a command prompt/console.

Type “NSLOOKUP” and hit enter

Type “server 8.8.8.8” and hit enter. This sets the DNS server we will query to the Google Public DNS server’s address.

Type “yourdomainname.com.” and check the resulting record values.


How to connect a SharePoint 2013 Document Library to Outlook 2013

 

One of the key methods of gaining user adoption of SharePoint is ensuring and promoting the integration it has with Microsoft Office to information workers. After all, information workers generally use Outlook as their 'mother-ship'. Getting those users to switch immediately to SharePoint, or asking them to visit a document library in a SharePoint site they need to access, can take time, especially since it means opening a browser, navigating to the site, and covering their beloved Outlook client in the process.

 

The following describes how to connect a typical SharePoint 2013 document library to Outlook 2013 client.

  1. Access your SharePoint site; go into the relevant Documents library. In the below example, I clicked on the default Team Site Documents repository link in the Quick Launch bar, which has around 140 documents.

 

  2. With the document library displayed, go to the Library tab on the ribbon bar; the option we are looking for is within the Library options available there.

  3. When the Library ribbon is displayed, click the Connect To Outlook button in the Connect & Export section. Note: if Connect To Outlook is greyed out, ensure that Outlook 2013 is fully operational. I've come across examples where Outlook is installed but no email account has been enabled in Outlook – if that's the case, this button will be greyed out.

  4. Once the Connect To Outlook button is clicked, you may receive a warning message asking you to confirm that you want to allow SharePoint to connect with Outlook – click ALLOW.

  5. Outlook 2013 will be displayed. A dialog will then appear asking you to confirm that you wish to connect the document library to Outlook. The dialog shows the site name and document library title, along with the URL of the document library being connected. Below, and to the right, is a button that shows more information about the connection (the ADVANCED button). The following screenshot shows the information displayed if the Advanced button is clicked. There is not much you can do on that screen; for now, click YES to confirm the connection.

Here's an example of the associated Advanced dialog. The most interesting aspect is the Permissions line. For a document library connection to Outlook, this will be set to READ. This is by design, and for good reason: things like classified metadata and other document library settings such as check-in/check-out are not exposed as writeable from Outlook. However, this does not prevent you from modifying a file in the resulting list. If the document needs to be updated, simply double-click the document, which will open it in the local application; click the edit offline option, make your changes, click save, click close, and a prompt should appear allowing you to update the copy on the server.

Once completed, the documents will be listed in Outlook. The following screenshot shows the result of a Shared Documents library from a SharePoint 2013 site connected to Outlook 2013. Note the following features, which in my view are great for user adoption, particularly for those whose centre of the universe happens to be the Outlook client; when demonstrating this, try to explain the following features without going into jargon:

  • That users are able to switch from connected library to connected library using the navigation options, each connected library shows the number of unread items (un-previewed or un-opened documents).
  • That each document (if the previewer is available) when clicked on will display a preview of the document; meaning that you can read a Word Document, for example, without having to open it in the client application.
  • That information concerning the state of the document is displayed, showing the last modifier, whether the document is checked out, when it was last modified and the document size.

 

Note: there is a problem I have noticed in the preview section when highlighting any file whilst working with SharePoint 2013 and Office 2013 on a sandbox; the message reads:

‘This file cannot be previewed because of an error with the following previewer: Microsoft xxxxx previewer – To open this file in its own program, double-click it;’.

There is an article that seems to describe the issue (though it does not directly mention when it's likely to occur), and it is known to Microsoft. A description of the alternatives, while a fix is awaited, is provided here: http://support.microsoft.com/kb/983097. I will further investigate this and update this article.

How To : Develop a Single-page Application in SharePoint


A single-page application in SharePoint

This app will be a single-page app, heavily JavaScript-based, taking advantage of AJAX and web services. As mentioned earlier, we're going to base this app on the TodoMVC project. More specifically, we're going to use the Knockout version of the TodoMVC app. So download the Knockout TodoMVC app from GitHub here and incorporate it into your project as follows:

  • Copy the js and bower_components folders into the Scripts folder. To do this quickly:
    1. Copy the folders in Windows Explorer
    2. In Visual Studio, enable Project -> Show All Files
    3. The folders will now appear in Solution Explorer. Right click bower_components, and select Include In Project. Do the same for the js folder.
  • Copy the contents of index.html into Views/Index.cshtml (discard whatever is there).
  • Open Views/Index.cshtml and edit the script and CSS references to point to the Scripts folder (e.g. find any references to bower_components and change them to Scripts/bower_components; do the same for js)
  • Open the Shared/_Layout.cshtml file and replace its contents with a single call to @RenderBody():

Your solution should now look like the following:

Hit F5 and you should now have a running TodoMVC app!

Data Storage

In previous articles I have described how to use an Azure database for storage of your data. In a provider-hosted app, you can just as easily store your data in your own database. However, performance issues aside, it's really handy to store data in the customer's SharePoint system itself where possible. This has a number of advantages:

  • Security – you don’t need to store customer data in your own data center
  • App interoperability – if you have multiple apps talking to SharePoint, the architecture will be greatly simplified by using SharePoint as the central data store
  • Transparency – customers can see their data transparently in lists and understand better what data is stored and how it’s used
  • Tight SharePoint integration – this is often useful since you can later easily take advantage of features such as workflows and event receivers.

In this article therefore I’ll use SharePoint lists for data storage. Let’s create a list to use for storage of our Todo items. Firstly, right-click on the TodoApp project, and select Add -> New Item. Choose List, and call it TodoList.

Click Add. On the next dialog, leave the list template as Default and click Finish.

Now you’ll be presented with a list designer form. By default it already has a Title column, so just add another column called Completed which should be a Boolean:

In my case, Completed was available as a site column. However, it was hidden by default. To remedy this, open up Schema.xml and ensure the Completed field has Hidden=FALSE:

Open up the TodoListInstance designer again and click the Views tab, and add the Completed column to the default view:

Now we’re going to add the server-side code for retrieving the list items. Firstly add a class called TodoItemViewModel and give it the relevant properties:

    public class TodoItemViewModel
    {
        public string Title { get; set; }
        public bool Completed { get; set; }
    }
    

Next, open up the HomeController class and change the Index method to load the contents of the TodoList:

    [SharePointContextFilter]
    public ActionResult Index()
    {
        List<TodoItemViewModel> result = new List<TodoItemViewModel>();
        var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
        using (var clientContext = spContext.CreateAppOnlyClientContextForSPAppWeb())
        {
            if(clientContext != null)
            {
                //Load list items
                List list = clientContext.Web.Lists.GetByTitle("TodoList");
                ListItemCollection items = list.GetItems(CamlQuery.CreateAllItemsQuery());
                clientContext.Load(items);
                clientContext.ExecuteQuery();
                //Create the Todo item view models
                result = items.Select(li => new TodoItemViewModel()
                {
                    Title = (string)li["Title"],
                    Completed = (bool)li["Completed"]
                }).ToList();
            }
        }
        return View(result); //Pass the items into the view
    }
    

You may notice we’re using CreateAppOnlyClientContextForSPAppWeb. This means we are accessing the list under the app identity, not the user identity, so that we don’t need to require the users themselves to have any special permissions. This should be an important consideration throughout your app, as different resources may only be available to certain users, and this will form a crucial part of your security model.
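
By contrast, if a particular call should run under the signed-in user’s identity rather than the app identity, the SharePointContext helper generated by the Visual Studio app template also exposes user-context factory methods. The following is a minimal sketch, not part of the original walkthrough; the CreateUserClientContextForSPAppWeb method name is assumed from the standard template helper:

    //Sketch: the same query as above, but executed under the user identity.
    //Assumes the standard SharePointContext helper from the VS app-for-SharePoint template.
    var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
    using (var clientContext = spContext.CreateUserClientContextForSPAppWeb())
    {
        if (clientContext != null)
        {
            List list = clientContext.Web.Lists.GetByTitle("TodoList");
            ListItemCollection items = list.GetItems(CamlQuery.CreateAllItemsQuery());
            clientContext.Load(items);
            clientContext.ExecuteQuery(); //fails if the signed-in user lacks permission on the list
        }
    }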

Now, since we want to access the list under the app identity, we need to enable that: open AppManifest.xml under TodoApp, click the Permissions tab and check the option:

Client Side Code with Knockout.js

Now we move to the client side. We’re going to do something a bit scary and rewrite the app.js file, which contains all of the Todo app JavaScript we imported. We’re going to rewrite it so that firstly, we understand it, and secondly, we can integrate it with our AJAX methods more easily. You can always go back to study the original later on as it’ll be more advanced and refined.

If you are new to the Knockout way of data-binding in JavaScript, it might be a good time to visit the knockout.js website to familiarise yourself with it. It’s very powerful yet pretty easy to pick up, and it makes writing JavaScript applications incredibly fast and easy.

So open app.js, delete what’s already there, and let’s start by creating a simple ViewModel for each Todo item:

    window.TodoApp = window.TodoApp || {};
    
    window.TodoApp.Todo = function (id, title, completed) {
        var me = this;
        this.id = ko.observable(id);
        this.title = ko.observable(title);
        this.completed = ko.observable(completed || false);
        this.editing = ko.observable(false);

        // edit an item
        this.startEdit = function () {
            me.editing(true);
        };

        // stop editing an item
        this.stopEdit = function (data, event) {
            if (event.keyCode == 13) {
                me.editing(false);
            }
            return true;
        };
    };

    

Next up we add the main ViewModel:

    window.TodoApp.ViewModel = function (spHostUrl) {
        var me = this;
        this.todos = ko.observableArray(); //List of todos
        this.current = ko.observable(); // store the new todo value being entered
        this.showMode = ko.observable('all'); //Current display mode

        //List which is currently displayed
        this.filteredTodos = ko.computed(function () {
            switch (me.showMode()) {
            case 'active':
                return me.todos().filter(function (todo) {
                    return !todo.completed();
                });
            case 'completed':
                return me.todos().filter(function (todo) {
                    return todo.completed();
                });
            default:
                return me.todos();
            }
        });        

        this.addTodo = function (todo) {
            me.todos.push(todo);
        }

        // add a new todo, when enter key is pressed
        this.add = function (data, event) {
            if (event.keyCode == 13) {
                var current = me.current().trim();
                if (current) {        
                    var todo = new window.TodoApp.Todo(0, current);
                    me.addTodo(todo);
                    me.current('');
                }
            }
            return true;
        };

        // remove a single todo
        this.remove = function (todo) {
            me.todos.remove(todo);
        };

        // remove all completed todos
        this.removeCompleted = function () {
            var todos = me.todos().slice(0);
            for (var i = 0; i < todos.length; i++) {
                if (todos[i].completed()) {
                    me.remove(todos[i]);
                }
            }
        };

        // count of all completed todos
        this.completedCount = ko.computed(function () {
            return me.todos().filter(function (todo) {
                return todo.completed();
            }).length;
        });

        // count of todos that are not complete
        this.remainingCount = ko.computed(function () {
            return me.todos().length - me.completedCount();
        });

        // writeable computed observable to handle marking all complete/incomplete
        this.allCompleted = ko.computed({
            //always return true/false based on the done flag of all todos
            read: function () {
                return !me.remainingCount();
            },
            // set all todos to the written value (true/false)
            write: function (newValue) {
                me.todos().forEach(function (todo) {
                    // set even if value is the same, as subscribers are not notified in that case
                    todo.completed(newValue);
                });
            }
        });
    };

Take a read through the above JavaScript – I have tried to simplify it from the original TodoMVC Knockout code so it should be a little more understandable if you’re new to Knockout.

Next we’re going to change the HTML to match our updated JavaScript. Open index.cshtml and replace the entire contents of the <body> tag with the following:

    <section id="todoapp">
        <header id="header">
            <h1>todos</h1>
            <input id="new-todo" data-bind="value: current, valueUpdate: 'afterkeydown', event: { keypress: add }" placeholder="What needs to be done?" autofocus>
        </header>
        <section id="main" data-bind="visible: todos().length">
            <input id="toggle-all" data-bind="checked: allCompleted" type="checkbox">
            <label for="toggle-all">Mark all as complete</label>
            <ul id="todo-list" data-bind="foreach: filteredTodos">
                <li data-bind="css: { completed: completed, editing: editing }">
                    <div class="view">
                        <input class="toggle" data-bind="checked: completed" type="checkbox">
                        <label data-bind="text: title, event: { dblclick: startEdit }"></label>
                        <button class="destroy" data-bind="click: $root.remove"></button>
                    </div>
                    <input class="edit" data-bind="value: title, valueUpdate: 'afterkeydown', event: { keypress: stopEdit }">
                </li>
            </ul>
        </section>
        <footer id="footer" data-bind="visible: completedCount() || remainingCount()">
            <span id="todo-count">
                <strong data-bind="text: remainingCount">0</strong> item(s) left
            </span>
            <ul id="filters">
                <li>
                    <a data-bind="css: { selected: showMode() == 'all' }, click: function(){showMode('all');}">All</a>
                </li>
                <li>
                    <a data-bind="css: { selected: showMode() == 'active' }, click: function(){showMode('active');}">Active</a>
                </li>
                <li>
                    <a data-bind="css: { selected: showMode() == 'completed' }, click: function(){showMode('completed');}">Completed</a>
                </li>
            </ul>
            <button id="clear-completed" data-bind="visible: completedCount, click: removeCompleted">
                Clear completed (<span data-bind="text: completedCount"></span>)
            </button>
        </footer>
    </section>
    <footer id="info">
        <p>Double-click to edit a todo</p>
        <p>Written by <a href="https://github.com/ashish01/knockoutjs-todos">Ashish Sharma</a> and <a href="http://knockmeout.net">Ryan Niemeyer</a></p>
        <p>Part of <a href="http://todomvc.com">TodoMVC</a></p>
    </footer>
    <script src="Scripts/bower_components/todomvc-common/base.js"></script>
    <script src="Scripts/bower_components/knockout.js/knockout.debug.js"></script>
    <script src="Scripts/jquery-1.10.2.js"></script>
    <script src="Scripts/js/app.js"></script>
        
    <script>
        var viewModel = new window.TodoApp.ViewModel();
        ko.applyBindings(viewModel);
    </script>

Again, this is slightly simplified from the original TodoMVC code.

You should now be able to run the app as before. In the next step we’ll add the client-side code required to load the data back from the server. Update the final script block in index.cshtml to the following:

        
    <script>
        var model = @(Html.Raw(Json.Encode(Model)));
        var viewModel = new window.TodoApp.ViewModel();

        for(var i=0; i<model.length; i++) {
            viewModel.addTodo(new window.TodoApp.Todo(model[i].Id, model[i].Title, model[i].Completed));
        }

        ko.applyBindings(viewModel);
    </script>

    

Now we’re ready to give the app a quick test. Hit F5 and wait for your app to open. Since we don’t yet have a way of saving data, we’re going to add some directly into the SharePoint list. Just browse to the list using the URL /[YourSPsite]/TodoApp/Lists/TodoList/ and you should see the standard list UI:

Add a couple of test items and refresh your app:

Saving Data via AJAX

The final piece of the puzzle is to react to user input and save the data back to the server. When an item is deleted, we’ll need to know the Id of the SharePoint ListItem which should be deleted. Therefore, we’ll add an Id property to the TodoItemViewModel class:

    public class TodoItemViewModel
    {
        public int Id { get; set; } //New!
        public string Title { get; set; }
        public bool Completed { get; set; }
    }
    

Don’t forget to set the Id property when we’re loading the items in the HomeController Index method:
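A sketch of that change, based on the GetItems method shown later in this article (only the Select projection changes):

    //Create the Todo item view models, now including the SharePoint list item Id
    result = items.ToArray().Select(li => new TodoItemViewModel()
    {
        Id = li.Id,
        Title = (string)li["Title"],
        Completed = (bool)li["Completed"]
    }).ToList();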

Now we’ll create web service methods for adding, deleting and updating an item. Add these methods to HomeController:

    [HttpPost]
    public JsonResult AddItem(string title)
    {
        int newItemId = 0;
        var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
        using (var clientContext = spContext.CreateAppOnlyClientContextForSPAppWeb())
        {
            if (clientContext != null)
            {
                List list = clientContext.Web.Lists.GetByTitle("TodoList");
                ListItem listItem = list.AddItem(new ListItemCreationInformation());
                listItem["Title"] = title;
                listItem.Update();
                clientContext.Load(listItem, li => li.Id);
                clientContext.ExecuteQuery();
                newItemId = listItem.Id;
            }
        }
        return Json(newItemId, JsonRequestBehavior.AllowGet);
    }

    [HttpPost]
    public JsonResult RemoveItem(int id)
    {
        var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
        using (var clientContext = spContext.CreateAppOnlyClientContextForSPAppWeb())
        {
            if (clientContext != null)
            {
                List list = clientContext.Web.Lists.GetByTitle("TodoList");
                ListItem item = list.GetItemById(id);
                item.DeleteObject();
                clientContext.ExecuteQuery();
            }
        }
        return new JsonResult();
    }

    [HttpPost]
    public JsonResult UpdateItem(int id, string title, bool completed)
    {
        var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
        using (var clientContext = spContext.CreateAppOnlyClientContextForSPAppWeb())
        {
            if (clientContext != null)
            {
                List list = clientContext.Web.Lists.GetByTitle("TodoList");
                ListItem item = list.GetItemById(id);
                item["Title"] = title;
                item["Completed"] = completed;
                item.Update();
                clientContext.ExecuteQuery();
            }
        }
        return new JsonResult();
    }
    

The above methods use fairly simple CSOM code to add, remove and update a list item. You could accomplish the same goal by using the JavaScript version of the CSOM, which would have the added benefit of calling SharePoint directly without going via the app server. However, I haven’t done this because I want to illustrate how to make AJAX calls between the app client and server. Click here to read more about the JavaScript CSOM library.

Now we’ll add the JavaScript code to call these server methods when an add, update or delete occurs. Earlier you may have noticed that we included a reference to the jQuery script; this is because we’re going to use jQuery’s AJAX helper methods.

Now all we need to do is add code to react to the add, remove and update events, and call the server methods. The first step is to open HomeController.cs and add the SPHostUrl, which is a crucial value for authentication, to the ViewBag.

The purpose of this is so that we can access SPHostUrl on the client, and pass it back to the server during AJAX requests. The authentication cookie will also be passed, and together this forms everything required for authentication to take place.
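
Mirroring the GetItems method shown later in the article, the change amounts to a single line in the Index action, right after the SharePoint context is obtained (sketch; the rest of the method is unchanged):

    var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
    ViewBag.SPHostUrl = spContext.SPHostUrl; //expose the SP host URL to the view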

Next, open up the Index.cshtml file and update the last script block to match the following:

    <script>
        var model = @(Html.Raw(Json.Encode(Model)));
        
        var spHostUrl = '@ViewBag.SPHostUrl'; //<-- Add this line
        var viewModel = new window.TodoApp.ViewModel(spHostUrl); //<-- Pass SPHostUrl into the ViewModel

        for(var i=0; i<model.length; i++) {        
            viewModel.addTodo(new window.TodoApp.Todo(model[i].Id, model[i].Title, model[i].Completed));
        }
        ko.applyBindings(viewModel);
    </script>
    

Next we’ll update the app.js code to call the web services at appropriate times by adding HTTP requests using jQuery’s $.ajax helper. Firstly, update the add function:

    this.add = function (data, event) {
        if (event.keyCode == 13) {
            var current = me.current().trim();
            if (current) {
                var todo = new window.TodoApp.Todo(0, current);
                me.addTodo(todo);
                me.current('');

                $.ajax({
                    type: 'POST',
                    url: "/Home/AddItem?SPHostUrl=" + encodeURIComponent(spHostUrl),
                    contentType: "application/json; charset=utf-8",
                    data: JSON.stringify({
                        title: todo.title()
                    }),
                    dataType: "json",
                    success: function (id) {
                        todo.id(id);
                    }
                });
            }
        }
        return true;
    };
    

Next up is the remove function:

    this.remove = function (todo) {
        me.todos.remove(todo);
        $.ajax({
            type: 'POST',
            url: "/Home/RemoveItem?SPHostUrl=" + encodeURIComponent(spHostUrl),
            contentType: "application/json; charset=utf-8",
            data: JSON.stringify({
                id: todo.id()
            }),
            dataType: "json"
        });
    };
    

Now that additions and deletions are being handled, it only remains to handle updates. We will do this by adding code within the addTodo function that listens for changes to the title or completed properties, and submits a change to the server. The updated addTodo function should look like this:

    this.addTodo = function (todo) {
        me.todos.push(todo);
        var adding = true;
        ko.computed(function () {
            var title = todo.title(),
                completed = todo.completed();
            if (!adding) {
                $.ajax({
                    type: 'POST',
                    url: "/Home/UpdateItem?SPHostUrl=" + encodeURIComponent(spHostUrl),
                    contentType: "application/json; charset=utf-8",
                    data: JSON.stringify({
                        id: todo.id(),
                        title: title,
                        completed: completed
                    }),
                    dataType: "json"
                });
            }
        });
        adding = false;
    }
    

In the code above, we add a Knockout computed property which listens to the title and completed observable properties. If either value changes, we make a call to the UpdateItem web service. Note that we use a flag called adding to ensure we don’t call UpdateItem during the initial item addition.

Run the project again. If all has gone to plan, you should hopefully be able to insert, edit and delete items and have them immediately saved back to SharePoint.

Tidying Up

It’s time to do some tidying up to make sure we’re following MVC conventions correctly. Firstly, we should be using ASP.NET bundling to ensure the JavaScript and CSS are loaded as efficiently as possible. Open up App_Start\BundleConfig.cs and replace its contents with the following:

    public static void RegisterBundles(BundleCollection bundles)
    {
        bundles.Add(new ScriptBundle("~/bundles/scripts").Include(
                    "~/Scripts/bower_components/todomvc-common/base.js",
                    "~/Scripts/bower_components/knockout.js/knockout.debug.js",
                    "~/Scripts/jquery-{version}.js",
                    "~/Scripts/js/app.js"));

        bundles.Add(new StyleBundle("~/Content/css").Include(
                    "~/Scripts/bower_components/todomvc-common/base.css"));
    }

This creates two bundles: a CSS bundle and a JavaScript bundle. The advantage of this is that in production, your scripts will be loaded in a single request, and can be easily minified. Update your Index.cshtml file by removing existing script and style references and replace them by referencing your bundles in the <head> tag:

    <head>
        <meta charset="utf-8">
        <title>Knockout.js TodoMVC</title>
        @Styles.Render("~/Content/css")
        @Scripts.Render("~/bundles/scripts")
    </head>
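
As a side note, ASP.NET only combines and minifies bundles when debug compilation is off. If you want to verify the single-request behaviour while debugging, you can (optionally; this is not part of the original walkthrough) force it at the end of RegisterBundles:

    //Optional: force bundling/minification even when <compilation debug="true">
    BundleTable.EnableOptimizations = true;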
    

Since your app doesn’t use the About or Contact pages which were included in the default project template, you can remove those:

Also, remove the associated methods within the HomeController class.
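
For reference, the methods to delete are the stock actions from the default MVC project template; they will look something like this (the exact ViewBag messages may vary by template version):

    public ActionResult About()
    {
        ViewBag.Message = "Your application description page.";
        return View();
    }

    public ActionResult Contact()
    {
        ViewBag.Message = "Your contact page.";
        return View();
    }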

Adding an App Part (WebPart)

Adding a web part is really easy! You can create an app part to point to any page in your app, and it simply displays in an iframe inside your SharePoint site. In practice you’ll probably want to create a new page. We’re going to create a page that differs slightly in style from the app.

Now, this view will be identical to the main Index view except for styling. We want to avoid copy-pasting the original view into this one, so we’ll create a shared view that we can use for both.

Start by adding a new view: right-click the Views/Home folder, select Add -> View, and name it IndexCommon:

Now copy the entire contents of the body tag from Index to IndexCommon. Then you can reference your IndexCommon from Index using a call to @Html.Partial. Your Index.cshtml should look as follows:

    <!doctype html>
    <html lang="en" data-framework="knockoutjs">
    <head>
        <meta charset="utf-8">
        <title>Knockout.js TodoMVC</title>
        @Styles.Render("~/Content/css")
        @Scripts.Render("~/bundles/scripts")
    </head>
    <body>
        @Html.Partial("IndexCommon")
    </body>
    </html>
    

Add a new view called IndexTrimmed:

Again, use the same content as Index. This time, however, we’re going to make one change, which is to reference an alternative CSS file. So change the @Styles.Render call to reference ~/Content/css-trimmed instead of ~/Content/css:

Now open App_Start/BundleConfig.cs and add a new bundle:

    
    bundles.Add(new StyleBundle("~/Content/css-trimmed").Include(
                "~/Scripts/bower_components/todomvc-common/base-trimmed.css"));
    

And add the css file by copying base.css to base-trimmed.css:

You can make modifications to this CSS file at this point. I deleted the following rules:

  • #todoapp { margin: 130px 0 40px 0; } (the large title at the top)
  • body { margin: 0 auto; } (this caused page centering)
  • background: #eaeaea url(‘bg.png’); (grey background image)

I added these rules:

  • #info { display:none; } (to hide the info footer)

The next step is to add a Controller method for our new View. Open HomeController and add a method called IndexTrimmed. It should hold the exact same content as Index, so I’ve extracted it to a common method:

    [SharePointContextFilter]
    public ActionResult IndexTrimmed()
    {
        return View(GetItems()); //Pass the items into the view
    }

    [SharePointContextFilter]
    public ActionResult Index()
    {
        return View(GetItems()); //Pass the items into the view
    }

    private List<TodoItemViewModel> GetItems()
    {
        List<TodoItemViewModel> result = new List<TodoItemViewModel>();
        var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
        ViewBag.SPHostUrl = spContext.SPHostUrl;
        using (var clientContext = spContext.CreateAppOnlyClientContextForSPAppWeb())
        {
            if (clientContext != null)
            {
                //Load list items
                List list = clientContext.Web.Lists.GetByTitle("TodoList");
                ListItemCollection items = list.GetItems(CamlQuery.CreateAllItemsQuery());
                clientContext.Load(items);
                clientContext.ExecuteQuery();
                //Create the Todo item view models
                result = items.ToArray().Select(li => new TodoItemViewModel()
                {
                    Id = (int)li.Id,
                    Title = (string)li["Title"],
                    Completed = (bool)li["Completed"]
                }).ToList();
            }
        }
        return result;
    }

Now we’re ready to add the Web Part. Right click on TodoApp in solution explorer, and select Add -> New Item. Choose Client Web Part:

Give it a name and click Next. We want to use our new page, so enter its URL:

Open the TodoApp/TodoWebPart/Elements.xml file and change the default width from 300px to 600px:

Now run your app again. Make sure the entire app reinstalls, since the app part is installed to the SharePoint server itself. Add the app part by visiting your SharePoint site and clicking Page, then Edit:

Then select Insert -> App Part, and choose the TodoWebPart from the list. Click Add.

Click Save to save your changes to the page, and your web part should be shown:

Making Your Web Part Resize Dynamically

Now, you’ll notice that the Web Part isn’t resizing correctly when you add items to the list. This is because as the app itself gets larger, the App Part doesn’t get larger – it is after all an iframe with a fixed height. This isn’t a problem for apps that don’t resize, for example forms or fixed-length lists. However, we want our app to resize as though it were a normal part of the main website flow.

Luckily, there’s a workaround for this involving posting a message to the iframe and asking it to resize.

Add the following code to your HomeController, in the IndexTrimmed method:

    public ActionResult IndexTrimmed()
    {
        ViewBag.SenderId = HttpContext.Request.Params["SenderId"];
        return View(GetItems());
    }
    

The purpose of the above code is to retrieve the SenderId parameter from the URL, and make it available (via the ViewBag) to the client. This SenderId parameter represents the ID of the iframe which is hosting the app part.

Next, we’ll use the Sender ID in the client to request a resize of the iframe whenever the app resizes – more specifically, whenever an item is added to or removed from the list. This script should be added to the end of the body in IndexTrimmed.cshtml:

    //Retrieve the sender ID from the viewbag
    var senderId = '@ViewBag.SenderId';

    //Create a function that will change the height of the iframe
    function setHeight() {
        var width = $('#todoapp').outerWidth(true),
            height = $('#todoapp').outerHeight(true);

        //Notify SP that it should resize its iframe to the appropriate height.
        window.parent.postMessage('<message senderId=' + senderId + '>resize(' + width + ', ' + (height + 50) + ')</message>', "*");
    }

    //Listen for changes to the todo list, and call setHeight when that happens
    viewModel.todos.subscribe(setHeight);

    //Set the correct initial height when the app first loads.
    setHeight();
    

For the avoidance of doubt, here’s where this script should go:

The script is intentionally placed below the call to load the IndexCommon partial view, because then it will have access to the JavaScript viewModel object.

Packaging your app

With autohosted apps, the process is simpler: you just right-click on your app and select Publish. From there, you click Package and you are given a .app file. This app file contains everything involved: both the SharePoint app and also the remote app (your Web project). It’s super simple because the autohosting process takes care of creating a Client ID and Client Secret (for OAuth authentication), and also takes care of physical hosting.

With Provider-hosted apps, things are more complex, and the steps you follow depend on how you want to host your web project. Your first step will be to visit this page and create a Client ID and Client Secret by registering your app. Once you’ve done that, you can right-click your app project and click Package. Then follow the publish wizard which will involve entering your Client ID and Client Secret.

Note that with your provider hosted app, you will deploy your Web project separately from your App project. This is obvious really: you’re going to be hosting your Web project yourself (you’re the provider, after all), and the small app project will be packaged up and listed on the Office Store or sent manually to a customer. Once they install it, it’ll just point to your web server where your web app is installed. To configure this properly, open AppManifest.xml in the App project, and enter the URL to where your app is installed:

How To : Use the Modelling SDK to create UML Diagrams

Use Case Diagrams

A use case diagram is a summary of who uses your application and what they can do with it. It
describes the relationships among requirements, users, and the major components of the system, and
provides an overall view of how the system is used.

Activity Diagrams
Use case diagrams can be broken down into activity diagrams. An activity diagram shows the software
process as the flow of work through a series of actions. It can be a useful exercise to draw an
activity diagram showing the major tasks that a user will perform with the software application.

 

Sequence Diagrams

 

Sequence diagrams display interactions between different objects. This interaction usually takes
place as a series of messages between the different objects. Sequence diagrams can be considered an
alternate view to the activity diagram. A sequence diagram can show a clear view of the steps in a
use case. Figure 14-3 shows an example of a sequence diagram.
Component Diagrams

 

Component diagrams help visualize the high-level structure of the software system. They show the
major parts of a system and how those parts interact and depend on each other. One nice feature of
component diagrams is that they show how the different parts of the design interact with each other,
regardless of how those individual parts are actually implemented. Figure 14-4 shows an example of
a component diagram.

 

Class Diagrams

 

Class diagrams describe the objects in the application system. They do this without referencing any
particular implementation of the system itself. This type of UML modeling diagram is also referred
to as a conceptual class diagram. Figure 14-5 shows an example of a class diagram.

How to: Export UML Diagrams to Image Files

You can export a UML document from Visual Studio to an image file under program control. For example, you might want to do this as part of automatic document generation.

If you want to export a document to an image manually, you can copy and paste the shapes from a diagram into other programs such as Word. You can also print documents to XPS format. For more information, see Export Images of Diagrams.

The following code defines a shortcut menu command, also known as a context menu command, that saves an image to a file.

Note

To make this code work as a menu command, you must incorporate it into a MEF component. For more information, see How to: Define a Menu Command on a Modeling Diagram.

The code first uses GetObject<T> to get the Diagram of the underlying implementation. This type has a method CreateBitmap.

namespace SaveToImage
{
  using System.ComponentModel.Composition; // for [Import], [Export]
  using System.Drawing; // for Bitmap
  using System.Drawing.Imaging; // for ImageFormat
  using System.Linq; // for collection extensions
  using System.Windows.Forms; // for SaveFileDialog
  using Microsoft.VisualStudio.Modeling.Diagrams;
    // for Diagram
  using Microsoft.VisualStudio.Modeling.ExtensionEnablement;
    // for IGestureExtension, ICommandExtension, ILinkedUndoContext
  using Microsoft.VisualStudio.ArchitectureTools.Extensibility.Presentation;
    // for IDiagramContext
  using Microsoft.VisualStudio.ArchitectureTools.Extensibility.Uml;
    // for designer extension attributes


  /// <summary>
  /// Called when the user clicks the menu item.
  /// </summary>
  // Context menu command applicable to any UML diagram 
  [Export(typeof(ICommandExtension))]
  [ClassDesignerExtension]
  [UseCaseDesignerExtension]
  [SequenceDesignerExtension]
  [ComponentDesignerExtension]
  [ActivityDesignerExtension]
  class CommandExtension : ICommandExtension
  {
    [Import]
    IDiagramContext Context { get; set; }

    public void Execute(IMenuCommand command)
    {
      // Get the diagram of the underlying implementation.
      Diagram dslDiagram = Context.CurrentDiagram.GetObject<Diagram>();
      if (dslDiagram != null)
      {
        string imageFileName = FileNameFromUser();
        if (!string.IsNullOrEmpty(imageFileName))
        {
          Bitmap bitmap = dslDiagram.CreateBitmap(
           dslDiagram.NestedChildShapes,
           Diagram.CreateBitmapPreference.FavorClarityOverSmallSize);
          bitmap.Save(imageFileName, GetImageType(imageFileName));
        }
      }
    }

    /// <summary>
    /// Called when the user right-clicks the diagram.
    /// Set Enabled and Visible to specify the menu item status.
    /// </summary>
    /// <param name="command"></param>
    public void QueryStatus(IMenuCommand command)
    {
      command.Enabled = Context.CurrentDiagram != null 
        && Context.CurrentDiagram.ChildShapes.Count() > 0;
    }

    /// <summary>
    /// Menu text.
    /// </summary>
    public string Text
    {
      get { return "Save To Image..."; }
    }


    /// <summary>
    /// Ask the user for the path of an image file.
    /// </summary>
    /// <returns>image file path, or null</returns>
    private string FileNameFromUser()
    {
      SaveFileDialog dialog = new SaveFileDialog();
      dialog.AddExtension = true;
      dialog.DefaultExt = "image.bmp";
      dialog.Filter = "Bitmap ( *.bmp )|*.bmp|JPEG File ( *.jpg )|*.jpg|Enhanced Metafile (*.emf )|*.emf|Portable Network Graphic ( *.png )|*.png";
      dialog.FilterIndex = 1;
      dialog.Title = "Save Diagram to Image";
      return dialog.ShowDialog() == DialogResult.OK ? dialog.FileName : null;
    }

    /// <summary>
    /// Return the appropriate image type for a file extension.
    /// </summary>
    /// <param name="fileName"></param>
    /// <returns></returns>
    private ImageFormat GetImageType(string fileName)
    {
      string extension = System.IO.Path.GetExtension(fileName).ToLowerInvariant();
      ImageFormat result = ImageFormat.Bmp;
      switch (extension)
      {
        case ".jpg":
          result = ImageFormat.Jpeg;
          break;
        case ".emf":
          result = ImageFormat.Emf;
          break;
        case ".png":
          result = ImageFormat.Png;
          break;
      }
      return result;
    }
  }
}

OneDrive and Yammer take Social Collaboration to a new level on SharePoint Online

Yammer brings conversations to your OneDrive and SharePoint Online files

Christophe Fiessinger is a group product manager on the enterprise social team.


At SharePoint Conference 2014 we announced new enterprise social experiences across Office 365 helping businesses work more like networks by leveraging the power of the cloud to bring people together, gain quicker access to relevant insights and help make smarter decisions, faster.

Today we’re announcing the release of one of those features–document conversations–which essentially embeds the social collaboration capabilities of Yammer into the Office apps you use to get work done every day. Get ready for a new, simple way to collaborate on the content you produce with Office Online and store in the cloud in SharePoint Online or OneDrive for Business.

 

Document conversations enable people to share their ideas and expertise around Office documents, images and videos right from within the content they are editing or reviewing. Imagine being able to ask questions, find expertise and offer feedback about content without having to leave the application you’re working in!

 

Because it’s Yammer you can also view and participate in conversations outside your document, on your mobile device, in Microsoft Dynamics CRM or any app where a Yammer feed is embedded! Get ready for a totally new way to produce incredible content!

Here’s how document conversations work. When you open a file in your browser from your cloud store, you see the file on the left with a contextual Yammer conversation in a pane on the right. You can collapse and expand the Yammer pane as needed.

You can do more than join in a conversation from the Yammer pane. You can also post a message, @mention your coworkers, and publish to a Yammer group—either public or private.

Document conversations are easy to join in Yammer as well. If you’re working in Yammer, you’ll see a threaded conversation in the group the post was published in with an icon that enables you to open the file from the cloud location where it lives. The Yammer conversations about files are visible to users in the group but only users who have permission to view or edit the file can open it.

Document conversations are being progressively rolled out to customers over the course of this summer, after which they will be available across all sites within a tenant. To leverage document conversations, you will need to enable Yammer as your default social network.

 

For additional information, see this post: Make Yammer your default social network in Office 365. Get started today by storing your files in the cloud on OneDrive for Business or SharePoint Online, and harness social collaboration across your company with Yammer. In the coming weeks document conversations will be activated in your organization and ready to use. We continue to innovate and integrate, so subscribe to this blog to get the latest updates across Office 365, and don’t hesitate to reach out to us on Twitter or Facebook with your questions or suggestions.

Christophe Fiessinger @cfiessinger

Frequently asked questions

Q: What file types can be used for document conversations?

A: Document Conversations supports over 30 common file types, including .doc, .xls, .ppt, .pdf, .png, .gif, .mp4, .avi, and more.

Q: Can I see conversations in Office desktop or when I send a document as a mail attachment?

A: No. Currently Yammer threads are visible only in files stored in a SharePoint Online document library or OneDrive for Business in Office 365.

Q: What happens if a file is renamed?

A: Document Conversations uses Yammer’s Open Graph protocol, so when a post is published it also contains a link to the file. This link serves as the glue between the file and its associated conversations. Because the link changes according to the file name, when a file is renamed, the link changes, causing the Yammer conversations to become disassociated from the new file name.

Q: Can I start conversations in a Yammer external network?

A: No. We set our initial goal to build Document Conversations to help teams work better internally. While document conversations cannot be started in Yammer external networks today, we are exploring ways to extend collaboration around content beyond your firewall.

How To : Use Javascript to enable Listview Folder Navigation

When a list view web part is added to a page and users navigate to different folders in the list view, there’s no way for them to know the current folder hierarchy; in effect, the breadcrumb for the list view web part is missing. If there were a way of showing users their exact location in the folder hierarchy (as shown in the image below), wouldn’t that be great?


Image 1: Folder Navigation in action

Deploy the FolderNavigation.js File

Download FolderNavigation.js; you can then deploy the script either to the Layouts folder (for full-trust solutions) or to the Master Page gallery (for SharePoint Online or full trust). I would recommend deploying to the Master Page gallery so that, even if you move to the cloud, it works without modification. If you deploy to the Master Page gallery you don’t need to make any changes, but if you deploy to the Layouts folder you need to make small changes in the script, as described in the section ‘Option 2: Deploy in Layouts Folder’.

 

Option 1: Deploy in Master Page Gallery (Suggested)

If you are working with SharePoint Online, you don’t have the option to deploy to the Layouts folder, so you need to deploy to the Master Page gallery. Note that deploying the script to other libraries (such as Site Assets or a site library) will not work; it must be deployed to the Master Page gallery. Otherwise you can deploy to the Layouts folder as described in the next section. To deploy to the Master Page gallery manually, follow these steps:

  1. Download the JavaScript file attached.
  2. Navigate to Root web => site settings => Master Pages (under group ‘Web Designer Galleries’).
  3. From the ‘New Document’ ribbon, add a ‘JavaScript Display Template’, upload the FolderNavigation.js file, and set its properties as shown below:

    Image 2: Upload the JavaScript file in master page gallery

    In the above image, we’ve set the content type to ‘JavaScript Display Template’ and the ‘Target Control Type’ to View, so that the JS file is used in list views. I’ve also set the Target Scope to ‘/’, which means it applies to all sites and subsites. If you have a site collection ‘/sites/HR’, then you need to use ‘/sites/HR’ instead. You can also use a List Template ID if you need to.

 

Option 2: Deploy in Layouts Folder

If you are deploying the FolderNavigation.js file to the Layouts folder, you need to make a small change to the downloaded script’s RegisterModuleInit call, as shown below:

RegisterModuleInit('FolderNavigation.js', folderNavigation);

 

In this case the first parameter of ‘RegisterModuleInit’ is the path relative to the Layouts folder. If you deploy your file to the path ‘/_layouts/folder1’, then you need to modify the code as shown below:

RegisterModuleInit('folder1/FolderNavigation.js', folderNavigation);

 

If you are deploying to another subfolder of the Layouts folder, update the path accordingly. What I’ve found so far is that you can only deploy to the Layouts folder and the Master Page gallery, but if you find that deploying to other locations works, please share. Basically, the first parameter of RegisterModuleInit is the file path, either:

  • Relative to ‘_Layouts’ folder
  • Or relative to the Master Page gallery, in which case the path starts with ‘/_catalogs/masterpage’

 

Use the FolderNavigation.js in List View WebPart

Once you have deployed the JavaScript file to the Master Page gallery or the Layouts folder, you can start using it in a List View WebPart. Edit the list view web part properties and then, under the ‘Miscellaneous’ section, set the file URL as the JS Link as shown below:

Image 3: List View WebPart’s JS Link property

 

A few points to note for this JS Link:

  • If you have deployed the JS file to the Master Page gallery, you can use the ~site or ~siteCollection token, which refers to the current site or the current site collection respectively. The URL for the JS Link might then be ‘~siteCollection/_catalogs/masterpage/FolderNavigation.js’ or ‘~site/_catalogs/masterpage/FolderNavigation.js’. If you deploy the file only to the site collection’s Master Page gallery, use the ~siteCollection token in subsites so that they use the JavaScript file from the site collection.
  • If you have deployed to the Layouts folder, use the corresponding path in the JS Link property. For example, if the file is directly in the Layouts folder, use ‘/_layouts/15/FolderNavigation.js’; if it is in ‘Layouts/Folder1’, use ‘/_layouts/15/Folder1/FolderNavigation.js’. As a reminder, if you deploy to the Layouts folder you need to make the small change to the JavaScript file described under the ‘Option 2: Deploy in Layouts Folder’ section.

 

JavaScript file Description

In case you are interested to know how the code works, the code snippet is given below:

JavaScript

function replaceQueryStringAndGet(url, key, value) { 
    var re = new RegExp("([?|&])" + key + "=.*?(&|$)", "i"); 
    var separator = url.indexOf('?') !== -1 ? "&" : "?"; 
    if (url.match(re)) { 
        return url.replace(re, '$1' + key + "=" + value + '$2'); 
    } 
    else { 
        return url + separator + key + "=" + value; 
    } 
} 
 
 
function folderNavigation() { 
    function onPostRender(renderCtx) { 
        if (renderCtx.rootFolder) { 
            var listUrl = decodeURIComponent(renderCtx.listUrlDir); 
            var rootFolder = decodeURIComponent(renderCtx.rootFolder); 
            if (renderCtx.rootFolder == '' || rootFolder.toLowerCase() == listUrl.toLowerCase()) 
                return; 
 
            //get the folder path excluding list url. removing list url will give us path relative to current list url 
            var folderPath = rootFolder.toLowerCase().indexOf(listUrl.toLowerCase()) == 0 ? rootFolder.substr(listUrl.length) : rootFolder; 
            var pathArray = folderPath.split('/'); 
            var navigationItems = new Array(); 
            var currentFolderUrl = listUrl; 
 
            var rootNavItem = 
                { 
                    title: 'Root', 
                    url: replaceQueryStringAndGet(document.location.href, 'RootFolder', listUrl) 
                }; 
            navigationItems.push(rootNavItem); 
 
            for (var index = 0; index < pathArray.length; index++) { 
                if (pathArray[index] == '') 
                    continue; 
                var lastItem = index == pathArray.length - 1; 
                currentFolderUrl += '/' + pathArray[index]; 
                var item = 
                    { 
                        title: pathArray[index], 
                        url: lastItem ? '' : replaceQueryStringAndGet(document.location.href, 'RootFolder', encodeURIComponent(currentFolderUrl)) 
                    }; 
                navigationItems.push(item); 
            } 
            RenderItems(renderCtx, navigationItems); 
        } 
    } 
 
 
    //Add a div and then render navigation items inside span 
    function RenderItems(renderCtx, navigationItems) { 
        if (navigationItems.length == 0) return; 
        var folderNavDivId = 'foldernav_' + renderCtx.wpq; 
        var webpartDivId = 'WebPart' + renderCtx.wpq; 
 
 
        //a div is added beneth the header to show folder navigation 
        var folderNavDiv = document.getElementById(folderNavDivId); 
        var webpartDiv = document.getElementById(webpartDivId); 
        if(folderNavDiv!=null){ 
            folderNavDiv.parentNode.removeChild(folderNavDiv); 
            folderNavDiv =null; 
        } 
        if (folderNavDiv == null) { 
            var folderNavDiv = document.createElement('div'); 
            folderNavDiv.setAttribute('id', folderNavDivId) 
            webpartDiv.parentNode.insertBefore(folderNavDiv, webpartDiv); 
            folderNavDiv = document.getElementById(folderNavDivId); 
        } 
 
 
        for (var index = 0; index < navigationItems.length; index++) { 
            if (navigationItems[index].url == '') { 
                var span = document.createElement('span'); 
                span.innerHTML = navigationItems[index].title; 
                folderNavDiv.appendChild(span); 
            } 
            else { 
                var span = document.createElement('span'); 
                var anchor = document.createElement('a'); 
                anchor.setAttribute('href', navigationItems[index].url); 
                anchor.innerHTML = navigationItems[index].title; 
                span.appendChild(anchor); 
                folderNavDiv.appendChild(span); 
            } 
 
            //add arrow (>) to separate navigation items, except the last one 
            if (index != navigationItems.length - 1) { 
                var span = document.createElement('span'); 
                span.innerHTML = '&nbsp;> '; 
                folderNavDiv.appendChild(span); 
            } 
        } 
    } 
 
 
    function _registerTemplate() { 
        var viewContext = {}; 
 
        viewContext.Templates = {}; 
        viewContext.OnPostRender = onPostRender; 
        SPClientTemplates.TemplateManager.RegisterTemplateOverrides(viewContext); 
    } 
    //delay the execution of the script until clienttemplates.js gets loaded 
    ExecuteOrDelayUntilScriptLoaded(_registerTemplate, 'clienttemplates.js'); 
}; 
 
//RegisterModuleInit ensure folderNavigation() function get executed when Minimum Download Strategy is enabled. 
//if you deploy the FolderNavigation.js file in '_layouts' folder use 'FolderNavigation.js' as first paramter. 
//if you deploy the FolderNavigation.js file in '_layouts/folder/subfolder' folder, use 'folder/subfolder/FolderNavigation.js as first parameter' 
//if you are deploying in master page gallery, use '/_catalogs/masterpage/FolderNavigation.js' as first parameter 
RegisterModuleInit('/_catalogs/masterpage/FolderNavigation.js', folderNavigation); 
 
//this function get executed in case when Minimum Download Strategy not enabled. 
folderNavigation(); 

Let me explain the code briefly:

  • The method ‘replaceQueryStringAndGet’ is used to replace a query string parameter with a new value. For example, if you have the URL http://abc.com?key=value&name=sohel and you would like to replace the query string parameter ‘key’ with the value ‘New Value’, you can call the method like this:

    replaceQueryStringAndGet("http://abc.com?key=value&name=sohel", "key", "New Value")

  • The function folderNavigation has three methods. The function ‘onPostRender’ is bound to the rendering context’s OnPostRender event. The method first checks that the list view’s root folder is not null and that the root folder URL is not the list URL (which would mean the user is browsing the list’s/library’s root). The method then splits the render context’s folder path and creates navigation items as shown below:

    var item = { title: title, url: lastItem ? '' : replaceQueryStringAndGet(document.location.href, 'RootFolder', encodeURIComponent(rootFolderUrl)) };

    As shown above, for the last item (the folder the user is currently browsing), the URL is empty, as we show plain text instead of a link for the current folder.

  • The function ‘RenderItems’ renders the items in the page. This is probably the part you will want to customise: since all navigation items are passed to this function, you can render them in your own way. renderCtx.wpq is the unique web part ID in the page; as shown below, with a wpq value of ‘WPQ2’ the web part is rendered in a div with ID ‘WebPartWPQ2’.

    Image 4: List View WebPart in Firebug

    In the ‘RenderItems’ function I’ve added a div just before the web part div ‘WebPartWPQ2’ to hold the folder navigation, as shown in Image 1.

  • In the method ‘_registerTemplate’, I’ve registered the template and bound the OnPostRender event.
  • The final piece is RegisterModuleInit. In some examples you will find that the function folderNavigation is executed immediately alongside its declaration; however, there’s a problem with Client Side Rendering and Minimal Download Strategy (MDS) working together.
  • To avoid this problem, we register the folderNavigation function with RegisterModuleInit to ensure the script gets executed on MDS-enabled sites. The last line, ‘folderNavigation()’, executes the function normally on sites where MDS is disabled.

SharePoint 2013 and CRM 2011 integration. A customer portal approach

A Look At : Federated Authentication

More and more organisations are looking to collaborate with partners and customers in their ecosystem to help them achieve mutual goals. SharePoint is a great tool for enabling this collaboration but many organisations are reluctant to create and maintain identities for users from other organisations just to allow access to their own SharePoint farm. It’s hardly surprising; identity management is complex and expensive.

You have to pay for servers to host your identity provider (Microsoft Active Directory if you are using Windows); you have to keep it secure; you have to back it up and ensure that it is always available, and you have to pay for someone to maintain and administer it. Identity management becomes even more complicated when your organisation wants to give external users access to SharePoint; you have to ensure that they can only access SharePoint and can’t gain access to other systems; you have to buy additional client access licenses (CALs) for each external user because by adding them to your Active Directory you are making them an internal user.

 


Microsoft, Google and others all offer identity providers (also known as IdPs or claims providers) that are free to use, and by federating with a third party IdP you shift the ownership and management of identities on to them. You may even find that the partner or customer you are looking to collaborate with may offer their own IdP (most likely Active Directory Federation Services if they themselves run Windows). Of course, you have to trust whichever IdP you choose; they will be responsible for authenticating the user instead of you so you must be confident that they will do a good job. You must also check what pieces of information about a user (also known as claims; for example, name, email address etc) IdPs offer to ensure they can tell you enough about a user for your purposes as they don’t all offer the same.
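
A quick way to see exactly which claims a given IdP issues is to dump them for the current user once sign-in is working. The following is a minimal sketch (not from the original article) that assumes claims-aware code running on .NET 4.5 or later:

    using System.Security.Claims;
    using System.Threading;

    public static void DumpClaims()
    {
        //Inspect the claims issued for the currently authenticated user
        var identity = Thread.CurrentPrincipal.Identity as ClaimsIdentity;
        if (identity == null) return;

        foreach (Claim claim in identity.Claims)
        {
            System.Diagnostics.Debug.WriteLine(claim.Type + ": " + claim.Value);
        }
    }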

Having introduced support for federated authentication in SharePoint 2010, Microsoft paved the way for us to federate with third party IdPs within SharePoint itself. Unfortunately, configuring SharePoint to do this is fiddly and there is no user interface for doing so (a task made more onerous if you want to federate with multiple IdPs or tweak the configuration at a later date). Fortunately Microsoft has also introduced Azure Access Control Services (ACS) which makes the process of federating with one or more IdPs simple and easy to maintain. ACS is a cloud-based service that enables you to manage the IdPs used by your applications. The following diagram illustrates, at a high-level, the components of ACS.

An ACS namespace is a container for mappings between IdPs and one or more relying parties (the applications that want to use ACS), in our case SharePoint. Associated with each mapping is a rule group which defines how the relying party handles the individual claims associated with an identity. Using rule groups you can choose to hide or expose certain claims to specific relying parties within the namespace.

So by creating an ACS namespace you are in effect creating your own unique IdP that encapsulates the configuration for federating with one or more additional IdPs. A key point to remember is that your ACS namespace can be used by other applications (relying parties) that want to share the same identities, not just SharePoint. 

Once your ACS namespace has been created you need to configure SharePoint to trust it, which most of the time will be a one off task and from that point on you can manage and maintain the IdPs you support from within ACS. The following diagram illustrates, at a high-level, the typical architecture for integrating SharePoint and ACS.

 

In the scenario above the SharePoint web application is using two different claims providers (they are referred to as claims providers in SharePoint rather than IdPs). One is for internal users and trusts an internal AD domain and another is for external users and trusts an ACS namespace.

When a user tries to access a site within the web application they will get the default SharePoint Sign In page asking them which provider they want to use.

This page can be customised and branded as required. If the user selects Windows Authentication they will get the standard authentication dialog. If they select Azure Provider (or whatever you happen to have called your claims provider) they will be redirected to your ACS Sign In page.

Again this page can be customised and branded as required. By clicking on one of the IdPs the user will be redirected to the appropriate Sign In page. Once they have been successfully authenticated by the IdP they will be redirected back to SharePoint.

 

Conclusion

By integrating SharePoint with ACS you can simplify the process of giving external users access to SharePoint. It could also save you money in licence fees and administration costs[i].

An important point to bear in mind when planning federated authentication for SharePoint is that in order for Search to be able to index content within SharePoint, you must enable Windows authentication on at least one zone within your web application. Also, if you use a reverse proxy to perform authentication, such as Microsoft Threat Management Gateway, before allowing traffic to hit your SharePoint servers, you will need to disable the authentication checks.

 

[i] The licensing model for external users differs between SharePoint 2010 and SharePoint 2013. With SharePoint 2010, if you expose your farm to external users, either anonymously or not, you have to purchase a separate licence for each server. The license covers you for any number of external users and you do not need to buy a CAL for each user. With SharePoint 2013, Microsoft did away with the server license for external users, and you still don’t need to buy CALs for the external users.

A Look At : The importance of people in a SharePoint project


As with all other sizeable new business software implementations, a successful SharePoint deployment is one that is well thought-out and carefully managed every step of the way.

However in one key respect a SharePoint deployment is different from most others in the way it should be carried out. Whereas the majority of ERP solutions are very rigid in terms of their functionality and in the nature of the business problems they solve, SharePoint is far more of a jack-of-all-trades type of system. It’s a solution that typically spreads its tentacles across several areas within an organisation, and which has several people putting in their two cents worth about what functions SharePoint should be geared to perform.

So what is the best approach? And what makes for a good SharePoint project manager?

From my experience with SharePoint implementations, I would say first and foremost that a SharePoint deployment should be approached from a business perspective, rather than from a strictly technology standpoint. A SharePoint project delivered within the allotted time and budget can still fail if it’s executed without the broader business objectives in mind. If the project manager understands, and can effectively demonstrate, how SharePoint can solve the organisation’s real-world business problems and increase business value, SharePoint will be a welcome addition to the organisation’s software arsenal.

Also crucial is an understanding of people. An effective SharePoint project manager understands the concerns, limitations and capabilities of those who will be using the solution once it’s implemented. No matter how technically well-executed your SharePoint implementation is, it will amount to little if hardly anyone’s using the system. The objective here is to maximise user adoption and engagement, and this can be achieved by maximising user involvement in the deployment process.

 

Rather than only talk to managers about SharePoint and what they want from the system, also talk to those below them who will be using the product on a day-to-day basis. This means not only collaborating with, for example, the marketing director but also with the various marketing executives and co-ordinators.

 

It means not only talking with the human resources manager but also with the HR assistant, and so on. By engaging with a wide range of (what will be) SharePoint end-users and getting them involved in the system design process, the rate of sustained user adoption will be a lot higher than it would have been otherwise.

 

An example of user engagement in action concerns a SharePoint implementation I oversaw for an insurance company. The business wanted to improve the tracking of its documentation using a SharePoint-based records management system. Essentially the system was deployed to enhance the management and flow of health insurance and other key documentation within the organisation to ensure that the company meets its compliance obligations.

 

The project was a great success, largely because we ensured that there was a high level of end-user input right from the start. We got all the relevant managers and staff involved from the outset, we began training people on SharePoint early on and we made sure the change management part of the process was well-covered.

 

Also, and very importantly, the business value of the project was sharply defined and clearly explained from the get-go. As everyone set about making the transition to a SharePoint-driven system, they knew why it was important to the company and why it was going to be good for them too.

By contrast a follow-up SharePoint project for the company some months later was not as successful. Why? Because with that project, in which the company abandoned its existing intranet and developed a new one, the business benefits were poorly defined and were not effectively communicated to stakeholders. That particular implementation was driven by the company’s IT department which approached the project from a technical, rather than a business, perspective. User buy-in was not sought and was not achieved.

 

When the SharePoint solution went live hardly anyone used it because they didn’t see why they should. No-one had educated them on that. That’s the danger when you don’t engage all your prospective system end-users throughout every phase of a SharePoint implementation project.

As can be seen, while it is of course critical that the technical necessities of a SharePoint deployment be met, that’s only part of the picture. Without people using the system, or with people using the system to less than its maximum potential, the return on your SharePoint investment will never materialize.

Comprehensive engagement with all stakeholders, that’s where the other part of the picture comes in. That’s where a return on investment, an investment of time and effort, will most assuredly be achieved.

Features from SharePoint 2010 Integration with SAP BusinessObjects BI 4.0

One of the core concepts of Business Connectivity Services (BCS) for SharePoint 2010 is the external content type. External content types are reusable metadata descriptions of connectivity information and behaviours (stereotyped operations) applied to external data. SharePoint offers developers several ways to create external content types and integrate them into the platform.

 

The SharePoint Designer 2010, for instance, allows you to create and manage external content types that are stored in supported external systems. Such an external system could be SQL Server, WCF Data Service, or a .NET Assembly Connector.

This article shows you how to create an external content type for SharePoint named Customer based on given SAP customer data. The definition of the content type will be provided as a .NET assembly, and the data are displayed in an external list in SharePoint.

The SAP customer data are retrieved from the function module SD_RFC_CUSTOMER_GET. In general, function modules in a SAP R/3 system are comparable with public and static C# class methods, and can be accessed from outside of SAP via RFC (Remote Function Call). Fortunately, we do not need to program RFC calls manually. We will use the very handy ERPConnect library from Theobald Software. The library includes a LINQ to SAP provider and designer that makes our lives easier.

.NET Assembly Connector for SAP

The first step in providing a custom connector for SAP is to create a SharePoint project with the SharePoint 2010 Developer Tools for Visual Studio 2010. Those tools are part of Visual Studio 2010. We will use the Business Data Connectivity Model project template to create our project:

After defining the Visual Studio solution name and clicking the OK button, the project wizard will ask what kind of SharePoint 2010 solution you want to create. The solution must be deployed as a farm solution, not as a sandboxed solution. Visual Studio is now creating a new SharePoint project with a default BDC model (BdcModel1). You can also create an empty SharePoint project and add a Business Data Connectivity Model project item manually afterwards. This will also generate a new node to the Visual Studio Solution Explorer called BdcModel1. The node contains a couple of project files: The BDC model file (file extension bdcm), and the Entity1.cs and EntityService.cs class files.

Next, we add a LINQ to SAP file to handle the SAP data access logic by selecting the LINQ to ERP item from the Add New Item dialog in Visual Studio. This will add a file called LINQtoERP1.erp to our project. The LINQ to SAP provider is internally called LINQ to ERP. Double click LINQtoERP1.erp to open the designer. Now, drag the Function object from the designer toolbox onto the design surface. This will open the SAP connection dialog since no connection data has been defined so far:

Enter the SAP connection data and your credentials. Click the Test Connection button to test the connectivity. If you could successfully connect to your SAP system, click the OK button to open the function module search dialog. Now search for SD_RFC_CUSTOMER_GET, then select the found item, and click OK to open the RFC Function Module /BAPI dialog:


The dialog provides you the option to define the method name and parameters you want to use in your SAP context class. The context class is automatically generated by the LINQ to SAP designer including all SAP objects defined. Those objects are either C# (or VB.NET) class methods and/or additional object classes used by the methods.

For our project, we need to select the export parameters KUNNR and NAME1 by clicking the checkboxes in the Pass column. These two parameters become our input parameters in the generated context class method named SD_RFC_CUSTOMER_GET. We also need to return the customer list for the given input selection. Therefore, we select the table parameter CUSTOMER_T on the Tables tab and change the structure name to Customer. Then, click the OK button on the dialog, and the new objects get added to the designer surface.

IMPORTANT: The flag “Create Objects Outside Of Context Class” must be set to TRUE in the property editor of the LINQ designer, otherwise LINQ to SAP generates the Customer class as nested class of the SAP context class. This feature and flag is only available in LINQ to SAP for Visual Studio 2010.

The LINQ designer has also automatically generated a class called Customer within the LINQtoERP1.Designer.cs file. This class will become our BDC model entity or external content type. But first, we need to adjust and rename our BDC model that was created by default from Visual Studio. Currently, the BDC model looks like this:

Rename the BdcModel1 node and file to CustomerModel. Since we already have an entity class (Customer), delete the file Entity1.cs and rename the EntityService.cs file to CustomerService.cs. Next, open the CustomerModel file and rename the designer object Entity1 to Customer. Then, change the entity identifier name from Identifier1 to KUNNR. You can also use the BDC Explorer for renaming. The final adjustment result should look as follows:


The last step we need to do in our Visual Studio project is to change the code in the CustomerService class. The BDC model methods ReadItem and ReadList must be implemented using the automatically generated LINQ to SAP code. First of all, take a look at the code:

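The original article shows this listing only as a screenshot. The sketch below illustrates what the two methods typically look like when they delegate to the generated LINQ to SAP context; the context class name (LINQtoERP1), the ERPConnect.LIC.SetLic license helper, the connection string and the exact SD_RFC_CUSTOMER_GET signature are assumptions based on typical ERPConnect-generated code, not the post’s exact listing.

using System.Collections.Generic;
using System.Linq;

public class CustomerService
{
    // SAP context class generated by the LINQ to SAP designer (name assumed)
    private static readonly LINQtoERP1 _sc;

    static CustomerService()
    {
        // ERPConnect license key and SAP connection data are placeholders
        ERPConnect.LIC.SetLic("<license-key>");
        _sc = new LINQtoERP1("<SAP connection string>");
    }

    // SpecificFinder: return the single customer identified by KUNNR
    public static Customer ReadItem(string id)
    {
        return _sc.SD_RFC_CUSTOMER_GET(id, string.Empty).FirstOrDefault();
    }

    // Finder: return all customers for a broad selection
    public static IEnumerable<Customer> ReadList()
    {
        return _sc.SD_RFC_CUSTOMER_GET(string.Empty, "*");
    }
}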

As you can see, we basically have just a few lines of code. All of the SAP data access logic is encapsulated within the SAP context class (see the LINQtoERP1.Designer.cs file). The CustomerService class just implements a static constructor to set the ERPConnect license key and to initialize the static variable _sc with the SAP credentials as well as the two BDC model methods.

The ReadItem method, BCS stereotyped operation SpecificFinder, is called by BCS to fetch a specific item defined by the identifier KUNNR. In this case, we just call the SD_RFC_CUSTOMER_GET context method with the passed identifier (variable id) and return the first customer object we get from SAP.

The ReadList method, BCS stereotyped operation Finder, is called by BCS to return all entities. In this case, we just return all customer objects the SD_RFC_CUSTOMER_GET context method returns. The returned result is already of type IEnumerable<Customer>.

The final step is to deploy the SharePoint solution. Right-click on the project node in Visual Studio Solution Explorer and select Deploy. This will install and deploy the SharePoint solution on the server. You can also debug your code by just setting a breakpoint in the CustomerService class and executing the project with F5.

That’s all we have to do!

Now, start the SharePoint Central Administration panel and follow the link “Manage Service Applications”, or navigate directly to the URL http://<SERVERNAME>/_admin/ServiceApplications.aspx. Click on Business Data Connectivity Service to show all the available external content types:

On this page, we find our deployed BDC model including the Customer entity. You can click on the name to retrieve more details about the entity. Right now, there is just one issue open. We need to set permissions!

Mark the checkbox for our entity and click on Set Object Permissions in the Ribbon menu bar. Now, define the permissions for the users you want to allow to access the entity, and click the OK button. In the screen shown above, the user administrator has all the permissions possible.

In the next and final step, we will create an external list based on our entity. To do this, we open SharePoint Designer 2010 and connect to the SharePoint website.

Click on External Content Types in the Site Objects panel to display all the content types (see above). Double click on the Customer entity to open the details. The SharePoint Designer is reading all the information available by BCS.

In order to create an external list for our entity, click on Create Lists & Form on the Ribbon menu bar (see screenshot below) and enter CustomerList as the name for the external list.

OK, now we are done!

Open the list, and you should get the following result:

The external list shows all the defined fields for our entity, even though our Customer class, automatically generated by the LINQ to SAP, has more than those four fields. This means you can only display a subset of the information for your entity.

Another option is to just select those fields required within the LINQ to SAP designer. With the LINQ designer, you can access not just the SAP function modules. You can integrate other SAP objects, like tables, BW cubes, SAP Query, or IDOCs. A demo version of the ERPConnect library can be downloaded from the Theobald Software homepage.

If you click the associated link of one of the customer numbers in the column KUNNR (see screenshot above), SharePoint will open the details view:


 

 

How To : A library to create .mht files (available at request)

There are a number of ways to do this, including hosting Word or Excel on the Web Server and dealing with COM Interop issues, or purchasing third-party MIME encoding libraries, some of which sell for $250.00 or more. But there is no native .NET solution. So, being the curious soul that I am, I decided to investigate a bit and see what I could come up with. Internet Explorer offers a File / Save As option to save a web page as “Web Archive, single file (*.mht)”.


What this does is create an RFC-compliant Multipart MIME Message. Resources such as images are serialized to their Base64 inline encoding representations, and each resource is demarcated with the standard multipart MIME header breaks. Internet Explorer, Word, Excel and most newsreader programs all understand this format. The format, if saved with the file extension “.eml”, will come up as a web page inside Outlook Express; if saved with “.mht”, it will come up in Internet Explorer when the file is double-clicked out of Windows Explorer; and – what many do not know – if saved with a “.doc” extension, it will load in MS Word, each with all the images intact, and in the case of the EML and MHT formats, with all of the hyperlinks fully functioning. The primary advantage of the format is, of course, that all the resources can be consolidated into a single file, making distribution and archiving much easier – including database storage in an NVarchar or NText type field.

 

System.Web.Mail, which .NET provides as a convenient wrapper around the CDO for Windows COM library, offers only a subset of the functionality exposed by the CDO library, and multipart MIME encoding is not a part of that functionality. However, through the wonders of COM Interop, we can create our own COM reference to CDO in the Visual Studio IDE, allowing it to generate a Runtime Callable Wrapper, and help ourselves to the entire rich set of functionality of CDO as we see fit.

 

One method in the CDO library that immediately came to my notice was the CreateMHTMLBody method. That’s MHTMLBody, meaning “Multipurpose Internet Mail Extension HTML (MHTML) Body”. Well! When I saw that, my eyes lit up like the LEDs on a 32-way Unisys box! This is a method on the CDO Message class; the method accepts a URI to the requested resource, along with some enumerations, and creates a multipart MIME-encoded email message out of the requested URI responses – including images, css and script – in one fell swoop.

 

“Ah”, you say, “How convenient”! Yes, and not only that, but we also get a free “multipart COM Interop Baggage” reference to the ADODB.Stream object – and by simply calling the GetStream method on the Message Class, and then using the Stream’s SaveToFile method, we can grab any resource including images, javascript, css and everything else (except video) and save it to a single MHT Web Archive file just as if we chose the “Save As” option out of Internet Explorer.

 

If we choose not to save the file, but instead want to get back the stream contents, no problem. We just call Stream.ReadText(Stream.Size) and it returns a string containing the entire MHT-encoded content. At that point we can do whatever we want with it – set a content header and Response.Write the content to the browser, for instance – or whatever.

 

For example, when we get back our “MHT” string, we can write the following code:

Response.ContentType = "application/msword";
Response.AddHeader("Content-Disposition", "attachment;filename=NAME.doc");
Response.Write(myDataString);

 

— and the browser will dutifully offer to save the file as a Word Document. It will still be Multipart MIME encoded, but the .doc extension on the filename allows Word to load it, and Word is smart enough to be able to parse and render the file very nicely. “Ah”, you are saying, “this is nice, and so is the price!”. Yup!

And, if you are serving this MIME-encoded file out of your database, for example, and you would like it to be displayed in the browser, just change the “NAME.doc” to “NAME.MHT”, and don’t set a content-type header. Internet Explorer will prompt the user to either save or open the file. If they choose “open”, it will be saved to the IE Temporary files and open up in the browser just as if they had loaded it from their local file system.
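As a minimal illustration of that variant (myDataString is the same MHT string as in the snippet above), the only changes are the filename extension and the missing content-type header:

// Serve the stored MHT string so Internet Explorer offers to open it directly;
// note that no ContentType is set and the extension is .mht instead of .doc.
Response.AddHeader("Content-Disposition", "attachment;filename=NAME.mht");
Response.Write(myDataString);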

 

So, to answer a couple of questions that came up recently: yes – you can use this method to MHTML-encode any web page, even one that is dynamically generated as with a report, provided it has a URL, and save the MIME-encoded content as a string in either an NVarchar or NText column in your database. You can then bring this string back out and send it to the browser, images, css, javascript and all.

Now here is the code for a small, very basic “Converter” class I’ve written to take advantage of the two scenarios specified above. Bear in mind, there is much more available in CDO, but I leave this wondrous trail of ecstatic discovery to your whims of fancy:

using System;
using System.Web;
using CDO;
using ADODB;
using System.Text;

namespace PAB.Web.Utils
{
    public class MIMEConverter
    {
        // private ctor as our methods are all static here
        private MIMEConverter() { }

        public static bool SaveWebPageToMHTFile(string url, string filePath)
        {
            bool result = false;
            CDO.Message msg = new CDO.MessageClass();
            ADODB.Stream stm = null;
            try
            {
                msg.MimeFormatted = true;
                msg.CreateMHTMLBody(url, CDO.CdoMHTMLFlags.cdoSuppressNone, "", "");
                stm = msg.GetStream();
                stm.SaveToFile(filePath, ADODB.SaveOptionsEnum.adSaveCreateOverWrite);
                msg = null;
                stm.Close();
                result = true;
            }
            catch { throw; }
            finally { /* cleanup here */ }
            return result;
        }

        public static string ConvertWebPageToMHTString(string url)
        {
            string data = String.Empty;
            CDO.Message msg = new CDO.MessageClass();
            ADODB.Stream stm = null;
            try
            {
                msg.MimeFormatted = true;
                msg.CreateMHTMLBody(url, CDO.CdoMHTMLFlags.cdoSuppressNone, "", "");
                stm = msg.GetStream();
                data = stm.ReadText(stm.Size);
            }
            catch { throw; }
            finally { /* cleanup here */ }
            return data;
        }
    }
}

 

NOTE: When using this type of COM Interop from an ASP.NET web page, it is important to remember that you must set the AspCompat=”true” directive in the Page declaration or you will be very disappointed at the results! This forces the ASP.NET page to run in STA threading model which permits “classic ASP” style COM calls. There is, of course, a significant performance penalty incurred, but realistically, this type of operation would only be performed upon user request and not on every page request.
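For reference, the directive looks something like this (the page and class names are placeholders, not taken from the download):

<%@ Page Language="C#" AspCompat="true" CodeBehind="WebForm1.aspx.cs" Inherits="ConvertToMHT.WebForm1" %>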

The downloadable zip file below contains the entire class library and a web solution that will exercise both methods when you fill in a valid URI with protocol, and a valid file path and filename for saving on the server. Unzip this to a folder that you have named “ConvertToMHT” and then mark the folder as an IIS Application so that your request such as “http://localhost/ConvertToMHT/WebForm1.aspx” will function correctly. You can then load the Solution file and it should work “out of the box”. And, don’t forget – if you have an ASP.NET web application that wants to write a file to the file system on the server, it must be running under an identity that has been granted this permission.

How To : Use JSON and SAP NetWeaver together

Background

In this example, SAP is used as the backend data source and the NWGW (NetWeaver Gateway) adapter exposes it so that it can be consumed from a .NET client in OData format.

Since the NWGW component is hosted on premises and our .NET client is hosted in Azure, we consume this data from Azure through the Service Bus relay. While transferring data from on premises to Azure over the SB relay, we faced performance issues both for a single user with large volumes of data and for concurrent users with relatively small data. So I did a proof of concept (POC) for improving performance by consuming the OData service in JSON format.

What I Did?

I’ve created a simple WCF Data Service which has no underlying data source connectivity. In this service, when the context is initialized, a list of text messages is generated and exposed as OData.

Here is that simple service code:

[Serializable]
public class Message
{
    public int ID { get; set; }
    public string MessageText { get; set; }
}

public class MessageService
{
    List<Message> _messages = new List<Message>();

    public MessageService()
    {
        for (int i = 0; i < 100; i++)
        {
            Message msg = new Message
            {
                ID = i,
                MessageText = string.Format("My Message No. {0}", i)
            };
            _messages.Add(msg);
        }
    }

    public IQueryable<Message> Messages
    {
        get
        {
            return _messages.AsQueryable<Message>();
        }
    }
}

[ServiceBehavior(IncludeExceptionDetailInFaults = true)]
public class WcfDataService1 : DataService<MessageService>
{
    // This method is called only once to initialize service-wide policies.
    public static void InitializeService(DataServiceConfiguration config)
    {
        // TODO: set rules to indicate which entity sets
        // and service operations are visible, updatable, etc.
        // Examples:
        config.SetEntitySetAccessRule("Messages", EntitySetRights.AllRead);
        config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V3;
    }
}
I expose one endpoint to the Azure Service Bus so that the client can consume this service through the SB endpoint. After hosting the service, I’m able to fetch data with a simple OData query from the browser.

I’m also able to fetch the data in JSON format.

After that, I create a console client application and consume the service from there.

Sample Client Code

// Usings needed by the client sample; the SimpleService namespace comes from the generated service reference.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading;

class Program
{
    static void Main(string[] args)
    {
        List<Thread> lst = new List<Thread>();

        for (int i = 0; i < 100; i++)
        {
            Thread person = new Thread(new ThreadStart(MyClass.JsonInvokation));
            person.Name = string.Format("person{0}", i);
            lst.Add(person);
            Console.WriteLine("before start of {0}", person.Name);
            person.Start();
            //Console.WriteLine("{0} started", person.Name);
        }
        Console.ReadKey();
        foreach (var item in lst)
        {
            item.Abort();
        }
    }
}

public class MyClass
{
    public static void JsonInvokation()
    {
        string personName = Thread.CurrentThread.Name;
        Stopwatch watch = new Stopwatch();
        watch.Start();
        try
        {
            SimpleService.MessageService svcJson =
                new SimpleService.MessageService(new Uri
                ("https://abc.servicebus.windows.net/SimpleService/WcfDataService1"));
            svcJson.SendingRequest += svc_SendingRequest;
            svcJson.Format.UseJson();
            var jdata = svcJson.Messages.ToList();

            watch.Stop();
            Console.WriteLine("Person: {0} - JsonTime First Call time: {1}",
                personName, watch.ElapsedMilliseconds);

            for (int i = 1; i <= 10; i++)
            {
                watch.Reset(); watch.Start();
                jdata = svcJson.Messages.ToList();
                watch.Stop();
                Console.WriteLine("Person: {0} - Json Call {1} time: {2}",
                    personName, 1 + i, watch.ElapsedMilliseconds);
            }

            Console.WriteLine(jdata.Count);
        }
        catch (Exception ex)
        {
            Console.WriteLine(personName + ": " + ex.Message);
        }
        Thread.Sleep(100);
    }

    public static void AtomInvokation()
    {
        string personName = Thread.CurrentThread.Name;

        try
        {
            Stopwatch watch = new Stopwatch();
            watch.Start();
            SimpleService.MessageService svc =
                new SimpleService.MessageService(new Uri
                ("https://abc.servicebus.windows.net/SimpleService/WcfDataService1"));
            svc.SendingRequest += svc_SendingRequest;
            var data = svc.Messages.ToList();

            watch.Stop();
            Console.WriteLine("Person: {0} - XmlTime First Call time: {1}",
                personName, watch.ElapsedMilliseconds);

            for (int i = 1; i <= 10; i++)
            {
                watch.Reset(); watch.Start();
                data = svc.Messages.ToList();
                watch.Stop();
                Console.WriteLine("Person: {0} - Xml Call {1} time: {2}",
                    personName, 1 + i, watch.ElapsedMilliseconds);
            }

            Console.WriteLine(data.Count);
        }
        catch (Exception ex)
        {
            Console.WriteLine(personName + ": " + ex.Message);
        }
        Thread.Sleep(100);
    }

    // Handler attached above; the original post does not show its body, so this is a placeholder stub.
    static void svc_SendingRequest(object sender, System.Data.Services.Client.SendingRequestEventArgs e)
    {
    }
}

 

What I Test After That
I tested two separate scenarios:

Scenario I: Single user with small and large volume of data
I measured the data transfer time periodically, first in XML format and then in JSON format. You might notice that the first call is printed separately in each screenshot, as it takes additional time to connect to the SB endpoint; the secret key authentication happens in that first call.

Small data set (array size 10): consume in XML format.

 

Consume in JSON format:

 

For a small set of data, the JSON and XML response times over the Service Bus relay are almost the same.

Consuming Large volume of data (Array Size 100)

 

Here the XML message size is around 51 KB. Now I’m going to consume the same list of data (Array size 100) in JSON format.

 

So from the above test scenario, it is very clear that the JSON response time is much faster than the XML response time, and the reason for that is message size. In this test, when I get the list of 100 records in XML format the message size is 51.2 KB, but the JSON message size is 4.4 KB.

Scenario II: 100 Concurrent user with large volume of data (array size 100)
In this concurrent user load test, I haven’t done any service throttling or maximum concurrent connection configuration.

 

In the above screenshot, you will find some timeout errors in the XML response, which happen due to the high response time over the relay. But when I execute the same test with the JSON response, I find the response time is quite stable and faster than the XML response, and I don’t get any timeouts.

 

How Easy to Use UseJson()
If you are using WCF Data Services 5.3 or above and VS2012 Update 3, then to consume the JSON structure from the client you just have to instantiate the proxy/context with .Format.UseJson().

Here you don’t need to load the Edmx structure separately by writing any custom code. .NET CodeGen will generate that code when you add the service reference.

 

But if that code is not generated from your environment, then you have to write a few lines of code to load the edmx and use it as .Format.UseJson(LoadEdmx());

Sample Code for Loading Edmx

// Namespaces assumed for the EdmLib types used below (WCF Data Services 5.x client)
using System.IO;
using System.Xml;
using Microsoft.Data.Edm;
using Microsoft.Data.Edm.Csdl;

public static IEdmModel LoadEdmx(string srvName)
{
    string executionPath = Directory.GetCurrentDirectory();
    DirectoryInfo di = new DirectoryInfo(executionPath).Parent;
    var parent1 = di.Parent;
    var srv = parent1.GetDirectories("Service References\\" +
        srvName)[0].GetFiles("service.edmx")[0].FullName;

    XmlDocument doc = new XmlDocument();
    doc.Load(srv);
    var xmlreader = XmlReader.Create(new StringReader(doc.DocumentElement.OuterXml));

    IEdmModel edmModel = EdmxReader.Parse(xmlreader);
    return edmModel;
}
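With that helper in place, the earlier client code would switch to the manually loaded model roughly like this (the service reference name "SimpleService" matches the sample above):

// Fall back to the manually parsed EDMX when CodeGen did not emit JSON support
svcJson.Format.UseJson(LoadEdmx("SimpleService"));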

DRY Architecture, Layered Architecture, Domain Driven Design and a Framework to build great Single-Page Web Applications – BoilerPlate Part 1

DRY – Don’t Repeat Yourself! – is one of the main principles of a good developer while developing software. We try to implement it from simple methods up to classes and modules. What about developing a new web-based application? We, software developers, have similar needs when developing enterprise web applications.

Enterprise web applications need login pages, user/role management infrastructure, user/application setting management, localization and so on. Also, high-quality, large-scale software implements best practices such as Layered Architecture, Domain Driven Design (DDD) and Dependency Injection (DI). We also use tools for Object-Relational Mapping (ORM), Database Migrations, Logging… etc. When it comes to the User Interface (UI), it’s not much different.

Starting a new enterprise web application is hard work. Since all applications need some common tasks, we’re repeating ourselves. Many companies are developing their own Application Frameworks or Libraries for such common tasks so that they do not re-develop the same things. Others are copying some parts of existing applications and preparing a starting point for their new application. The first approach is pretty good if your company is big enough and has time to develop such a framework.

As a software architect, I also developed such a framework in my company. But one point still bothers me: many companies repeat the same tasks. What if we could share more and repeat less? What if the DRY principle were implemented universally instead of per project or per company? It sounds utopian, but I think there may be a starting point for that!

What is ASP.NET Boilerplate?

http://www.aspnetboilerplate.com/

ASP.NET Boilerplate [1] is a starting point for new modern web applications using best practices and the most popular tools. It’s aimed to be a solid model, a general-purpose application framework and a project template. What does it do?

  • Server side
    • Based on latest ASP.NET MVC and Web API.
    • Implements Domain Driven Design (Entities, Repositories, Domain Services, Application Services, DTOs, Unit of Work… and so on)
    • Implements Layered Architecture (Domain, Application, Presentation and Infrastructure Layers).
    • Provides an infrastructure to develop reusable and composable modules for large projects.
    • Uses most popular frameworks/libraries as (probably) you’re already using.
    • Provides an infrastructure for, and makes it easy to use, Dependency Injection (uses Castle Windsor as the DI container).
    • Provides a strict model and base classes to use Object-Relational Mapping easily (uses NHibernate, can work with many DBMSs).
    • Implements database migrations (uses FluentMigrator).
    • Includes a simple and flexible localization system.
    • Includes an EventBus for server-side global domain events.
    • Manages exception handling and validation.
    • Creates dynamic Web API layer for application services.
    • Provides base and helper classes to implement some common tasks.
    • Uses convention over configuration principle.
  • Client side
    • Provides two project templates. One is for Single-Page Applications using Durandaljs, the other is for a Multi-Page Application. Both templates use Twitter Bootstrap.
    • Most used libraries are included by default: Knockout.js, Require.js, jQuery and some useful plug-ins.
    • Creates dynamic javascript proxies to call application services (using dynamic Web API layer) easily.
    • Includes unique APIs for some common tasks: showing alerts & notifications, blocking UI, making AJAX requests.

Besides this common infrastructure, the “Core Module” is being developed. It will provide a role- and permission-based authorization system (implementing the ASP.NET Identity Framework), a settings system and so on.

What ASP.NET Boilerplate is not?

ASP.NET Boilerplate provides an application development model with best practices. It has base classes, interfaces and tools that make it easy to build maintainable, large-scale applications. But…

  • It’s not a RAD (Rapid Application Development) tool that provides infrastructure for building applications without coding. Instead, it provides an infrastructure to code with best practices.
  • It’s not a code generation tool. While it has several features that build dynamic code at run time, it does not generate code.
  • It’s not an all-in-one framework. Instead, it uses well-known tools/libraries for specific tasks (like NHibernate for O/RM, Log4Net for logging, Castle Windsor as the DI container).

Getting started

In this article, I’ll show how to develop a Single-Page and responsive web application using ASP.NET Boilerplate (I’ll call it ABP from now on). This sample application is named “Simple Task System” and it consists of two pages: one for the list of tasks, the other for adding new tasks. A Task can be related to a person and can be marked as completed. The application is localized in two languages. A screenshot of the Task List in the application is shown below:

A screenshot of 'Simple Task System'

Creating empty web application from template

ABP provides two templates to start a new project (even though you can manually create your project and get the ABP packages from NuGet, the template way is much easier). Go to www.aspnetboilerplate.com/Templates to create your application from one of two templates (one for SPA (Single-Page Application) projects, one for MPA (classic, Multi-Page Application) projects):

Creating template from ABP web site

I named my project SimpleTaskSystem and created a SPA project. It downloaded the project as a zip file. When I open the zip file, I see a ready-made solution that contains assemblies (projects) for each layer of Domain Driven Design:

Project files

The created project targets .NET Framework 4.5.1, and I advise opening it with Visual Studio 2013. The only prerequisite for running the project is to create a database. The SPA template assumes that you’re using SQL Server 2008 or later, but you can easily change it to another DBMS.

See the connection string in web.config file of the web project:

<add name="MainDb" connectionString="Server=localhost; Database=SimpleTaskSystemDb; Trusted_Connection=True;" />

You can change connection string here. I don’t change the database name, so I’m creating an empty database, named SimpleTaskSystemDb, in SQL Server:

Empty database

That’s it, your project is ready to run! Open it in VS2013 and press F5:

First run

The template consists of two pages: one is the Home page, the other is the About page. It’s localized in English and Turkish. And it’s a Single-Page Application! Try navigating between pages; you’ll see that only the contents change, the navigation menu stays fixed, and all scripts and styles are loaded only once. It’s also responsive – try changing the size of the browser.

Now, I’ll show how to change the application into a Simple Task System application, layer by layer, in the coming Part 2.

FREE Microsoft Dynamics CRM 2011 List Component for Microsoft SharePoint Server 2010

 

 

CRM2011 – SharePoint 2010 Integration? Glue CRM 2011 & Share Point 2010 together? Make CRM 2011 and Share Point 2010 converse? I wasn’t sure what to call this exactly. “Hooking together” works for me!

Now that we have a CRM 2011 instance and a Share Point site working, let’s get them connected up! Go to this website and download Microsoft Dynamics CRM 2011 List Component for Microsoft SharePoint Server 2010:

Accept the License Terms.

Extract the files to a folder (I chose C:\CRM List).

You will get a prompt “The Installation is complete.” Click OK.

Let’s go over to the Share Point Central Administration Server to install the list component we just extracted. Connect to http://localhost:48835/ (your port might be different, be aware of this). Click Manage web applications.

Click the new Share Point site, and then “General Settings” (the blue cogs).

Scroll down to Browser File Handling and choose Permissive, Click OK.

Let’s head back over to our new Share Point Site. Click Site Actions up top left, and then “Site Settings”.

Under Galleries click “Solutions”.


Click the Word “Solutions” up top (you have to click the word “Solutions”, even though it looks selected), and then click “Upload Solution”.

Select the .wsp component that we extracted wayyy back at the top of this. I used C:\CRM List as my extract folder. Click OK.

You’ll get prompted at this point; I couldn’t activate the solution on this screen (but it still needs to be done). We need to make sure some services are running in order to activate the solution. Click Close.

Head back to the Share Point Central Administration, found at http://localhost:48835.

Click System Settings –> Manage Services on this server

Click Start beside “Share Point Foundation Sandboxed Code Service”. I also started “Microsoft SharePoint Foundation Subscription Settings Service” (by accident), so that’s why that one is started.

Now to head back to our Share Point site http://localhost:39083/

Under Galleries click “Solutions”.

Click Solutions again, select crmlistcomponent, and then click “Activate” up top. Activate is now un-greyed out! Click Activate!

The solution has now been activated! Hurray!

There seems to be some confusion about whether or not you need to run a PowerShell script to enable activation of Share Point 2010 solutions (AllowHtcExtn). According to what I’ve read, you would need to run this if Share Point 2010 is running on a domain controller. I didn’t have to do this (and we’re on a domain controller), and I’ve yet to run into a problem with .htc stuff. Even in the Microsoft Dynamics CRM 2011 Readme it says:
“If you are using Microsoft SharePoint Server 2010 (On-Premises), you must add .htc extensions to the list of allowed file types:
a. Copy the AllowHtcExtn.ps1 script file to the server that is running Microsoft SharePoint Server 2010.
b. In the Windows PowerShell window or in the SharePoint Management Console, run the command: AllowHtcExtn.ps1 .
Example: AllowHtcExtn.ps1 http://servername”

Some people say the script works for them, and some say that using just the blog method (what we did) works.
The Share Point configuration is complete at this point. You probably want to take a snapshot and name it “After Sharepoint Configuration”. Let’s head over to our CRM server (localhost:85).

In CRM Click Settings –> Document Management –> Document Management Settings

Select the entities that you want to have documents enabled on. This will create a “Documents” area when you open an instance of the entity. I’ll just leave the defaults for now. At the bottom punch in your Share Point site that you’ve created and click Next. This is the Share Point server we installed the list component on. You’re not allowed to use localhost:port, just use the computer name:port like below.

Don’t select the box, otherwise it will relate the files to those entities. Without checking the box you will end up with something like Site/EntityName/Record Name (which is what I want, especially if you’re using custom entities). Click Next.

If “Libraries are being created in the path”, click Next.

Everything should “Succeed”, Click Finish.

Let’s test this bad boy out now.

Create a new account called “Test”.

Click Save! Click “Documents” on the left side. You’ll get a prompt saying that the folder (Test) is being created under “Account”. Click OK.

Click Add.

Now you’ll probably get these errors! /crmgrid/scripts/DialogContainer.js and 403 FORBIDDEN! Depressing. The only real information on this error was here: . It wasn’t very clear, but I stumbled through it. It seems that CRM 2011 doesn’t enjoy being called localhost. Let’s fix these up.

The fix for this was to run inetmgr –> Click Microsoft Dynamics CRM –> click Stop

Click “Bindings…” on the right side. Click “Edit” on the items that show “localhost” and change it to my machine name: “win-b80icqrvluf”. This is so it has a “real” name to connect to.

Before:

After:

Now click “Start” on the right side.

Head back over to the CRM (http://win-b80icqrvluf:85/CRMTest/main.aspx) make sure to use the host name, as it might give you the error if you use localhost. Open your Test Account again.

Click Documents –> Add, you should now see this popup (it can take a while to load for the first time on the VM). If you continue to get the error, stop both CRM 2011 and Share Point 2010 servers and restart them. If that doesn’t work, try restarting the whole server.

Pick a file, and click OK.

The file should be uploaded to Share Point now.

Head over to Share Point at http://win-b80icqrvluf:39083 and click “All Site Content” or “Libraries”.

Click Account.

You can see that CRM has created a folder “Test” (for our record). It creates 1 folder per record. Click it to see the files associated to that record!!

The files associated to the record “Test” in Accounts.

Share Point and CRM have combined into a super awesome force of doom. But we’re still missing 1 core piece of functionality (due to not picking a port when we installed CRM).

 

 

Architecture and Practical Application – BizTalk Adapter for mySAP Business Suite

Architecture for BizTalk Adapter for mySAP Business Suite


The Microsoft BizTalk Adapter for mySAP Business Suite implements a Windows Communication Foundation (WCF) custom binding, which contains a single custom transport binding element that enables communication with an SAP system.


The SAP adapter is wrapped by the Microsoft Windows Communication Foundation (WCF) Line of Business (LOB) Adapter SDK runtime and is exposed to applications through the WCF channel architecture. The SAP adapter communicates with the SAP system through either the 64-bit or 32-bit version of the SAP Unicode RFC SDK (librfc32u.dll).

The following figure shows the end-to-end architecture for solutions that are developed by using the SAP adapter.
SAP End-to-End Architecture
Consuming the Adapter

The SAP adapter exposes the SAP system as a WCF service to client applications. Client applications exchange SOAP messages with the SAP adapter through WCF channels to perform operations and to access data on the SAP system. The preceding figure shows four ways in which the SAP adapter can be consumed.

• Through a WCF channel application that performs operations on the SAP system by using the WCF channel model to exchange SOAP messages directly with the SAP adapter. For more information about developing solutions for the SAP adapter by using WCF channel model programming, see Developing Applications by Using the WCF Channel Model.

• Through a WCF service model application that calls methods on a WCF client to perform operations on the SAP system. A WCF client models the operations exposed by the SAP adapter as .NET methods. You can use the Microsoft Windows Communication Foundation (WCF) Line of Business (LOB) Adapter SDK or the svcutil.exe tool to create a WCF client class from metadata exposed by the SAP adapter. For more information about WCF service model programming and the SAP adapter, see Developing Applications by Using the WCF Service Model.

• Through a BizTalk port that is configured to use the BizTalk WCF-Custom adapter with the SAP Binding configured as the binding for the WCF-Custom transport type in a BizTalk Server application. The BizTalk WCF-Custom adapter enables communication between a BizTalk Server application and a WCF service.

The BizTalk WCF-Custom adapter supports custom WCF bindings through its WCF-Custom transport type, which enables you to configure any WCF binding exposed to the configuration system as the binding used by the BizTalk WCF-Custom adapter. For more information about how to use the SAP adapter in BizTalk Server solutions, see Developing BizTalk Applications. BizTalk transactions are supported by the BizTalk Layered Channel binding element which can be loaded by setting a binding property on the SAP Binding.

• Through an IIS-hosted Web service. In this scenario, the SAP adapter is exposed through a WCF Service proxy, which is hosted in IIS by using one of the standard WCF HTTP bindings.

• Through the .NET Framework Data Provider for mySAP Business Suite. The Data Provider for SAP runs on top of the SAP adapter and provides an ADO.NET interface to an SAP system.

The SAP adapter and the SAP RFC library are always hosted in-process with the application or service that consumes the adapter.

The SAP Adapter and WCF

WCF presents a programming model based on the exchange of SOAP messages over channels between clients and services. These messages are sent between endpoints exposed by a communicating client and service.

An endpoint consists of an endpoint address which specifies the location at which messages are received, a binding which specifies the communication protocols used to exchange messages, and a contract which specifies the operations and data types exposed by the endpoint.

A binding consists of one or more binding elements that stack on top of each other to define how messages are exchanged with the endpoint.

 

At a minimum, a binding must specify the transport and encoding used to exchange messages with the endpoint. Message exchange between endpoints occurs over a channel stack that is composed of one or more channels. Each channel is a concrete implementation of one of the binding elements in the binding configured for the endpoint.

For more information about WCF and the WCF programming model, see “Windows Communication Foundation” at http://go.microsoft.com/fwlink/?LinkId=89726.

The Microsoft BizTalk Adapter for mySAP Business Suite exposes a WCF custom binding, the SAP Binding (Microsoft.Adapters.SAP.SAPBinding). By default, this binding contains a single custom transport binding element, the SAP Adapter Binding Element (Microsoft.Adapters.SAP.SAPAdapter), which enables operations on an SAP system. When using the SAP adapter with BizTalk Server, you can set the EnableBizTalkCompatibilityMode binding property to load a custom binding element, the BizTalk Layered Channel Binding Element, on top of the SAP Adapter Binding Element. The BizTalk Layered Channel Binding Element is implemented internally by the SAP adapter and is not exposed outside the SAP Binding.

Microsoft.Adapters.SAP.SAPBinding (the SAP Binding) and Microsoft.Adapters.SAP.SAPAdapter (the SAP Adapter Binding Element) are public classes and are also exposed to the configuration system. Because the SAP Adapter Binding Element is exposed publicly, you can build your own custom WCF bindings capable of extending the functionality of the SAP adapter. For example, you could implement a custom binding to support Enterprise Single Sign-On (SSO) in a WCF channel or a WCF service model programming solution, to aggregate database operations into a single multifunction operation, or to perform schema transformation between operations implemented by a custom application and operations on the SAP system.

The SAP adapter is built on top of the Microsoft Windows Communication Foundation (WCF) Line of Business (LOB) Adapter SDK and runs on top of the WCF LOB Adapter SDK runtime. The WCF LOB Adapter SDK provides a software framework and tooling infrastructure that the SAP adapter leverages to provide a rich set of features to users and adapter clients.

The Connection to the SAP System

The SAP adapter connects with the SAP system through the SAP Unicode RFC SDK Library (librfc32u.dll). The SAP adapter supports both the 32 bit and the 64 bit versions of the SAP RFC SDK. The SAP RFC SDK enables external programs to call ABAP functions on a SAP system.

You establish a connection to an SAP system by providing a connection URI to the SAP adapter. The SAP adapter supports the following kinds of connections to an SAP system (a minimal connection sketch follows this list):
• An application host–based connection (A), in which the SAP adapter connects directly to an SAP application server.

• A load balancing connection (B), in which the SAP adapter connects to an SAP messaging server.

• A destination-based connection (D), in which the connection to the SAP system is specified by a destination in the saprfc.ini configuration file. A, B, and R type connections are supported.

• A listener connection (R), in which the adapter receives RFCs, tRFC and IDOCs through an RFC Destination on the SAP system that is specified by a listener host, a listener gateway service, and a listener program ID, either directly in the connection URI or by an R-based destination in the saprfc.ini configuration file.
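As a rough sketch of how these pieces fit together from a plain WCF channel-model client, the following shows an application host (A) type connection; the URI values, credentials and the idea of sending a hand-built RFC message are placeholders and assumptions, so consult the adapter documentation for the exact URI parameters your landscape needs.

using System.ServiceModel;
using System.ServiceModel.Channels;
using Microsoft.Adapters.SAP;

class SapConnectionSketch
{
    static void Main()
    {
        // SAP Binding exposed by the adapter; BizTalk compatibility stays off for a plain WCF client
        SAPBinding binding = new SAPBinding();
        binding.EnableBizTalkCompatibilityMode = false;

        // Application host (A) connection URI - client, language, host and system number are placeholders
        EndpointAddress address = new EndpointAddress("sap://Client=800;lang=EN@A/sap-host/00");

        // Channel-model programming: exchange SOAP messages directly over an IRequestChannel
        ChannelFactory<IRequestChannel> factory = new ChannelFactory<IRequestChannel>(binding, address);
        factory.Credentials.UserName.UserName = "<sap-user>";
        factory.Credentials.UserName.Password = "<sap-password>";
        factory.Open();

        IRequestChannel channel = factory.CreateChannel();
        channel.Open();

        // Build and send a Message for the RFC you want to call here (omitted), then clean up
        channel.Close();
        factory.Close();
    }
}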


So – How Do I Use a Custom Web Part?


This section provides information about using a custom Web Part with Microsoft Office SharePoint Server. To use a custom Web Part, you must do the following:
1. Create a custom Web Part.

2. Deploy the custom Web Part to a SharePoint portal.

3. Configure the SharePoint portal to use the custom Web Part.

Before You Begin

Before you create a custom Web Part:
• Publish the SAP artifacts as a WCF service. For more information, see Step 1: Publish the SAP Artifacts as a WCF Service in Tutorial 1: Presenting Data from an SAP System on a SharePoint Site.

• Create an application definition file for the SAP artifacts using the Business Data Catalog in Microsoft Office SharePoint Server. For more information, see Step 2: Create an Application Definition File for the SAP Artifacts in Tutorial 1: Presenting Data from an SAP System on a SharePoint Site.

Step 1: Create a custom Web Part

To create a custom Web Part using Visual Studio, do the following:
1. Start Visual Studio, and then create a project. In the New Project dialog box, in the Project types pane, select Visual C#; in the Templates pane, select Class Library.

2. Specify a name and location for the solution. For this topic, specify CustomWebPart in the Name and Solution Name boxes. Specify a location, and then click OK.

3. Add a reference to the System.Web component to the project. Right-click the project name in Solution Explorer, and then click Add Reference. In the Add Reference dialog box, on the .NET tab, select System.Web, and then click OK. The System.Web component contains the required namespace System.Web.UI.WebControls.WebParts.

4. Add the code required for your scenario to the project; a minimal skeleton is shown after this list. For the code sample that is relevant to a specific issue, see "Issues Involving Custom Web Parts" in Considerations While Using the SAP Adapter with Microsoft Office SharePoint Server.

5. Build the project. On a successful build, a .dll file, CustomWebPart.dll, is generated in the /bin/Debug folder.

6. Only for 64-bit computers: Sign the CustomWebPart.dll file with a strong name before performing the following steps. Otherwise, you will not be able to import, and hence use, CustomWebPart.dll in the SharePoint portal in "Step 3: Configure the SharePoint portal to use the custom Web Part." For information about how to sign an assembly with a strong name, see http://go.microsoft.com/fwlink/?LinkId=197171.
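A minimal skeleton for such a Web Part is sketched below; the namespace, class name, and placeholder label are illustrative, and the real rendering logic comes from the code samples referenced above.

    // Minimal, illustrative Web Part skeleton (namespace and class name are placeholders).
    using System.Web.UI.WebControls;
    using System.Web.UI.WebControls.WebParts;

    namespace CustomWebPart
    {
        public class SampleWebPart : WebPart
        {
            protected override void CreateChildControls()
            {
                // Replace this placeholder with the controls and logic for your scenario,
                // for example controls that render the SAP data surfaced through the Business Data Catalog.
                Controls.Add(new Label { Text = "Custom Web Part placeholder" });
            }
        }
    }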

Step 2: Deploy the custom Web Part to a SharePoint Portal

    You must do the following to make the CustomWebPart.dll file (custom Web Part) that is created in “Step 1: Create a custom Web Part” of this topic usable on the SharePoint portal:
    • Copy the CustomWebPart.dll file to the bin folder of the SharePoint Portal: Microsoft Office SharePoint Server creates portals under the :\Inetpub\wwwroot\wss\VirtualDirectories folder. A folder is created for each portal, and can be identified with the port number. You must copy the CustomWebPart.dll file created in “Step 1: Create a custom Web Part” of this topic to the :\Inetpub\wwwroot\wss\VirtualDirectories\bin folder. For example, if the port number of your SharePoint portal is 13614, you must copy the CustomWebPart.dll file to the :\Inetpub\wwwroot\wss\VirtualDirectories\13614\bin folder.

Tip

    Another way to find the folder location of your SharePoint portal is by using the Internet Information Services (IIS) Manager window (Start > Run > inetmgr). Locate your SharePoint portal in the Internet Information Services (IIS) Manager window ([computer_name] > Web Sites > [Portal-Name]), right-click, and then click Properties in the shortcut menu. In the properties dialog box of the SharePoint portal, click the Home Directory tab, and then select the Local path box.

• Add the safe control entry in the web.config file: Because the CustomWebPart.dll file will be used on different computers and by multiple users, you must declare the file as "safe." To do so, open the web.config file located in the SharePoint portal folder at :\Inetpub\wwwroot\wss\VirtualDirectories. Under the SafeControls section of the web.config file, add a SafeControl entry for the CustomWebPart assembly:

◦ On a 32-bit computer: add a SafeControl element that references the CustomWebPart assembly by name and sets the Namespace, TypeName, and Safe="True" attributes.

◦ On a 64-bit computer: add the same SafeControl element, but reference the assembly by its full strong name (including the Version, Culture, and PublicKeyToken of the assembly signed in Step 1).

    Save the web.config file, and then close it.

    Step 3: Configure the SharePoint portal to use the custom Web Part

    You need to add the custom Web Part to the Microsoft Office SharePoint Server Web Part Gallery, so that you can use it on your SharePoint portal. To do so:

1. Start SharePoint 3.0 Central Administration. Click Start, point to All Programs, point to Microsoft Office Server, and then click SharePoint 3.0 Central Administration.

2. In the left navigation pane, click the name of the Shared Service Provider (SSP) to which you want to add the custom Web Part.

3. On the Shared Services Administration page, in the upper-right corner, click Site Actions, and then click Create.

4. On the Site Settings page, under the Galleries column, click Web Parts.

5. On the Web Part Gallery page, click New to add the custom Web Part to the gallery. At this point, the custom Web Part is not yet listed on the Web Part Gallery page.

6. On the New Web Parts page, locate CustomWebPart (the name of the custom Web Part) in the list, select the check box to its left, and then click Populate Gallery at the top of the page. This adds the CustomWebPart entry to the Web Part Gallery page.

7. You can now use the custom Web Part (CustomWebPart) to create Web Parts in your SharePoint portal. The custom Web Part will appear under the Miscellaneous section on the Add Web Parts page.

     


    BizTalk Adapter for mySAP Business Suite and the WCF LOB Adapter SDK


    The Microsoft BizTalk Adapter for mySAP Business Suite implements a set of core components that leverage functionality provided by the Microsoft Windows Communication Foundation (WCF) Line of Business (LOB) Adapter SDK and provide connectivity to the SAP system through the SAP Unicode RFC SDK Library (librfc32u.dll).

    The WCF LOB Adapter SDK serves as the software layer through which the SAP adapter interfaces with Windows Communication Foundation (WCF), and the RFC SDK serves as the layer through which the SAP adapter interfaces with the SAP system.

    The following figure shows the relationships between the internal components of the SAP adapter and between these components and the RFC SDK.

    The relationship of internal adapter components


     

    SAP Weekend : Part 1 – ERPConnect Services for SharePoint 2010 (ECS)

This weekend was spent completing my new “List Search Web Part” and also two free Web Parts that are included in the “List Web Part Pack” – more about this in a future blog post.


In between, the “SAP bug” bit me again, and I decided to write a blog post series on the various adapters I have used in SharePoint and SAP integration projects, giving a basic run-down of how, and with which technologies, each adapter connects the two systems.

ERPConnect was first on the list…

     

    Yes, I can hear the grumblings of those of us who have worked with SAP and SharePoint  Integration and the ERPConnect adapter before 🙂

For starters, you need a SAP Developer Key to be allowed to use the SAP web service wizard, as well as the required SAP authorizations. In other cases, IT operations may not allow any modification to the SAP environment at all, even if it is limited to the fully automatic generation and activation of the BAPI web service(s).

Another reason, from a system architecture viewpoint, is that single BAPI and/or RFC calls may be too fine-grained. You actually want to perform a ‘business transaction’ consisting of multiple method invocations that must be treated as a Logical Unit of Work (LUW). SAP has introduced the concept of SAP Enterprise Services for this and has delivered a first set of them. That set is far from complete yet, and SAP will augment it in the coming years.

SharePoint 2010 provides developers with the capability to integrate external data sources, like SAP business data, into the SharePoint system via Business Connectivity Services (BCS). The concept of BCS is based on entities and associated stereotyped operations, which is a perfect fit for flat, simply structured data sets like SAP tables.

Another, and far more flexible, option for using SAP data in SharePoint is ERPConnect Services for SharePoint 2010 (ECS). The product suite consists of three components: the ERPConnect Services runtime, the BCS Connector application, and Xtract PPS for PerformancePoint Services.

The runtime provides a Service Application that integrates with the new service architecture of SharePoint 2010. It offers a secure middle-tier layer for integrating different kinds of SAP objects, like tables and function modules, into your SharePoint applications.

The BCS Connector application allows developers to create BDC models for the BCS services without programming knowledge. You may export the BDC models created by the BCS Connector to Visual Studio 2010 for further customization.

     

The Xtract PPS component offers a SAP data source provider for the PerformancePoint Services of SharePoint 2010.

This article gives you an overview of the ERPConnect Services runtime and shows how you can create and incorporate business data from SAP in different SharePoint application types, like Web Parts, Application Pages, or Silverlight modules. It does not introduce the other components.

    Background

This section gives you a short explanation and background of the SAP objects that can be used in ERPConnect Services. The most important objects are SAP tables and function modules. A function module is basically similar to a normal procedure in conventional programming languages. Function modules are written in ABAP, the SAP programming language, and are accessible from any other program within a SAP system. They accept import and export parameters as well as other kinds of special parameters.

     

In addition, BAPIs (Business APIs) are special function modules that are organized within the SAP Business Object Repository. In order to use function modules with the runtime, they must be marked as remote-enabled (RFC). SAP table data can also be retrieved; tables in SAP are basically relational database tables. Other SAP objects, like BW cubes or SAP Queries, can be accessed via the XtractQL query language (see below).

     

    Installation & Configuration

Installing ERPConnect Services on a SharePoint 2010 server is done by an installer and is straightforward. The SharePoint Administration service must be running on the local server (see Windows Services).

For more information, see the product documentation. After the installation has completed successfully, navigate to the Service Applications screen within the Central Administration (CA) of SharePoint:

     

Before creating your first Service Application, a Secure Store must be created, in which ERPConnect Services will save the SAP user credentials. On the settings page for the “Secure Store Service”, create a new Target Application and name the application “ERPConnect Services”. Click “Next” to define the store fields as follows:

     

Finish the creation process by clicking “Next” and defining the application administrators. Then select the application, click “Set Credentials”, and enter the SAP user credentials:

     

    Let’s go on and create a new ERPConnect Service Application!

    Click the “ERPConnect Service Application” link in the “New” menu of the Service Applications page (see also first screenshot above). This opens a dialog to define the name of the service application, the SAP connection data and the IIS application pool:

     

    Click “Create” after entering all data and you will see the following entries in the Service Applications screen:

     

    That’s it! You are now done setting up your first ERPConnect Service Application.

    Development

    The runtime functionality covers different programming demands such as generically retrievable interface functions. The service applications are managed by the Central Administration of SharePoint. The following service and function areas are provided:

    1. Executing and retrieving data directly from SAP tables
    2. Executing SAP function modules / BAPIs
    3. Executing XtractQL query statements

The next sections show how to use these service and function areas and how to access different SAP objects from within your custom SharePoint applications using ERPConnect Services. The runtime can be used in applications within the SharePoint context, like Web Parts or Application Pages.

In order to do so, you need to reference the assembly ERPConnectServices.Server.Common.dll in the project. Before you can access data from the SAP system, you must create an instance of the ERPConnectServiceClient class. This is the gateway to all SAP objects and to the generic API of the runtime overall. In the SharePoint context there are two options to create a client object instance:

// Option #1: create the client directly (usable within the SharePoint context)
ERPConnectServiceClient client = new ERPConnectServiceClient();

// Option #2: resolve the service application proxy from the current SPServiceContext
// and ask it for a client instance
ERPConnectServiceApplicationProxy proxy = SPServiceContext.Current.GetDefaultProxy(
   typeof(ERPConnectServiceApplicationProxy)) as ERPConnectServiceApplicationProxy;
ERPConnectServiceClient client = proxy.GetClient();

    For more details on using ECS in Silverlight or desktop applications see the specific sections below.

    Querying Tables

Querying and retrieving table data is a common task for developers. The runtime allows retrieving data directly from SAP tables. The ERPConnectServiceClient class provides a method called ExecuteTableQuery, with two overloads, that queries SAP tables in a simple way.

The method also supports passing miscellaneous parameters, like a row count and skip value, a custom function name, a where clause, an order clause, and a list of fields to return. These parameters are defined using an ExecuteTableQuerySettings instance.

// Sample 1: read records from the SAP table T001 without any restrictions
DataTable dt = client.ExecuteTableQuery("T001");

…

// Sample 1 with settings: top 100 records, restricted by a where clause and a field list
ExecuteTableQuerySettings settings = new ExecuteTableQuerySettings {
  RowCount = 100,
  WhereClause = "ORT01 = 'Paris' AND LAND1 = 'FR'",
  Fields = new ERPCollection<string> { "BUKRS", "BUTXT", "ORT01", "LAND1" }
};

DataTable dt = client.ExecuteTableQuery("T001", settings);

…

// Sample 2: top 10 records of MAKT for material 60-100C, ordered by SPRAS (descending)
DataTable dt = client.ExecuteTableQuery("MAKT",
            new ExecuteTableQuerySettings {
                RowCount = 10,
                WhereClause = "MATNR = '60-100C'",
                OrderClause = "SPRAS DESC"
            });

The first query, using the settings object, returns at most the top 100 records from the SAP table T001 where the field ORT01 equals Paris and the field LAND1 equals FR (France); the result set contains only the fields BUKRS, BUTXT, ORT01, and LAND1.

The second query returns the top ten records of the SAP table MAKT where the field MATNR equals the material number 60-100C. The result set is ordered by the field SPRAS in descending order.

    Executing Function Modules

In addition to querying SAP tables, the runtime API can execute SAP function modules (BAPIs). Function modules must be marked as remote-enabled (RFC) within SAP.

The ERPConnectServiceClient class provides a method called CreateFunction to create a metadata structure for the function module. The method returns an instance of the data structure ERPFunction. This object instance contains all parameter types (import, export, changing, and tables) that can be used with function modules.

In the sample below we call the function SD_RFC_CUSTOMER_GET and pass a name pattern (T*) for the export parameter NAME1. Then we call the Execute method on the ERPFunction instance. Once the method has executed, the data structure is updated. The function returns all matching customers in the table CUSTOMER_T.

    ERPFunction function = client.CreateFunction("SD_RFC_CUSTOMER_GET");
    function.Exports["NAME1"].ParamValue = "T*";
    function.Execute();
    
    foreach(ERPStructure row in function.Tables["CUSTOMER_T"])
      Console.WriteLine(row["NAME1"] + ", " + row["ORT01"]);

The following code shows an additional sample. Before we can execute this function module, we need to define a table with HR data as an input parameter. The parameters you need, and the values the function module returns, depend on the implementation of the function module.

ERPFunction function = client.CreateFunction("BAPI_CATIMESHEETMGR_INSERT");
function.Exports["PROFILE"].ParamValue = "TEST";
function.Exports["TESTRUN"].ParamValue = "X";    // test run only, no posting

// Define one time sheet record as input
ERPTable records = function.Tables["CATSRECORDS_IN"];
ERPStructure r1 = records.AddRow();
r1["EMPLOYEENUMBER"] = "100096";
r1["WORKDATE"] = "20110704";
r1["ABS_ATT_TYPE"] = "0001";
r1["CATSHOURS"] = (decimal)8.0;
r1["UNIT"] = "H";

function.Execute();

// Check the returned messages
ERPTable ret = function.Tables["RETURN"];

foreach(var i in ret)
  Console.WriteLine("{0} - {1}", i["TYPE"], i["MESSAGE"]);

    Executing XtractQL Query Statements

The ECS runtime offers a SAP query language called XtractQL. The XtractQL query language, also known as XQL, consists of ABAP and SQL syntax elements. XtractQL allows querying SAP tables, BW cubes, and SAP Queries, as well as executing function modules.

It’s possible to return metadata for the objects, and MDX statements can also be executed with XQL. All XQL queries return a data table object as the result set. When executing function modules, the caller must define the returning table (see the sample below – INTO @RETVAL). XQL is very useful in situations where you need to handle dynamic statements. The following examples show typical XQL statements.

    SELECT TOP 5 * FROM T001W WHERE FABKL = 'US'

    This query selects the top 5 records of the SAP table T001W where the field FABKL equals the value US.

SELECT * FROM MARA WITH-OPTIONS(CUSTOMFUNCTIONNAME = 'Z_XTRACT_IS_TABLE')

This query selects all records of the SAP table MARA, using the custom function module Z_XTRACT_IS_TABLE to read the table data.

    SELECT MAKTX AS [ShortDesc], MANDT, SPRAS AS Language FROM MAKT

This query selects all records of the SAP table MAKT. The result set will contain three fields named ShortDesc, MANDT, and Language.

     

    EXECUTE FUNCTION 'SD_RFC_CUSTOMER_GET'
       EXPORTS KUNNR='0000003340'
       TABLES CUSTOMER_T INTO @RETVAL;

    This query executes the SAP function module SD_RFC_CUSTOMER_GET and returns as result the table CUSTOMER_T (defined as @RETVAL).

    DESCRIBE FUNCTION 'SD_RFC_CUSTOMER_GET' GET EXPORTS

This query returns metadata about the export parameters of the function module SD_RFC_CUSTOMER_GET.

    SELECT TOP 30 LIPS-LFIMG, LIPS-MATNR, TEXT_LIKP_KUNNR AS CustomerID
       FROM QUERY 'S|ZTHEO02|ZLIKP'
   WHERE SP$00002 BT '0080011000' AND '0080011999'

This statement executes the SAP Query “S|ZTHEO02|ZLIKP” (the name includes the workspace, user group, and query name). As you can see, XtractQL extends the SQL syntax with ABAP- and SAP-specific syntax elements. This way you can address fields using the LIPS-MATNR format and use SAP-style where clauses like “SP$00002 BT '0080011000' AND '0080011999'”.

ERPConnect Services provides a little helper tool, the XtractQL Explorer (see screenshot below), to learn more about the query language and to test XQL queries. You can use this tool independently of SharePoint, but you need access to a SAP system.

To find out more about the XtractQL language syntax, see the product manual.

    Silverlight And Desktop Applications

So far, all samples have used the assembly ERPConnectServices.Server.Common.dll as the project reference, and all code snippets shown run within the SharePoint context, e.g. in a Web Part.

    ERPConnect Services also provides client libraries for Silverlight and desktop applications:

• ERPConnectServices.Client.dll for desktop applications
• ERPConnectServices.Client.Silverlight.dll for Silverlight applications

You need to add the appropriate reference depending on which kind of project you are implementing.

In Silverlight, the implementation and design pattern is a little bit more complicated, since all web services are called asynchronously. It is also not possible to use the DataTable class; it is simply not implemented for Silverlight.

The runtime provides a similar class called ERPDataTable, which the API uses in this case. The ERPConnectServiceClient class for Silverlight provides the method ExecuteTableQueryAsync and an event called ExecuteTableQueryCompleted as the callback.

    public event EventHandler<ExecuteTableQueryCompletedEventArgs> ExecuteTableQueryCompleted;
    
    public void ExecuteTableQueryAsync(string tableName)
    public void ExecuteTableQueryAsync(string tableName, ExecuteTableQuerySettings settings)

    The following code sample shows a simple query of the SAP table T001 within a Silverlight client.

First of all, an instance of ERPConnectServiceClient is created using the URI of the ERPConnectService.svc; then a delegate is attached to handle the completion callback. Next, the query is executed with a RowCount of 10 so that only the top 10 records are returned in the result set.

Once the result is returned, the data is bound to a DataGrid control (see screenshot below) within the callback method.

     

void OnGetTableDataButtonClick(object sender, RoutedEventArgs e)
{
  // Create the client using the URI of the ERPConnectService.svc endpoint.
  ERPConnectServiceClient client = new ERPConnectServiceClient(
    new Uri("http://<SERVERNAME>/_vti_bin/ERPConnectService.svc"));

  // Attach the completion callback and execute the query asynchronously,
  // returning only the top 10 records.
  client.ExecuteTableQueryCompleted += OnExecuteTableQueryCompleted;
  client.ExecuteTableQueryAsync("T001",
    new ExecuteTableQuerySettings { RowCount = 10 });
}

void OnExecuteTableQueryCompleted(object sender, ExecuteTableQueryCompletedEventArgs e)
{
  if(e.Error != null)
    MessageBox.Show(e.Error.Message);
  else
  {
    // Group the result by city (ORT01) and bind it to the DataGrid control.
    e.Table.View.GroupDescriptions.Add(new PropertyGroupDescription("ORT01"));
    TableGrid.ItemsSource = e.Table.View;
  }
}

    The screenshot below shows the XAML of the Silverlight page:

    The final result can be seen below:

    ECS Designer

ERPConnect Services includes a Visual Studio 2010 plug-in, the ECS Designer, that allows developers to visually design SAP interfaces. It works similarly to the LINQ to SAP designer I wrote about a while ago; see the article at CodeProject: LINQ to SAP.

The ECS Designer is not automatically installed when you install the product; you need to run its installation program manually. The setup adds a new project item type with the file extension .ecs to Visual Studio 2010 and links it with the designer. The needed references are added automatically after adding an ECS project item.

    The designer generates source code to integrate with the ERPConnect Services runtime after the project item is saved. The generated context class contains methods and sub-classes that represent the defined SAP objects (see screenshots below).

     

Before you access the SAP system for the first time, you will be asked to enter the connection data. You may also load the connection data from the SharePoint system. The designer GUI is shown in the screenshots below:

     

The screenshot above, for instance, shows the tables dialog. After you click the Add (+) button in the main designer screen and search for a SAP table in the search dialog, the designer opens the tables dialog.

     

    In this dialog you can change the name of the generated class, the class modifier and all needed properties (fields) the final class should contain.

     

    To preview your selection press the Preview button. The next screenshot shows the automatically generated classes in the file named EC1.Designer.cs:

     

Using the generated code is simple. The project type we are using for this sample is a standard console application; therefore, the designer references ERPConnectServices.Client.dll for desktop applications.

    Since we are not within the SharePoint context, we have to define the URI of the SharePoint system by passing this value into the constructor of the ERPConnectServicesContext class.

The designer has generated a class MAKT and an access property MAKTList on the context class for the table MAKT. The type of this property, ERPTableQuery&lt;MAKT&gt;, is a LINQ-queryable data type.

     

This means you can use LINQ statements to define the underlying query. Internally, the ERPTableQuery&lt;T&gt; type translates your LINQ query into a call to ExecuteTableQuery.
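For illustration, a query against the generated context might look like the sketch below. The context constructor argument (the SharePoint URI) and the MAKTList property follow from the description above, while the MAKT field properties used here (SPRAS, MAKTX) and the exact set of supported LINQ operators are assumptions.

    // Illustrative sketch only (see the assumptions noted above).
    ERPConnectServicesContext context =
        new ERPConnectServicesContext("http://<SERVERNAME>");   // URI of the SharePoint system

    // LINQ query over the generated MAKTList property (ERPTableQuery<MAKT>);
    // internally this is translated into a call to ExecuteTableQuery.
    var texts = from m in context.MAKTList
                where m.SPRAS == "E"    // SAP language key (assumed single-character key for English)
                select m;

    foreach (MAKT makt in texts)
        Console.WriteLine(makt.MAKTX);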

     

    That’s it!

     

    Advanced Techniques

    There are situations when you have to use the exact same SAP connection while calling a series of function modules in order to receive the correct result. Let’s take the following code:

    ERPConnectServiceClient client = new ERPConnectServiceClient();
    
    using(client.BeginConnectionScope())
    {
      ERPFunction f = client.CreateFunction("BAPI_GOODSMVT_CREATE");
    
      ERPStructure s = f.Exports["GOODSMVT_HEADER"].ToStructure();
      s["PSTNG_DATE"] = "20110609"; // Posting Date in the Document
      s["PR_UNAME"] = "BAEURLE";    // UserName
      s["HEADER_TXT"] = "XXX";      // HeaderText
      s["DOC_DATE"] = "20110609";   // Document Date in Document
    
      f.Exports["GOODSMVT_CODE"].ToStructure()["GM_CODE"] = "01";
    
      ERPStructure r = f.Tables["GOODSMVT_ITEM"].AddRow();
      r["PLANT"] = "1000";          // Plant
      r["PO_NUMBER"] = "4500017210"; // Purchase Order Number
      r["PO_ITEM"] = "010";      // Item Number of Purchasing Document 
      r["ENTRY_QNT"] = 1;          // Quantity in Unit of Entry
      r["MOVE_TYPE"] = "101";        // Movement Type
      r["MVT_IND"] = "B";            // Movement Indicator
      r["STGE_LOC"] = "0001";        // Storage Location
    
      f.Execute();
    
      string matDocument = f.Imports["MATERIALDOCUMENT"].ParamValue as string;
      string matDocumentYear = f.Imports["MATDOCUMENTYEAR"].ParamValue as string;
    
      ERPTable ret = f.Tables["RETURN"]; //.ToADOTable();
    
      foreach(var i in ret)
        Console.WriteLine("{0} - {1}", i["TYPE"], i["MESSAGE"]);
    
      ERPFunction fCommit = client.CreateFunction("BAPI_TRANSACTION_COMMIT");
      fCommit.Exports["WAIT"].ParamValue = "X";
      fCommit.Execute();
    }

In this sample we create a goods receipt for a goods movement with BAPI_GOODSMVT_CREATE. The final call to BAPI_TRANSACTION_COMMIT will only work if, under the hood, the system is using the same connection object.

     

The runtime does not provide direct access to the underlying SAP connection, but the library offers a mechanism called connection scoping. You may create a new connection scope with the client library, telling ECS to use the same SAP connection until you close the connection scope. Within the connection scope, every library call uses the same SAP connection.

In order to create a new connection scope, you need to call the BeginConnectionScope method of the ERPConnectServiceClient class. The method returns an IDisposable object, which can be used with the C# using statement to end the connection scope automatically. Alternatively, you may call the EndConnectionScope method explicitly.
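A minimal sketch of the explicit variant (assuming EndConnectionScope takes no arguments) could look like this:

    // Illustrative sketch only: managing the connection scope explicitly
    // instead of wrapping it in a using statement.
    ERPConnectServiceClient client = new ERPConnectServiceClient();

    client.BeginConnectionScope();
    try
    {
      // ... create and execute the function modules that must share one SAP connection,
      //     e.g. BAPI_GOODSMVT_CREATE followed by BAPI_TRANSACTION_COMMIT as shown above ...
    }
    finally
    {
      // Close the scope so the shared SAP connection is released.
      client.EndConnectionScope();
    }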

It is also possible to use function modules with nested structures as parameters; this is a special SAP construct. The goods receipt sample above uses a nested structure for the export parameter GOODSMVT_CODE. For more detailed information about nested structures and tables, see the product documentation.

    Building solutions on Duet Enterprise – What does this mean exactly?

Any kind of enterprise application can be built on top of Duet Enterprise, e.g. to support particular business activities for Product Lifecycle Management (PLM), Supply Chain Management (SCM), Customer Relationship Management (CRM), and so on.

However, the idea here is not to rebuild application interfaces that already exist. Instead, through Duet Enterprise we provide a set of application building blocks from which new kinds of solutions can be composed. We expect that these new kinds of solutions will enable collaboration, communication, or content management in Office and SharePoint as an extension of the core SAP business processes.

    The consumers of these solutions would be information workers who need access to SAP information and services as they perform their business activities, or who participate in business processes managed in SAP, but are not typically power users of the SAP interfaces.


The building blocks that we will provide through Duet include both functional building blocks (e.g. application components for SAP data models like Customer, Product, and Employee) and infrastructural building blocks (e.g. for reporting, workflow, and collaboration).

    These infrastructural building blocks are what Christian described as ‘Ready To Use’ capabilities in his blog posting earlier.

    So an example of a solution built on Duet Enterprise might be an extension to inventory management processes in SAP to allow ad-hoc collaboration and reporting in SharePoint around alerts and notifications raised from a warehouse or a factory floor.

However, while Duet Enterprise provides a new set of building blocks, it does not add a new set of tools. Solution builders can leverage existing skill sets and use standard tools like Visual Studio, SharePoint Designer, and the ABAP Workbench. They would use these tools to compose these building blocks into new kinds of solutions that support a particular type of business activity.

    This solution would typically combine ad-hoc collaboration in SharePoint with structured process in SAP, and bring in contextual business data and reports to support decision making for that activity.

Let us take a specific example. In today’s enterprises there are typically virtual teams that service key customer accounts. A particular account v-team would include employees who report into different organizational units, e.g. Sales, Marketing, and Support, and who work out of different geographic locations. While organizationally separate, these v-teams need a way to collaborate among themselves and to manage their working activities.

Out of the box, Duet Enterprise will ship a set of composites, one of which is the customer collaboration workspace for account teams. This will enable any person with access to account information in SAP to browse the list of customer accounts within SharePoint and, for any account, to request access to a collaboration workspace.

    This is a SharePoint site that gets auto-provisioned by the Duet Foundation the first time that the workspace is requested by a member of the account v-team, using the packaged composite that ships with Duet Enterprise. The site comes pre-wired with connections to SAP information and services.

For example, the account v-team members can view and edit account-related information that is managed in SAP, as well as view reports related to the account, and can take for granted that the Duet Foundation provides the integrated security and administrative support necessary for such activities.

    Now consider the design time experience to support the flow described above. The composite for the customer collaboration workspace which we provide in Duet Enterprise is a collection of building blocks that have been assembled together in a particular way. This provides a starting point for further customization and solution building.

For example, there is a set of web services that are exposed from within the Duet Enterprise SAP add-on (e.g. Customer, Sales Contact, and so on), there is a set of External Content Types provided for SharePoint (that map to the SAP web services), and there is a SharePoint site definition which provides a template for the customer collaboration workspaces that are auto-provisioned for v-team members.

In addition, there are SharePoint Features for the ready-to-use capabilities (e.g. Related Reports) that can be stapled to the site definition. So a solution builder can customize the composite by extending the web services and the external content types, by adding new SharePoint Web Parts or Features if needed, and by customizing the site definition files for the workspace, or even by creating entirely new site definitions.

    Finally the solution builder leverages hooks in the Duet Foundation to plug in these new or customized composites.