Category Archives: .Net 4.5

How To : Create a Re-Usable News Page Layout using Content Type in SharePoint 2013

Introduction

In a recent project I was asked to consult on, the team needed to create sub sites for news and events.

Developing for re-usability in SharePoint is something I find lacking in many development teams.

Below I outline the solution I worked out for the project, which is now also a template the team can use in any similar project.

I will explain not only how to build the page layout step by step, but also how to make it the default page layout of a publishing sub site.

After that, I will create a content query in the root site to preview the news articles.

Finally, I will use variations to create a similar publishing sub site in other languages.

  1. Step by step creation of News Page Layout using Content Type in SharePoint 2013.
  2. How to create a publishing sub site for news, use variations to replicate the site in other languages, and finally make the previous page layout the default page layout of the sub site.

Firstly

  1. Open Visual Studio 2013 and create a new project of type SharePoint Solutions…“SharePoint 2013 Empty Project”.
    Create new SharePoint 2013 empty project
  2. We will deploy our solution as a farm solution to the local farm on our development machine.
    Note: Make sure that the site is a publishing site to be able to proceed.

    Deploy the SharePoint site as a farm solution
  3. Our solution will look like the picture below; we will add three folders: “SiteColumns”, “ContentTypes” and “PageLayouts”.
    SharePoint solution items
  4. Start by adding a new item to “SiteColumns” folder.
    Adding new item to SharePoint solution
  5. After adding a new site column item and renaming it, add the columns the news layout needs: NewsTitle, NewsBody, NewsBrief, NewsDate and NewsImage.
    Adding new item of type site column to the solution

    Then add the fields below. Note that I use resources in the DisplayName and Group attributes.

     <Field
     ID="{9fd593c1-75d6-4c23-8ce1-4e5de0d97545}"
     Name="NewsTitle"
     DisplayName="$Resources:SPWorld_News,NewsTitle;"
     Type="Text"
     Required="TRUE"
     Group="$Resources:SPWorld_News,NewsGroup;">
     </Field>
     <Field
     ID="{fcd9f32e-e2e0-4d00-8793-cfd2abf8ef4d}"
     Name="NewsBrief"
     DisplayName="$Resources:SPWorld_News,NewsBrief;"
     Type="Note"
     Required="FALSE"
     Group="$Resources:SPWorld_News,NewsGroup;">
     </Field>
     <Field
     ID="{FF268335-35E7-4306-B60F-E3666E5DDC07}"
     Name="NewsBody"
     DisplayName="$Resources:SPWorld_News,NewsBody;"
     Type="HTML"
     Required="TRUE"
     RichText="TRUE"
     RichTextMode="FullHtml"
     Group="$Resources:SPWorld_News,NewsGroup;">
     </Field>
     <Field
     ID="{FCA0BBA0-870C-4D42-A34A-41A69749F963}"
     Name="NewsDate"
     DisplayName="$Resources:SPWorld_News,NewsDate;"
     Type="DateTime"
     Required="TRUE"
     Group="$Resources:SPWorld_News,NewsGroup;">
     </Field>
     <Field
     ID="{8218A8D9-912C-47E7-AAD2-12AA10B42BE3}"
     Name="NewsImage"
     DisplayName="$Resources:SPWorld_News,NewsImage;"
     Required="FALSE"
     Type="Image"
     RichText="TRUE"
     RichTextMode="ThemeHtml"
     Group="$Resources:SPWorld_News,NewsGroup;">
     </Field>

  6. Create the content type: add a new Content Type item to the “ContentTypes” folder.
    Adding new item of type Content Type to SharePoint solution
  7. Make sure to select “Page” as the base of the content type.
    Specifying the base type of the content type
  8. Open the content type and add our new columns to it.
    Adding columns to the content type
  9. Open the elements file of the content type and make sure it looks like the code below. Note: we use resources in the Name, Description and Group of the content type.
    <!-- Parent ContentType:
    Page (0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF39) -->
     <ContentType
     ID="0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF39007A5224C9C2804A46B028C4F78283A2CB"
     Name="$Resources:SPWorld_News,NewsContentType;"
     Group="$Resources:SPWorld_News,NewsGroup;"
     Description="$Resources:SPWorld_News,NewsContentTypeDesc;"
     Inherits="TRUE" Version="0">
     <FieldRefs>
     <FieldRef ID="{9fd593c1-75d6-4c23-8ce1-4e5de0d97545}"
     DisplayName="$Resources:SPWorld_News,NewsTitle;" Required="TRUE" Name="NewsTitle" />
     <FieldRef ID="{fcd9f32e-e2e0-4d00-8793-cfd2abf8ef4d}"
     DisplayName="$Resources:SPWorld_News,NewsBrief;" Required="FALSE" Name="NewsBrief" />
     <FieldRef ID="{FF268335-35E7-4306-B60F-E3666E5DDC07}"
     DisplayName="$Resources:SPWorld_News,NewsBody;" Required="TRUE" Name="NewsBody" />
     <FieldRef ID="{FCA0BBA0-870C-4D42-A34A-41A69749F963}"
     DisplayName="$Resources:SPWorld_News,NewsDate;" Required="TRUE" Name="NewsDate" />
     <FieldRef ID="{8218A8D9-912C-47E7-AAD2-12AA10B42BE3}"
     DisplayName="$Resources:SPWorld_News,NewsImage;" Required="FALSE" Name="NewsImage" />
     </FieldRefs>
     </ContentType>
  10. Add a new Module to the “PageLayouts” folder. Inside it we will find a sample.txt file; rename it to “NewsPageLayout.aspx”.
    Adding new module to SharePoint solution.
  11. Add the code below to this “NewsPageLayout.aspx”.
     <%@ Page language="C#" Inherits="Microsoft.SharePoint.Publishing.PublishingLayoutPage,
    Microsoft.SharePoint.Publishing, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
     <%@ Register Tagprefix="SharePointWebControls" Namespace="Microsoft.SharePoint.WebControls"
     Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
     <%@ Register Tagprefix="WebPartPages" Namespace="Microsoft.SharePoint.WebPartPages"
     Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
     <%@ Register Tagprefix="PublishingWebControls" Namespace="Microsoft.SharePoint.Publishing.WebControls"
     Assembly="Microsoft.SharePoint.Publishing, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
     <%@ Register Tagprefix="PublishingNavigation" Namespace="Microsoft.SharePoint.Publishing.Navigation"
     Assembly="Microsoft.SharePoint.Publishing, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
    
     <asp:Content ContentPlaceholderID="PlaceHolderPageTitle" runat="server">
     <SharePointWebControls:FieldValue id="FieldValue1" FieldName="Title" runat="server"/>
     </asp:Content>
     <asp:Content ContentPlaceholderID="PlaceHolderMain" runat="server">
    
     <H1><SharePointWebControls:TextField ID="NewsTitle"
     FieldName="9fd593c1-75d6-4c23-8ce1-4e5de0d97545" runat="server">
     </SharePointWebControls:TextField></H1>
     <p><PublishingWebControls:RichHtmlField ID="NewsBody"
     FieldName="FF268335-35E7-4306-B60F-E3666E5DDC07" runat="server">
     </PublishingWebControls:RichHtmlField></p>
     <p><SharePointWebControls:NoteField ID="NewsBrief"
     FieldName="fcd9f32e-e2e0-4d00-8793-cfd2abf8ef4d" runat="server">
     </SharePointWebControls:NoteField></p>
     <p><SharePointWebControls:DateTimeField ID="NewsDate"
     FieldName="FCA0BBA0-870C-4D42-A34A-41A69749F963" runat="server">
     </SharePointWebControls:DateTimeField></p>
     <p><PublishingWebControls:RichImageField ID="NewsImage"
     FieldName="8218A8D9-912C-47E7-AAD2-12AA10B42BE3" runat="server">
     </PublishingWebControls:RichImageField></p>
    
     </asp:Content>
  12. Add the following code to the elements file of the “NewsPageLayout” module.
     <Module Name="NewsPageLayout" 
    Url="_catalogs/masterpage" List="116" >
     <File Path="NewsPageLayout\NewsPageLayout.aspx" Url="NewsPageLayout.aspx"
     Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE" 
     ReplaceContent="TRUE" Level="Published" >
     <Property Name="Title" Value="$Resources:SPWorld_News,NewsPageLayout;" />
     <Property Name="MasterPageDescription" Value="$Resources:SPWorld_News,NewsPageLayout;" />
     <Property Name="ContentType" Value="$Resources:cmscore,contenttype_pagelayout_name;" />
     <Property Name="PublishingPreviewImage"
     Value="~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;
     /Preview Images/WelcomeSplash.png, ~SiteCollection/_catalogs/masterpage/$Resources:
     core,Culture;/Preview Images/WelcomeSplash.png" />
     <Property Name="PublishingAssociatedContentType"
     Value=";#$Resources:SPWorld_News,NewsContentType;;
     #0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF39007A5224C9C2804A46B028C4F78283A2CB;#">
     </Property>
     </File>
     </Module>
  13. Don’t forget to add the Resources folder, then add the resource file named “SPWorld_News.resx” that we referenced in the previous steps, and add the keys below to it.
    News                     News
    NewsBody                 News Body
    NewsBrief                News Brief
    NewsContentType          News Content Type
    NewsContentTypeDesc      News Content Type Desc.
    NewsDate                 News Date
    NewsGroup                News
    NewsImage                News Image
    NewsPageLayout           News Page Layout
    NewsTitle                News Title
  14. Finally, deploy the solution.
  15. The next steps explain how to add the news content type to the Pages library through the SharePoint UI. Note: in the next article we will perform these steps programmatically, without the need to do them manually in SharePoint.

    1. Go to Site Contents, then Pages; on the ribbon choose Library, then Library Settings.
      Opening the library settings of the Pages library
    2. Add the news content type to the Pages library.
      Adding existing content type to the pages library
    3. Then select the content type:
      Selecting the content type to add it to pages library
    4. Go to Pages Library, Files, New Document, select News Content Type.
      Adding new document of the news content type to pages library
    5. Write the page title.
      Creating new page of news content type to pages library.
    6. Open the page to edit it.
      Pages library contains new page of news content type.
    7. Now we can see the page layout after we add the title, body, brief, date and image. Finally, click Save.
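
As a preview of the programmatic route promised in the note above, here is a minimal server object model sketch of attaching the deployed news content type to the Pages library. The site URL is hypothetical, and the content type name assumes the “NewsContentType” resource value (“News Content Type”) from the resource file above.

    using (SPSite site = new SPSite("http://server/sites/news")) // hypothetical URL
    using (SPWeb web = site.OpenWeb())
    {
        // Enable content type management on the Pages library and attach
        // the content type that the farm solution deployed.
        SPList pages = web.Lists["Pages"];
        SPContentType newsType = web.AvailableContentTypes["News Content Type"];

        pages.ContentTypesEnabled = true;
        pages.ContentTypes.Add(newsType);
        pages.Update();
    }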

How To : Implement Business Data Connectivity in SharePoint 2013

Business Data Connectivity

Business Connectivity Services is a centralized infrastructure in SharePoint 2013 and Office 2013 that supports integrated data solutions. With Business Connectivity Services, you can use SharePoint 2013 and Office 2013 clients as interfaces into data that doesn’t live in SharePoint 2013 itself. For example, this external data may be in a database and it is accessed by using the out-of-the-box Business Connectivity Services connector for that database.


Business Connectivity Services can also connect to data that is available through a web service, or data that is published as an OData source or many other types of external data. Business Connectivity Services does this through out-of-the-box or custom connectors.

External Content Types in BCS

External content types are the core of BCS. They enable you to manage and reuse the metadata and behaviors of a business entity, such as Customer or Order, from a central location. They enable users to interact with that external data and process it in a more meaningful way.

For more information about using external content types in BCS, see External content types in SharePoint 2013.

How to Connect With SQL External Data Source

Open SharePoint Designer 2013 and click on the Open Site icon:

Enter the URL of the site we need to open:

Enter your site credentials here:

Now we need to create the new external content type. Here we have options for changing the name of the content type and for creating the connection to the external data source:

Click on the hyperlink text “Click here to discover the external data source operations”; this window will open:

Click on the “Add Connection” button to create a new connection. Here we have different options for the data source type: .NET Type, SQL Server, or WCF Service.

Here we selected SQL Server; now we need to provide the server credentials:

Now, we can see all the tables and views from the database.

In this screen, we have the options for creating different types of operations against the database:

Click the Next button:

Parameter configurations:

Options for filter parameter configuration:

Now we need to add a new external list. Click on “External List”:

Select the site and click the OK button:

Enter the list name and click the OK button:

After that, refresh the SharePoint site. We can see the external list; click on it:

Here we have the error message “Access denied by Business Connectivity.”

Solution for this Error

In SharePoint Central Administration, click on Manage service applications:

Click on the Business Data Connectivity Service:

Set the permission for this list:

Click ok after setting the permissions:

After that, refresh the site. This should work… but again there is a problem, this time an error message like: Login failed for user “NT AUTHORITY\ANONYMOUS LOGON”.

Solution for this Error

We need to edit the connection properties and set the Authentication Mode to ‘BDC Identity’.

Then follow the steps below.

Open PowerShell and type the following lines:

$bdc = Get-SPServiceApplication |
    where {$_.DisplayName -match "Business Data Connectivity"}
$bdc.RevertToSelfAllowed = $true
$bdc.Update();

Now it’s working fine.

There is also a chance of one more error, like:

Database Connector has throttled the response.
The response from database contains more than '2000' rows. 
The maximum number of rows that can be read through Database Connector is '2000'. 
The limit can be changed via the 'Set-SPBusinessDataCatalogThrottleConfig' cmdlet

This happens when the number of records in the table exceeds the Database Connector’s throttle limit.

Solution for this Error

Follow the steps below.

Open PowerShell, type the following lines, and execute them:

$bcs = Get-SPServiceApplicationProxy | where {$_.GetType().FullName -eq `
    ('Microsoft.SharePoint.BusinessData.SharedService.' + 'BdcServiceApplicationProxy')}
$BCSThrottle = Get-SPBusinessDataCatalogThrottleConfig -Scope database `
    -ThrottleType items -ServiceApplicationProxy $bcs
Set-SPBusinessDataCatalogThrottleConfig -Identity $BCSThrottle -Maximum 1000000 -Default 20000
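
With the permissions and throttle limits sorted out, the external list can be read from the server object model just like any other SharePoint list. Here is a minimal C# sketch; the site URL, list name, and column name are hypothetical:

    using (SPSite site = new SPSite("http://server")) // hypothetical URL
    using (SPWeb web = site.OpenWeb())
    {
        SPList externalList = web.Lists["Customers"]; // hypothetical external list name
        SPQuery query = new SPQuery { RowLimit = 10 };

        // External list items come back as ordinary SPListItem objects.
        foreach (SPListItem item in externalList.GetItems(query))
        {
            Console.WriteLine(item["CustomerName"]); // hypothetical column
        }
    }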

How To : Manually add common consent to your Office 365 APIs Preview app


Learn how to manually add Microsoft Azure Active Directory common consent to your ASP.NET application so that it can access secured services.

Prerelease content
The features and APIs documented in this article are in preview and are subject to change. Do not use them in production.

In this article, you’ll learn how to build a web application hosted on an Azure website that uses the OneDrive for Business API to access secured folders and files.

You can easily set up access to the OneDrive for Business using the Office 365 API Preview Tools for Visual Studio 2013. If you’re not using the tools, you’ll need to manually set up your app in your development environment, register your app with Microsoft Azure Active Directory, write code to handle tokens, and write the code to work with the OneDrive for Business resources. All these steps are described in this article.

Note
This article covers OneDrive for Business apps, but the same steps apply to apps that access any other secured resource.

Before you manually add common consent to your app, make sure that you have the following:

  • An Office 365 account. If you don’t have one, you can sign up for an Office 365 developer site.
  • Visual Studio 2012 or Visual Studio 2013.
    Note
    The Office 365 API Preview Tools for Visual Studio 2013, which simplify development, are available for Visual Studio 2013 only.
  • A test account to use in your application.

We also recommend that you familiarize yourself with the Authorization Code Grant Flow. This will help you understand the authentication process that takes place in the background between your application, Azure AD, and the Office 365 resource so that you can better troubleshoot as you develop.

If you’ve already created an account within your Azure tenancy, you can use that account. Otherwise, you will have to create a new organizational user account to use in this sample.

To create an organizational user account

  1. Go to https://manage.windowsazure.com/.
  2. Choose the Active Directory icon on the left side in the Azure portal.
  3. Choose Add a user.
  4. Fill in the user name.
  5. Move to the next screen.
  6. Create a user profile. To do this:
    1. Enter a first and last name.
    2. Enter a display name.
    3. Set the Role to Global Administrator.
    4. After you set the role you will be asked for an alternate email address. You can enter the email address that you used to create the subscription, or a different one.
  7. Move to the next screen.
  8. Choose Create.
  9. A temporary password is generated. You will use this to sign in later. You will have to change it at that time.
  10. Choose the check mark to finish creating the organizational user account.

The next step is to create the actual app that contains the UI and code needed to work with the OneDrive for Business REST APIs to list the folders and files in the user’s OneDrive.

To create the Visual Studio project

  1. Open Visual Studio 2013 and create a new ASP.NET Web Application project. Name the application Get_Stats. Choose OK.
  2. Choose the MVC template and choose the Change Authentication button. Select the Organization Accounts option. This will display additional options for authentication.
  3. Choose Cloud – Single Organization.
  4. Specify the domain of your Azure AD tenancy.
  5. Set the Access Level to Single Sign On, Read directory data.

    Under More Options, you will see the App ID URI is set automatically.

  6. Choose OK to continue. This brings up a dialog box to authenticate.
    Note
    If you receive an invalid domain name error, you might need to implement a workaround by substituting a real domain name, such as *.onmicrosoft.com, from another Azure subscription that you have. When you complete the Visual Studio new project dialog box, Visual Studio creates a temporary app registration entry on the domain that you specify. You can delete that entry later.

    As part of the workaround, you need to adjust the web.config settings and manually register the web app in the correct Azure AD domain.

  7. Enter the credentials of the user you created earlier.
  8. Choose OK to finish creating the new project. Visual Studio will automatically register the new web app in the Azure AD tenant you specified.
  9. Run the Visual Studio project, and sign in using the test account you created earlier. After the project is running, you can verify that single sign-on is working because the test account user name is displayed in the upper right corner of the web app.
Next, manually register the web app in the correct Azure AD domain:

  1. Log on with your Azure account.
  2. In the left navigation, choose Active Directory. Your directory will be listed.
  3. Choose your directory.
  4. In the top navigation, choose Applications.
  5. Add a new application in your Office 365 domain (created at Office 365 sign up) by choosing the “ADD” icon at the bottom of the portal screen. This will bring up a dialog box to tell Azure about your application.
  6. Choose Add an application my organization is developing.
  7. For the name of the application, enter Get Stats. For the Type, leave Web application and/or Web API. Then choose the arrow to move to step 2.
  8. For the Sign-On URL, enter the localhost URL from your Get_Stats Visual Studio project. To find the URL:
    1. Open your project in Visual Studio.
    2. In Solution Explorer, choose the Get_Stats project.
    3. From the Properties window, copy the SSL URL value.
    4. Enter an App ID URI. Because the ID must be unique, it’s a good idea to choose a name that is similar to the app name. For example, you can use your Sign-on URL with your app name, such as https://localhost:44044/Get_Stats.
    5. Choose the checkmark to finish adding the application. You will be notified that the application was added successfully.
Next, copy the registration values into your project’s web.config:

  1. Copy the APP ID URI to the clipboard.
  2. In your Get_Stats Visual Studio project, open the web.config file.
  3. Locate the ida:Realm key and paste the APP ID URI for the value.
  4. Locate the ida:AudienceUri key and paste the same APP ID URI for the value.
  5. Locate the audienceUris element and paste the same APP ID URI for the add element’s value.
  6. Locate the wsFederation element, and paste the same APP ID URI for the realm.
  7. In the Azure Portal, copy the federation metadata document URL to the clipboard.
  8. In the web.config file, locate the ida:FederationMetadataLocation key, and paste the URL for the value.
  9. In the Azure Portal, choose the View Endpoints icon at the bottom.
  10. Copy the WS-Federation Sign-On Endpoint to the clipboard.
  11. In the web.config file, locate the wsFederation element and paste the endpoint value for the issuer.
  12. Save your changes and run the project. You will be prompted to sign on. Sign on by using the test account you created earlier. You should see your account user name displayed in the upper right corner of the web app.

Get an application key


Next, you need to generate a key that you can use to identify your application for access tokens.

To get an application key for your app

  1. In the Azure Portal, select the Get_Stats application in the directory.
  2. Choose the Configure command and then locate the keys section.
  3. In the Select duration drop-down box, choose 1 year.
  4. Choose Save.

    The key value is displayed.

    Note
    This is the only time that the key is displayed.
  5. In Visual Studio, open the Get_Stats project, and open the web.config file.
  6. Locate the ida:Password element, and paste the key value for the value. Now your project will always send the correct password when it is requested.
  7. Save all files.
Configure API permissions


You need to specify which web APIs your web app needs access to, and what level of access it needs. This determines what scopes and permissions are requested on the consent form for your web app that is displayed for users and admins.

To configure API permissions

  1. In the Azure Portal, select the Get_Stats application in the directory.
  2. From the top navigation, choose Configure. This displays all the configuration properties.
  3. At the bottom is a web APIs section. Notice that your web app has already been granted access to Azure AD.
  4. Choose Office365 SharePoint Online API.
  5. Choose Delegated Permissions and select Read items in all site collections.
    Note
    The options activate when you move over them.
  6. Choose Save to save these changes. Your web app will now request these permissions.

    You can also manage permissions by using a manifest. You can download your manifest file by choosing Manage Manifest.

Add the GraphHelper project to your solution


The easiest way to call graph APIs in Azure AD is to use the Graph API Helper Library. The following instructions show how to include the GraphHelper project into your Get_Stats solution.

To configure the Graph API Helper Library

  1. Download the Azure AD Graph API Helper Library.
  2. Copy the C# folder from the Graph API Helper Library to your project folder (for example, \Projects\Get_Stats\C#).
  3. Open the Get_Stats solution in Visual Studio.
  4. In the Solution Explorer, choose the Get_Stats solution and choose Add Existing Project.
  5. Go to the C# folder you copied, and open the WindowsAzure.AD.Graph.2013_04_05 folder.
  6. Select the Microsoft.WindowsAzure.ActiveDirectory.GraphHelper project and choose Open.
  7. If you are prompted with a security warning about adding the project, choose OK to indicate that you trust the project.
  8. Choose the Get_Stats project References folder and then choose Add Reference.
  9. In the Reference Manager dialog box, select Extensions and then select the Microsoft.Data.OData version 5.6.0.0 assembly and the Microsoft.Data.Services.Client version 5.6.0.0 assembly.
  10. In the same Reference Manager dialog box, expand the Solution menu on the left, and then select the checkbox for the Microsoft.WindowsAzure.ActiveDirectory.GraphHelper.
  11. Choose OK to add the references to your project.
  12. Add the following using directives to the top of HomeController.cs.
    using Microsoft.WindowsAzure.ActiveDirectory;
    using Microsoft.WindowsAzure.ActiveDirectory.GraphHelper;
    using System.Data.Services.Client;
    
    
  13. Save all files.

Add code to manage tokens and requests


Because your web app accesses multiple workloads, you need to write some code to obtain tokens. It’s best to place this code in helper methods that can be called when needed. Note that the Office 365 API Preview tools will handle all of this coding for you.

Your custom code handles the following scenarios:

  • Obtaining an authentication code
  • Using the authentication code to obtain an access token and a multiple resource refresh token
  • Using the multiple resource refresh token to obtain a new access token for a new workload

To create code to manage tokens and requests

  1. Open your Visual Studio project for Get_Stats.
  2. Open the HomeController.cs file.
  3. Create a new method named Stats by using the following code.
    public ActionResult Stats()
    {
        var authorizationEndpoint = "https://login.windows.net/"; // The oauth2 endpoint.
        var resource = "https://graph.windows.net"; // Request access to the AD graph resource.
        var redirectURI = ""; // The URL where the authorization code is sent on redirect.
    
        // Create a request for an authorization code.
    string authorizationUrl = string.Format(
           "{0}{1}/oauth2/authorize?response_type=code&client_id={2}&resource={3}&redirect_uri={4}",
           authorizationEndpoint,
           ClaimsPrincipal.Current.FindFirst(TenantIdClaimType).Value,
           AppPrincipalId,
           resource,
           redirectURI);
    
        // Send the user to the tenant-specific authorization endpoint;
        // Azure AD redirects back to the redirect URI with the authorization code.
        return Redirect(authorizationUrl);
    }
    
    
  4. The Stats method constructs a request for an authorization code and redirects the browser to the OAuth2 endpoint. If successful, the redirect returns to the specified CatchCode URL. Next, create a method to handle the redirect to CatchCode.
    public ActionResult CatchCode(string code)
    {}
    
    
  5. Acquire the access token by using the app credentials and the authorization code. Use your project’s correct port number in the following code.
    //  Replace the following port with the correct port number from your own project.
        var appRedirect = "https://localhost:44307/Home/CatchCode";
    
    //  Create an authentication context.
        AuthenticationContext ac = new AuthenticationContext(string.Format("https://login.windows.net/{0}",
        ClaimsPrincipal.Current.FindFirst(TenantIdClaimType).Value));
    
    //  Create a client credential based on the application ID and secret.
    ClientCredential clcred = new ClientCredential(AppPrincipalId, AppKey);
    
    //  Use the authorization code to acquire an access token.
        var arAD = ac.AcquireTokenByAuthorizationCode(code, new Uri(appRedirect), clcred);
    
    
  6. Next, use the access token to call the Graph API and get the list of users for the Office 365 tenant by adding the following code.
    //  Convert token to the ADToken so you can use it in the graphhelper project.
    
        AADJWTToken token = new AADJWTToken();
        token.AccessToken = arAD.AccessToken; 
    
    //  Initialize a graphService instance by using the token acquired in the previous step.
    //  The GUID below identifies the Azure AD tenant; replace it with your own tenant ID.
    
        Microsoft.WindowsAzure.ActiveDirectory.DirectoryDataService graphService = new DirectoryDataService("09f9ea02-9be8-4597-86b9-32935a17723e", token);
        graphService.BaseUri = new Uri("https://graph.windows.net/09f9ea02-9be8-4597-86b9-32935a17723e");
    
    //  Get the list of all users.
    
        var users = graphService.users;
        QueryOperationResponse<Microsoft.WindowsAzure.ActiveDirectory.User> response;
        response = users.Execute() as QueryOperationResponse<Microsoft.WindowsAzure.ActiveDirectory.User>;
        List<Microsoft.WindowsAzure.ActiveDirectory.User> userList = response.ToList();
        ViewBag.userList = userList; 
    
    
  7. Now you need to call Microsoft OneDrive for Business, and this requires a new access token. Verify that the current token is a multiple resource refresh token, and then use it to obtain a new token, as shown in the following code.
    //  You need a new access token for new workload. Check to determine whether you have the MRRT.
    
        if (arAD.IsMultipleResourceRefreshToken)
        {
            // This is an MRRT so use it to request an access token for SharePoint.
            AuthenticationResult arSP = ac.AcquireTokenByRefreshToken(arAD.RefreshToken, AppPrincipalId, clcred, "https://imgeeky.spo.com/");
        }
    
    
  8. Finally, call Microsoft OneDrive for Business to get a list of files in the Shared with Everyone folder by adding the following code, replacing the placeholders with correct values.
    //  Now make a call to get a list of all files in a folder. 
    //  Replace placeholders in the following string with correct values for your domain and user name. 
        var skyGetAllFilesCommand = "https://YourO365Domain-my.spo.com/personal/YourUserName_YourO365domain_spo_com/_api/web/GetFolderByServerRelativeUrl('/personal/YourUserName_YourO365domain_spo_com/Documents/Shared%20with%20Everyone')/Files";
    
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(skyGetAllFilesCommand);
        request.Method = "GET";
    
        WebResponse wr = request.GetResponse();
    
    //  Read the XML response body and pass it to the view, which renders
    //  ViewBag.skyResponse (StreamReader requires using System.IO;).
    
        using (StreamReader reader = new StreamReader(wr.GetResponseStream()))
        {
            ViewBag.skyResponse = reader.ReadToEnd();
        }
    
        return View(); 
    
    
  9. Create a view for the CatchCode method. In Solution Explorer, expand the Views folder and choose Home, and then choose Add View.
  10. Enter CatchCode as the name of the new view, and choose Add.
  11. Paste the following HTML to render the users and Microsoft OneDrive for Business response from the CatchCode method.
    @{
        ViewBag.Title = "CatchCode";
    }
    <h2>Users</h2>
    <ul id="users">
    
        @foreach (var user in ViewBag.userList)
        {
            <li>@user.displayName</li>
        }
    </ul>
    <h2>OneDrive for Business Response</h2>
    <p>@ViewBag.skyResponse</p>
    
    
  12. Build and run the solution. Verify that you get a list of users, and that you get an XML response from the OneDrive for Business method call. To change the XML response, add files to the OneDrive for Business Shared with Everyone folder.

How To : From the Trenches – Use SharePoint to Implement an ALM in Your Organisation

After my successful creation and implementation of an ALM for Business Connexion using the SharePoint platform, I thought I’d share the lessons I have learned and show you, step by step, how you can implement your own ALM by leveraging the power of the SharePoint platform.


In this article, you will:

  • Get an overview of SharePoint Application Lifecycle Management.
  • Learn how to plan and manage Application Lifecycle Management (ALM) in Microsoft SharePoint 2010 projects by using Microsoft Visual Studio 2010 and Microsoft SharePoint Designer 2010.
  • Learn what to consider when setting up team development environments.
  • Learn how to establish upgrade management processes.
  • Learn how to create a standard SharePoint development model.
  • Learn how to extend your SharePoint ALM to include other departments, such as Java, Mobile, .NET and even SAP development.
Introduction to Application Lifecycle Management in SharePoint 2010

The Microsoft SharePoint 2010 development platform, which includes Microsoft SharePoint Foundation 2010 and Microsoft SharePoint Server 2010, contains many capabilities to help you develop, deploy, and update customizations and custom functionalities for your SharePoint sites. The activities that take advantage of these capabilities all fall under the category of Application Lifecycle Management (ALM).

Key considerations when establishing ALM processes include not only the development and testing practices that you use before the initial deployment of a single customization, but also the processes that you must implement to manage updates and integrate customizations and custom functionality on an existing farm.

This article discusses the capabilities and tools that you can use when implementing an ALM process on a SharePoint farm, and also specific concerns and things to consider when you create and hone your ALM process for SharePoint development.

This article assumes that each development team will develop a unique ALM process that fits its specific size and needs, so its guidance is necessarily broad. However, it also assumes that regardless of the size of your team and the specific nature of your custom solutions, you will need to address similar sets of concerns and use capabilities and tools that are common to all SharePoint developers.

The guidance in this article will help you create a development model that exploits all the advantages of the SharePoint 2010 platform and addresses the needs of your organization.

SharePoint Application Lifecycle Management: An Overview

Although the specific details of your SharePoint 2010 ALM process will differ according to the requirements of your organization, most development teams will follow the same general set of steps. Figure 1 depicts an example ALM process for a midsize or large SharePoint 2010 deployment. Obviously, the process and required tasks depend on the project size.

Figure 1. Example ALM process
Example ALM process

The following are the specific steps in the process illustrated in Figure 1 (see corresponding callouts 1 through 10):

  1. Someone (for example, a project manager or lead developer) collects initial requirements and turns them into tasks.
  2. Developers use Microsoft Visual Studio Team Foundation Server 2010 or other tools to track the development progress and store custom source code.
  3. Because source code is stored in a centralized location, you can create automated builds for integration and unit testing purposes. You can also automate testing activities to increase the overall quality of the customizations.
  4. After custom solutions have successfully gone through acceptance testing, your development team can continue to the pre-production or quality assurance environment.
  5. The pre-production environment should resemble the production environment as much as possible. This often means that the pre-production environment has the same patch level and configurations as the production environment. The purpose of this environment is to ensure that your custom solutions will work in production.
  6. Occasionally, copy the production database to the pre-production environment, so that you can imitate the upgrade actions that will be performed in the production environment.
  7. After the customizations are verified in the pre-production environment, they are deployed either directly to production or to a production staging environment and then to production.
  8. After the customizations are deployed to production, they run against the production database.
  9. End users work in the production environment, and give feedback and ideas concerning the different functionalities. Issues and bugs are reported and tracked through established reporting and tracking processes.
  10. Feedback, bugs, and other issues in the production environment are turned into requirements, which are prioritized and turned into developer tasks. Figure 2 shows how multiple developer teams can work with and process bug reports and change requests that are received from end users of the production environment. The model in Figure 2 also shows how development teams might coordinate their solution packages. For example, the framework team and the functionality development team might follow separate versioning models that must be coordinated as they track bugs and changes.
    Figure 2. Change management involving multiple developer teams
    Change management involving multiple teams

Integrating Testing and Build Verification Environments into a SharePoint 2010 ALM Process

In larger projects, quality assurance (QA) personnel might use an additional build verification or user acceptance testing (UAT) farm to test and verify the builds in an environment that more closely resembles the production environment.

Typically, a build verification farm has multiple servers to ensure that custom solutions are deployed correctly. Figure 3 shows a potential model for relating development integration and testing environments, build verification farms, and production environments. In this particular model, the pre-production or QA farm and the production farm switch places after each release. This model minimizes any downtime that is related to maintaining the environments.

Figure 3. Model for relating development integration and testing environments
Model for relating development environments

Integrating SharePoint Designer 2010 into a SharePoint 2010 ALM Process

Another significant consideration in your ALM model is Microsoft SharePoint Designer 2010. SharePoint 2010 is an excellent platform for no-code solutions, which can be created and then deployed directly to the production environment by using SharePoint Designer 2010. These customizations are stored in the content database and are not stored in your source code repository.

General designer activities and how they interact with development activities are another consideration. Will you be creating page layouts directly within your production environment, or will you deploy them as part of your packaged solutions? There are advantages and disadvantages to both options.

Your specific ALM model depends completely on the custom solutions and the customizations that you plan to make, and on your own policies. Your ALM process does not have to be as complex as the one described in this section. However, you must establish a firm ALM model early in the process as you plan and create your development environment and before you start creating your custom solutions.

Next, we discuss specific tools and capabilities that are related to SharePoint 2010 development that you can use when considering how to create a model for SharePoint ALM that will work best for your development team.

Solution Packages and SharePoint Development Tools

One major advantage of the SharePoint 2010 development platform is that it provides the ability to save sites as solution packages. A solution package is a deployable, reusable package stored in a CAB file with a .wsp extension. You can create a solution package either by using the SharePoint 2010 user interface (UI) in the browser, SharePoint Designer 2010, or Microsoft Visual Studio 2010. In the browser and SharePoint Designer 2010 UIs, solution packages are also called templates. This flexibility enables you to create and design site structures in a browser or in SharePoint Designer 2010, and then import these customizations into Visual Studio 2010 for more development. Figure 4 shows this process.

Figure 4. Flow through the SharePoint development tools
Flow through the SharePoint development tools

When the customizations are completed, you can deploy your solution package to SharePoint for use. After modifying the existing site structure by using a browser, you can start the cycle again by saving the updated site as a solution package.
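
As a rough illustration of that last step, saving a site as a solution package can also be scripted with the server object model. This is a minimal sketch, assuming the SPWeb.SaveAsTemplate overload shown below; the site URL and template names are placeholders:

    using (SPSite site = new SPSite("http://server/sites/dev")) // hypothetical URL
    using (SPWeb web = site.OpenWeb())
    {
        // Saves the site to the site collection's Solution Gallery as a .wsp
        // solution package; the last argument controls whether content is included.
        web.SaveAsTemplate("TeamSiteBaseline", "Team Site Baseline",
            "Snapshot of the customized team site", false);
    }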

This interaction among the tools also enables you to use other tools. For example, you can design a workflow process in Microsoft Visio 2010 and then import it to SharePoint Designer 2010 and from there to Visual Studio 2010. For instructions on how to design and import a workflow process, see Create, Import, and Export SharePoint Workflows in Visio.

For more information about creating solution packages in SharePoint Designer 2010, see Save a SharePoint Site as a Template. For more information about creating solution packages in Visual Studio 2010, see Creating SharePoint Solution Packages.

Using SharePoint Designer 2010 as a Development Tool

SharePoint Designer 2010 differs from Microsoft Office SharePoint Designer 2007 in that its orientation has shifted from the page to features and functionality. The improved UI provides greater flexibility for creating and designing different functionalities. It provides rich tooling for building complete, reusable, and process-centric applications. For more information about the new capabilities and features of SharePoint Designer 2010, see Getting Started with SharePoint Designer.

You can also use SharePoint Designer 2010 to modify modular components developed with Visual Studio 2010. For example, you can create Web Parts and other controls in Visual Studio 2010, deploy them to a SharePoint farm, and then edit them in SharePoint Designer 2010.

The primary target users for SharePoint Designer 2010 are IT personnel and information workers who can use this application to create customizations in a production environment. For this reason, you must decide on an ALM model for your particular environment that defines which kinds of customizations will follow the complete ALM development process and which customizations can be done by using SharePoint Designer 2010. Developers are secondary target users. They can use SharePoint Designer 2010 as a part of their development activities, especially during initial creation of customization packages and also for rapid development and prototyping. Your ALM process must also define where and how to fit SharePoint Designer 2010 into the broader development model.

A key challenge of using SharePoint Designer 2010 is that when you use it to modify files, all of your changes are stored in the content database instead of in the file system. For example, if you customize a master page for a specific site by using SharePoint Designer 2010 and then design and deploy new branding elements inside a solution package, the changes are not available for the site that has the customized master page, because that site is using the version of the master page that is stored in the content database.

To minimize challenges such as these, SharePoint Designer 2010 contains new features that enable you to control usage of SharePoint Designer 2010 in a specific environment. You can apply these control settings at the web application level or site collection level. If you disable some action at the web application level, that setting cannot be changed at the site collection level.

SharePoint Designer 2010 makes the following settings available:

  • Allow site to be opened in SharePoint Designer 2010.
  • Allow customization of files.
  • Allow customization of master pages and layout pages.
  • Allow site collection administrators to see the site URL structure.

Because the primary purpose of SharePoint Designer 2010 is to customize content on an existing site, it does not support source code control. By default, pages that you customize by using SharePoint Designer 2010 are stored inside a versioned SharePoint library. This provides you with simple support for versioning, but not for full-featured source code control.

Importing Solution Packages into Visual Studio 2010

When you save a site as a solution package in the browser (from the Save as Template page in Site Settings), SharePoint 2010 stores the site as a solution package (.wsp) file and places it in the Solution Gallery of that site collection. You can then download the solution package from the Solution Gallery and import it into Visual Studio 2010 by using the Import SharePoint Solution Package template, as shown in Figure 5.

Figure 5. Import SharePoint Solution Package template
Import SharePoint Solution Package template

SharePoint 2010 solution packages contain many improvements that take advantage of new capabilities that are available in its feature framework. The following list contains some of the new feature elements that can help you manage your development projects and upgrades.

  • SourceVersion for WebFeature and SiteFeature
  • WebTemplate feature element
  • PropertyBag feature element
  • $ListId:Lists
  • WorkflowAssociation feature element
  • CustomSchema attribute on ListInstance
  • Solution dependencies

After you import your project, you can start customizing it any way you like.

Note
Because this capability is based on the WebTemplate feature element, which is based on a corresponding site definition, the resulting solution package will contain definitions for everything within the site. For more information about creating and using web templates, see Web Templates.

Visual Studio 2010 supports source code control (as shown in Figure 6), so that you can store the source code for your customizations in a safer and more secure central location, and enable easy sharing of customizations among developers.

Figure 6. Visual Studio 2010 source code control
Visual Studio 2010 source code control

The specific way in which your developers access this source code and interact with each other depends on the structure of your team development environment. The next section of this article discusses key concerns and considerations that you should consider when you build a team development environment for SharePoint 2010.

Team Development Environment for SharePoint 2010: An Overview

As in any ALM planning process, your SharePoint 2010 planning should include the following steps:

  1. Identify and create a process for initiating projects.
  2. Identify and implement a versioning system for your source code and other deployed resources.
  3. Plan and implement version control policies.
  4. Identify and create a process for work item and defect tracking and reporting.
  5. Write documentation for your requirements and plans.
  6. Identify and create a process for automated builds and continuous integration.
  7. Standardize your development model for repeatability.

Microsoft Visual Studio Team Foundation Server 2010 (shown in Figure 7) provides a good potential platform for many of these elements of your ALM model.

Figure 7. Visual Studio 2010 Team Foundation Server
Visual Studio 2010 Team Foundation Server

When you have established your model for team development, you must choose either a collection of tools or Microsoft Visual Studio Team Foundation Server 2010 to manage your development. Microsoft Visual Studio Team Foundation Server 2010 provides direct integration into Visual Studio 2010, and it can be used to manage your development process efficiently. It provides many capabilities, but how you use it will depend on your projects.

You can use the Microsoft Visual Studio Team Foundation Server 2010 for the following activities:

  • Tracking work items and reporting the progress of your development. Microsoft Visual Studio Team Foundation Server 2010 provides tools to create and modify work items that are delivered not only from Visual Studio 2010, but also from the Visual Studio 2010 web client.
  • Storing all source code for your custom solutions.
  • Logging bugs and defects.
  • Creating, executing, and managing your testing with comprehensive testing capabilities.
  • Enabling continuous integration of your code by using the automated build capabilities.

Microsoft Visual Studio Team Foundation Server 2010 also provides a basic installation option that installs all required functionalities for source control and automated builds. These are typically the most used capabilities of Microsoft Visual Studio Team Foundation Server 2010, and this option helps you set up your development environment more easily.

Setting Up a Team Development Environment for SharePoint 2010

SharePoint 2010 must be installed on a development computer to take full advantage of its development capabilities. If you are developing only remote applications, such as solutions that use SharePoint web services, the client object model, or REST, you could potentially develop solutions on a computer where SharePoint 2010 is not installed. However, even in this case, your developers’ productivity would suffer, because they would not be able to take advantage of the full debugging experience that comes with having SharePoint 2010 installed directly on the development computer.
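
For instance, a remote-only solution might talk to the site with the managed client object model, which requires no local SharePoint installation. A minimal sketch; the site URL is a placeholder:

    using System;
    using Microsoft.SharePoint.Client;
    
    using (ClientContext context = new ClientContext("http://server/sites/dev"))
    {
        Web web = context.Web;
        context.Load(web, w => w.Title); // request only the Title property
        context.ExecuteQuery();          // single round trip to the server
        Console.WriteLine(web.Title);
    }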

The design of your development environment depends on the size and needs of your development team. Your choice of operating system also has a significant impact on the overall design of your team development process. You have three main options for creating your development environments, as follows:

  1. You can run SharePoint 2010 directly on your computer’s client operating system. This option is available only when you use the 64-bit version of Windows 7, Windows Vista Service Pack 1, or Windows Vista Service Pack 2.
  2. You can use the boot to Virtual Hard Drive (VHD) option, which means that you start your laptop by using the operating system in VHD. This option is available only when you use Windows 7 as your primary operating system.
  3. You can use virtualization capabilities. If you choose this option, you have a choice of many options. But from an operational viewpoint, the option that is most likely the easiest to implement is a centralized virtualized environment that hosts each developer’s individual development environment.

The following sections take a closer look at these three options.

SharePoint 2010 on a Client Operating System

If you are using the 64-bit version of Windows 7, Windows Vista Service Pack 1, or Windows Vista Service Pack 2, you can install SharePoint Foundation 2010 or SharePoint Server 2010. For more information about installing SharePoint 2010 on supported operating systems, see Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008.

Figure 8 shows how a computer that is running a client operating system would operate within a team development environment.

Figure 8. Computer running a client operating system in a team development environment
Computer running a client operating system

A benefit of this approach is that you can take full advantage of any of your existing hardware that is running one of the targeted client operating systems. You can also take advantage of pre-existing configurations, domains, and enterprise resources that your enterprise supports. This could mean that you would require little or no additional IT support. Your developers would also face no delays (such as booting up a virtual machine or connecting to an environment remotely) in accessing their development environments.

However, if you take this approach, you must ensure that your developers have access to sufficient hardware resources. In any development environment, you should use a computer that has an x64-capable CPU, and at least 2 gigabytes (GB) of RAM to install and run SharePoint Foundation 2010; 4 GB of RAM is preferable for good performance. You should use a computer that has 6 GB to 8 GB of RAM to install and run SharePoint Server 2010.

A disadvantage of this approach is that your environments will not be centrally managed, and it will be difficult to keep all of your project-dependent environmental requirements in sync. It might also be advisable to write batch files that start and stop some of the SharePoint-related services so that when your developers are not working with SharePoint 2010, these services will not consume resources and degrade the performance of their computers.

The lack of centralized maintenance could hurt developer productivity in other ways. For example, this might be an unwieldy approach if your team is working on a large Microsoft SharePoint Online project that is developing custom solutions for multiple services (for example, the equivalents of http://intranet, http://mysite, http://teams, http://secure, http://search, http://partners, and http://www.internet.com) and deploying these solutions in multiple countries or regions.

If you are developing on a computer that is running a client operating system in a corporate domain, each development computer would have its own name (and each local domain name would be different, such as http://dev1 or http://dev2). If each developer is implementing custom functionalities for multiple services, you must use different port numbers to differentiate each service (for example, http://dev1 for http://intranet and http://dev1:81 for http://mysite). If all of your developers are using the same Visual Studio 2010 projects, the project debugging URL must be changed manually whenever a developer takes the latest version of a project from your source code repository.

This would create a manual step that could hurt developer productivity, and it would also diminish the efficiency of any scripts that you have written for setting up development environments, because the individual environments are not standardized. Some form of centralization with virtualization is preferable for large enterprise development projects.

SharePoint 2010 on Windows 7 and Booting to Virtual Hard Drive

If you are using Windows 7, you can also create a VHD out of an existing Windows Server 2008 image on which SharePoint 2010 is installed in Windows Hyper-V, and then configure Windows 7 with BCDEdit.exe so that it boots directly to the operating system on the VHD. To learn more about this kind of configuration, see Deploy Windows on a Virtual Hard Disk with Native Boot and Boot from VHD in Win 7.

Figure 9 shows how a computer that is running Windows 7 and booting to VHD would operate within a team development environment.

Figure 9. Windows 7 and booting to VHD in a team environment
Windows 7 and booting to VHD in a team environment

An advantage of this approach is the flexibility of having multiple dedicated environments for an individual project, enabling you to isolate each development environment. Your developers will not accidentally cross-reference any artifacts within their projects, and they can create project-dependent environments.

However, this option has considerable hardware requirements, because you are using the available hardware and resources directly on your computers.

SharePoint 2010 in Centralized Virtualized Environments

In a centralized virtualized environment, you host your development environments in one centralized location, and developers access these environments through remote connections. This means that you use Windows Hyper-V in the centralized location and copy a VHD for every developer as needed. Each VHD is configured to be available from the corporate network, so that when it starts, it can be accessed by using remote connections.

Figure 10 shows how a centralized virtualized team development environment would operate.

Figure 10. Centralized virtualized team development environment
Centralized virtualized development environment

An advantage of this approach is that the hardware requirements for individual developer computers are relatively few because the actual work happens in a centralized environment. Developers could even use computers with 1 GB of RAM as their clients and then connect remotely to the centralized location. You can also manage environments easily from one centralized location, making adjustments to them whenever necessary.

Your centralized host will have significantly high hardware requirements, but developers can easily start and stop these environments. This enables you to use the hardware that you have allocated for your development environments more efficiently. Additionally, this approach provides a ready platform for more extensive testing environments for your custom code (such as multi-server farms).

After you set up your team development environment, you can start taking advantage of the deployment and upgrade capabilities that are included with the new solution packaging model in SharePoint 2010. The following sections describe how to take advantage of these new capabilities in your ALM model.

Models for Solution Lifecycle Management in SharePoint 2010

The SharePoint 2010 solution packaging model provides many useful features that will help you plan for deploying custom solutions and managing the upgrade process. You can implement assembly versioning by applying binding redirects in your web application configuration file. You can also apply versioning to your feature upgrades, and feature upgrade actions enable you to manage changes that will be necessary on your existing sites to accommodate feature upgrades. These upgrade actions can be handled declaratively or programmatically.

The feature upgrade query object model enables you to create queries in your code that look for features on your existing sites that can be upgraded. You can use this object model to obtain relevant information about all of the features and feature versions that are deployed on your SharePoint 2010 sites. In your solution manifest file, you can also configure the type of Internet Information Services (IIS) recycling to perform during a solution upgrade.

The following sections go into greater details about these capabilities and how you can use them.

Using Assembly BindingRedirect with SharePoint 2010 Assemblies

The BindingRedirect feature element can be added to your web application's configuration file. It enables you to redirect from earlier versions of installed assemblies to newer versions. Figure 11 shows how the XML configuration from the solution manifest file instructs SharePoint to add binding redirection rules to the web application configuration file. These rules forward any reference to version 1.0 of the assembly to version 2.0. This is required in your solution manifest file if you are upgrading a custom solution that uses assembly versioning and if there are existing instances of the solution and the assembly on your sites.

Figure 11. Binding redirection rules in a solution manifest file
Binding redirection rules in a solution manifest

It is a best practice to use assembly versioning, because it gives you an easy way to track the versions of a solution that are deployed to your production environments.
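As a minimal sketch of what the rules in Figure 11 look like, the following solution manifest fragment assumes a hypothetical Contoso.News.dll assembly deployed to the global assembly cache:

<Assemblies>
  <Assembly Location="Contoso.News.dll" DeploymentTarget="GlobalAssemblyCache">
    <BindingRedirects>
      <BindingRedirect OldVersion="1.0.0.0" />
    </BindingRedirects>
  </Assembly>
</Assemblies>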

SharePoint 2010 Feature Versioning

The support for feature versioning in SharePoint 2010 provides many capabilities that you can use when you are upgrading features. For example, you can use the SPFeature.Version property to determine which versions of a feature are deployed on your farm, and therefore which features must be upgraded. For a code sample that demonstrates how to do this, see SPFeature.Version.

Feature versioning in SharePoint 2010 also enables you to define a value for the SPFeatureDependency.MinimumVersion property to handle feature dependencies. For example, you can use the MinimumVersion property to ensure that a particular version of a dependent feature is activated. Feature dependencies can be added or removed in each new version of a feature.
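As a sketch, a dependent feature declares such a requirement in its Feature.xml roughly as follows (the GUID is a placeholder for the feature being depended on):

<ActivationDependencies>
  <ActivationDependency FeatureId="00000000-0000-0000-0000-000000000000"
                        MinimumVersion="2.0.0.0" />
</ActivationDependencies>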

The SharePoint 2010 feature framework has also been enhanced at the object model level to support feature versioning more easily. You can use the QueryFeatures method to retrieve a list of features, and you can specify both the feature version and whether a feature requires an upgrade. The QueryFeatures method returns an instance of SPFeatureQueryResultCollection, which you can use to access all of the features that must be updated. This method is available from multiple scopes, because it is exposed by the SPWebService, SPWebApplication, SPContentDatabase, and SPSite classes. For more information about this overloaded method, see the QueryFeatures overloads on each of those classes. For an overview of the feature upgrade object model, see Feature Upgrade Object Model.
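For example, a minimal console sketch that finds all farm-wide instances of a feature that need an upgrade might look like the following (the feature GUID is a placeholder for your own feature ID):

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

class FeatureUpgradeQueryDemo
{
    static void Main()
    {
        // Placeholder GUID; substitute the ID of your own feature.
        Guid featureId = new Guid("00000000-0000-0000-0000-000000000000");

        // Passing true returns only the feature instances that need an upgrade.
        SPFeatureQueryResultCollection results =
            SPWebService.ContentService.QueryFeatures(featureId, true);

        foreach (SPFeature feature in results)
        {
            Console.WriteLine("Feature {0} is at version {1} and can be upgraded.",
                              feature.DefinitionId, feature.Version);
        }
    }
}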

The following section summarizes many of the new upgrade actions that you can apply when you are upgrading from one version of a feature to another.

SharePoint 2010 Feature Upgrade Actions

Upgrade actions are defined in the Feature.xml file. The SPFeatureReceiver class contains a FeatureUpgrading method, which you can use to define actions to perform during an upgrade. This method is called during feature upgrade when the feature’s Feature.xml file contains one or more <CustomUpgradeAction> tags, as shown in the following example.

<UpgradeActions>
  <CustomUpgradeAction Name="text">
    ...
  </CustomUpgradeAction>
</UpgradeActions>

Each custom upgrade action has a name, which can be used to differentiate the code that must be executed in the feature receiver. As shown in the following example, you can parameterize custom action instances.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion ="2.0.0.0">
      <!-- First action-->
      <CustomUpgradeAction Name="example">
        <Parameters>
          <Parameter Name="parameter1">Whatever</Parameter>
          <Parameter Name="anotherparameter">Something meaningful</Parameter>
          <Parameter Name="thirdparameter">additional configurations</Parameter>
        </Parameters>
      </CustomUpgradeAction>
      <!-- Second action-->
      <CustomUpgradeAction Name="SecondAction">
        <Parameters>
          <Parameter Name="SomeParameter1">Value</Parameter>
          <Parameter Name="SomeParameter2">Value2</Parameter>
          <Parameter Name="SomeParameter3">Value3</Parameter>
        </Parameters>
      </CustomUpgradeAction>
    </VersionRange>
  </UpgradeActions>
</Feature>

This example contains two CustomUpgradeAction elements, one named example and the other named SecondAction. Both elements have different parameters, which are dependent on the code that you wrote for the FeatureUpgrading event receiver. The following example shows how you can use these upgrade actions and their parameters in your code.

/// <summary>
/// Called when a feature instance is upgraded, once for each of the custom upgrade actions in the Feature.xml file.
/// </summary>
/// <param name="properties">Feature receiver properties</param>
/// <param name="upgradeActionName">Upgrade action name</param>
/// <param name="parameters">Custom upgrade action parameters</param>
public override void FeatureUpgrading(SPFeatureReceiverProperties properties, 
                                      string upgradeActionName, 
                                      System.Collections.Generic.IDictionary<string, string> parameters)
{
    // Do not do anything if the feature scope is not correct.
    if (!(properties.Feature.Parent is SPWeb))
    {
        // Log that the feature scope is incorrect.
        return;
    }

    switch (upgradeActionName)
    {
        case "example":
            FeatureUpgradeManager.UpgradeAction1(parameters["parameter1"], parameters["anotherparameter"],
                                                 parameters["thirdparameter"]);
            break;
        case "SecondAction":
            FeatureUpgradeManager.UpgradeAction2(parameters["SomeParameter1"], parameters["SomeParameter2"],
                                                 parameters["SomeParameter3"]);
            break;
        default:

            // Log that no code exists for this action.
            break;
    }
}

You can have as many upgrade actions as you want, and you can apply them to version ranges. The following example shows how you can apply upgrade actions to version ranges of a feature.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange BeginVersion="1.0.0.0" EndVersion ="2.0.0.0">
      ...
    </VersionRange>
    <VersionRange BeginVersion="2.0.0.1" EndVersion="3.0.0.0">
      ...
    </VersionRange>
    <VersionRange BeginVersion="3.0.0.1" EndVersion="4.0.0.0">
      ...
    </VersionRange>
  </UpgradeActions>
</Feature>

The AddContentTypeField upgrade action can be used to define additional fields for an existing content type. It also provides the option of pushing these changes down to child instances, which is often the preferred behavior. When you initially deploy a content type to a site collection, a definition for it is created at the site collection level. If that content type is used in any subsite or list, a child instance of the content type is created. To ensure that every instance of the specific content type is updated, you must set the PushDown attribute to TRUE, as shown in the following example.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion ="2.0.0.0">
      <AddContentTypeField ContentTypeId="0x0101002b0e208ace0a4b7e83e706b19f32cab9"
                           FieldId="{ccbcd479-94c9-4f3a-95c4-58897da434fe}"
                           PushDown="True"/>
    </VersionRange>
  </UpgradeActions>
</Feature>

For more information about working with content types programmatically, see Introduction to Content Types.

The ApplyElementManifests upgrade action can be used to apply new artifacts to a SharePoint 2010 site without reactivating features. Just as you can add new elements to any new SharePoint elements.xml file, you can instruct SharePoint to apply content from a specific elements file to sites where a given feature is activated.

You can use this upgrade action if you are upgrading an existing feature whose FeatureActivating event receiver performs actions that you do not want to execute again on sites where the feature is deployed. The following example demonstrates how to include this upgrade action in a Feature.xml file.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion ="2.0.0.0">
      <ApplyElementManifests>
        <ElementManifest Location="AdditionalV2Fields\Elements.xml"/>
      </ApplyElementManifests>
    </VersionRange>
  </UpgradeActions>
</Feature>

An example use case for this upgrade action is adding new .webpart files to a feature in a site collection. You can use the ApplyElementManifests upgrade action to add those files without reactivating the feature. Another example involves page layouts that contain initial Web Part instances defined in the file element structure of the feature element file. If you reactivated such a feature, you would get duplicates of those Web Parts on each of the page layouts. In this case, you can use the ElementManifest element of the ApplyElementManifests upgrade action to add new page layouts to a site collection that uses the feature without reactivating the feature.

The MapFile element enables you to map a URL request to an alternative URL. The following example demonstrates how to include this upgrade action in a Feature.xml file.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <MapFile FromPath="Features\MapPathDemo_MapPathDemo\PageDeployment\MyExamplePage.aspx"
             ToPath="Features\MapPathDemo_MapPathDemo\PageDeployment\MyExamplePage2.aspx" />
  </UpgradeActions>
</Feature>

Mapping URLs in this way would be useful to you in a case where you have to deploy a new version of a page that was customized by using SharePoint Designer 2010. The resulting customized page would be served from the content database. When you deploy the new version of the page, the new version will not appear because content for that page is coming from the database and not from the file system. You could work around this problem by using the MapFile element to redirect requests for the old version of the page to the newer version.

It is important to understand that the FeatureUpgrading method is called for each feature instance that will be updated. If you have 10 sites in your site collection and you update a web-scoped feature, the feature receiver will be called 10 times, once for each site context. For more information about how to use these new declarative feature elements, see Feature.xml Changes.

Upgrading SharePoint 2010 Features: A High-Level Walkthrough

This section describes at a high level how you can put these feature-versioning and upgrading capabilities to work. When you create a new version of a feature that is already deployed on a large SharePoint 2010 farm, you must consider two different scenarios: what happens when the feature is activated on a new site and what happens on sites where the feature already exists. When you add new content to the feature, you must first update all of the existing definitions and include instructions for upgrading the feature where it is already deployed.

For example, perhaps you have developed a content type to which you must add a custom site column named City. You do this in the following way:

  1. Add a new element file to the feature that defines the new site column, and modify the Feature.xml file to include the element file.
  2. Update the existing definition of the content type in the existing feature element file. This update will apply to all sites where the feature is newly deployed and activated.
  3. Define the required upgrade actions for the existing sites. In this case, you must ensure that the newly added element file for the additional site column is deployed and that the new site column is associated with the existing content types. To achieve these two objectives, you add the ApplyElementManifests and the AddContentTypeField upgrade actions to your Feature.xml file, as sketched after this list.
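
Putting these three steps together, the upgrade section of the Feature.xml would look roughly like the following sketch, which reuses the content type and field IDs from the earlier examples and assumes a hypothetical element file for the City column:

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange BeginVersion="1.0.0.0" EndVersion="2.0.0.0">
      <ApplyElementManifests>
        <ElementManifest Location="CitySiteColumn\Elements.xml"/>
      </ApplyElementManifests>
      <AddContentTypeField ContentTypeId="0x0101002b0e208ace0a4b7e83e706b19f32cab9"
                           FieldId="{ccbcd479-94c9-4f3a-95c4-58897da434fe}"
                           PushDown="TRUE"/>
    </VersionRange>
  </UpgradeActions>
</Feature>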

When you deploy the new version of the feature to existing sites and upgrade it, the upgrade actions are applied to sites one by one. If you have defined custom upgrade actions, the FeatureUpgrading method will be called as many times as there are instances of the feature activated in your site collection or farm.

Figure 12 shows how the different components of this scenario work together when you perform the upgrade.

Figure 12. Components of a feature upgrade that adds a new element to an existing feature
Add a new element to an existing feature

Different sites might have different versions of a feature deployed on them. In this case, you can create version ranges, which define specific actions to perform when you are upgrading from one version to another. If a version range is not defined, all upgrade actions will be applied during each upgrade.

Figure 13 shows how different upgrade actions can be applied to version ranges.

Figure 13. Applying different upgrade actions to version ranges.
Applying upgrade actions to version ranges

In this example, if a given site is upgrading directly from version 1.0 to version 3.0, all configurations will be applied because you have defined specific actions for upgrading from version 1.0 to version 2.0 and from 2.0 to version 3.0. You have also defined actions that will be applied regardless of feature version.

Code Design Guidelines for Upgrading SharePoint 2010 Features

To provide more flexibility for your code, you should not place your upgrade code directly inside the FeatureUpgrading event receiver. Instead, put the code in some centralized location and refer to it inside the event receiver, as shown in Figure 14.

Figure 14. Centralized feature upgrade manager
Centralized feature upgrade manager

By placing your upgrade code inside a centralized utility class, you increase both the reusability and the testability of your code, because you can perform the same actions in multiple locations. You should also try to design your custom upgrade actions as generically as possible, using parameters to make them applicable to specific upgrade scenarios.
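The FeatureUpgradeManager class called from the earlier event receiver is exactly this kind of utility class; it is an illustrative name, not a SharePoint API. A minimal sketch might look like this:

public static class FeatureUpgradeManager
{
    // Each method encapsulates one named upgrade action so that the same
    // logic can be invoked from multiple locations and unit tested outside
    // the FeatureUpgrading event receiver.
    public static void UpgradeAction1(string parameter1, string anotherParameter, string thirdParameter)
    {
        // Perform the actual upgrade work here, for example updating list
        // schemas or provisioning new artifacts.
    }

    public static void UpgradeAction2(string someParameter1, string someParameter2, string someParameter3)
    {
        // A second, independently testable upgrade action.
    }
}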

Solution Lifecycles: Upgrading SharePoint 2010 Solutions

If you are upgrading a farm (full-trust) solution, you must first deploy the new version of your solution package to a farm.

Execute either of the following commands from a command prompt to deploy updates to a SharePoint farm. The first example uses the Stsadm.exe command-line tool.

stsadm -o upgradesolution -name solution.wsp -filename solution.wsp

The second example uses the Update-SPSolution Windows PowerShell cmdlet.

Update-SPSolution -Identity contoso_solution.wsp -LiteralPath c:\contoso_solution_v2.wsp -GACDeployment

After the new version is deployed, you can perform the actual upgrade, which executes the upgrade actions that you defined in your Feature.xml files.

A farm solution upgrade can be performed either farm-wide or at a more granular level by using the object model. A farm-wide upgrade is performed by using the Psconfig command-line tool, as shown in the following example.

psconfig -cmd upgrade -inplace b2b
Note: This tool causes a service break on the existing sites. During the upgrade, all feature instances throughout the farm for which newer versions are available will be upgraded.

You can also perform upgrades for individual features at the site level by using the Upgrade method of the SPFeature class. This method causes no service break on your farm, but you are responsible for managing the version upgrade from your code. For a code example that demonstrates how to use this method, see SPFeature.Upgrade.
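As a sketch (the site URL and feature GUID are placeholders), such a granular, code-driven upgrade of a single site collection might look like this:

using System;
using Microsoft.SharePoint;

class SiteFeatureUpgradeDemo
{
    static void Main()
    {
        Guid featureId = new Guid("00000000-0000-0000-0000-000000000000");

        using (SPSite site = new SPSite("http://intranet"))
        {
            // Passing true returns only the instances that need an upgrade.
            foreach (SPFeature feature in site.QueryFeatures(featureId, true))
            {
                // Passing false avoids forcing an upgrade when none is needed.
                feature.Upgrade(false);
            }
        }
    }
}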

Upgrading a sandboxed solution at the site collection level is much more straightforward. Just upload the SharePoint solution package (.wsp file) that contains the upgraded features. If you have a previous version of a sandboxed solution in your solution gallery and you upload a newer version, an Upgrade option appears in the UI, as shown in Figure 15.

Figure 15. Upgrading a sandboxed solution
Upgrading a sandboxed solution

After you select the Upgrade option and the upgrade starts, all features in the sandboxed solution are upgraded.

Conclusion

This article has discussed some considerations and examples of Application Lifecycle Management (ALM) design that are specific to SharePoint 2010, and it has also enumerated and described the most important capabilities and tools that you can integrate into the ALM processes that you choose to establish in your enterprise. The SharePoint 2010 feature framework and solution packaging model provide flexibility and power that you can put to work in your ALM processes.

How To : SharePoint Cross-site Publishing and Free code for Web Part

Cross-site publishing is one of the powerful new capabilities in SharePoint 2013.  It enables the separation of data entry from display and breaks down the container barriers that have traditionally existed in SharePoint (ex: rolling up information across site collections). 


Cross-site publishing is delivered through search and a number of new features, including list/library catalogs, catalog connections, and the content search web part.  Unfortunately, SharePoint Online/Office 365 doesn’t currently support these features.  Until they are added to the service (possibly in a quarterly update), customers will be looking for alternatives to close the gap.  In this post, I will outline several alternatives for delivering cross-site and search-driven content in SharePoint Online and how to template these views for reuse.

I’m a huge proponent of SharePoint Online.  After visiting several Microsoft data centers, I feel confident that Microsoft is better positioned to run SharePoint infrastructure than almost any organization in the world.  SharePoint Online has very close feature parity to SharePoint on-premise, with the primary gaps existing in cross-site publishing and advanced business intelligence.  Although these capabilities have acceptable alternatives in the cloud (as will be outlined in this post), organizations looking to maximize the cloud might consider SharePoint running in IaaS for immediate access to these features.

 

Apps for SharePoint

The new SharePoint app model is fully supported in SharePoint Online and can be used to deliver customizations to SharePoint using any web technology.  New SharePoint APIs can be used with the app model to deliver an experience similar to cross-site publishing.  In fact, the content search web part could be re-written for delivery through the app model as an “App Part” for SharePoint Online. 
Although the app model provides great flexibility and reuse, it does come with some drawbacks.  Because an app part is delivered through a glorified IFRAME, it would be challenging to navigate to a new page from within the app part.  A link within the app would only navigate within the IFRAME (not the parent of the IFRAME).  Secondly, there isn’t a great mechanism for templating a site to automatically leverage an app part on its page(s).  Apps do not work with site templates, so a site that contains an app cannot be saved as a template.  Apps can be “stapled” to sites, but the app installed event (which would be needed to add the app part to a page) only fires when the app is installed into the app catalog.

REST APIs and Script Editor

The script editor web part is a powerful new tool that can help deliver flexible customization into SharePoint Online.  The script editor web part allows a block of client-side script to be added to any wiki or web part page in a site.  Combined with the new SharePoint REST APIs, the script editor web part can deliver mash-ups very similar to cross-site publishing and the content search web part.  Unlike apps for SharePoint, the script editor isn’t constrained by IFRAME containers, app permissions, or templating limitations.  In fact, a well-configured script editor web part could be exported and re-imported into the web part gallery for reuse.

Cross-site publishing leverages “catalogs” for precise querying of specific content.  Any List/Library can be designated as a catalog.  By making this designation, SharePoint will automatically create managed properties for columns of the List/Library and ultimately generate a search result source in sites that consume the catalog.  Although SharePoint Online doesn’t support catalogs, it supports the building blocks, such as managed properties and result sources.  These can be manually configured to provide the same precise querying in SharePoint Online and exploited in the script editor web part for display.

Calling Search REST APIs

<div id="divContentContainer"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "search/query?Querytext='ContentType:News'",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>

 

An easier approach might be to directly reference a list/library in the REST call of our client-side script.  This wouldn’t require manual search configuration and would provide real-time publishing (no waiting for new items to get indexed).  You could think of this approach as similar to a content by query web part that works across site collections (possibly even farms), and the REST API makes it all possible!

List REST APIs

<div id="divContentContainer"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 0",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>

 

The content search web part uses display templates to render search results in different arrangements (ex: list with images, image carousel, etc.).  There are two types of display templates the content search web part leverages: the control template, which renders the container around the items, and the item template, which renders each individual item in the search results.  This is very similar to the way a Repeater control works in ASP.NET.  Display templates are authored using HTML, but are converted to client-side script automatically by SharePoint for rendering.  I mention this because our approach is very similar: we will leverage a container and then loop through and render items in script.  In fact, all the examples in this post were converted from display templates on a public site I’m working on.

Item display template for content search web part

<!--#_
var encodedId = $htmlEncode(ctx.ClientControl.get_nextUniqueId() + "_ImageTitle_");
var rem = index % 3;
var even = true;
if (rem == 1)
    even = false;

var pictureURL = $getItemValue(ctx, "Picture URL");
var pictureId = encodedId + "picture";
var pictureMarkup = Srch.ContentBySearch.getPictureMarkup(pictureURL, 140, 90, ctx.CurrentItem, "mtcImg140", line1, pictureId);
var pictureLinkId = encodedId + "pictureLink";
var pictureContainerId = encodedId + "pictureContainer";
var dataContainerId = encodedId + "dataContainer";
var dataContainerOverlayId = encodedId + "dataContainerOverlay";
var line1LinkId = encodedId + "line1Link";
var line1Id = encodedId + "line1";
_#-->
<div style="width: 320px; float: left; display: table; margin-bottom: 10px; margin-top: 5px;">
   <a href="_#= linkURL =#_">
      <div style="float: left; width: 140px; padding-right: 10px;">
         <img src="_#= pictureURL =#_" class="mtcImg140" style="width: 140px;" />
      </div>
      <div style="float: left; width: 170px">
         <div class="mtcProfileHeader mtcProfileHeaderP">_#= line1 =#_</div>
      </div>
   </a>
</div>

 

Script equivalent

<div id="divUnfeaturedNews"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://richdizzcom.sharepoint.com/sites/dallasmtcauth/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 0",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //get the details for each item
                var listData = data.d.results;
                var itemCount = listData.length;
                var processedCount = 0;
                var ul = $("<ul style='list-style-type: none; padding-left: 0px;' class='cbs-List'>");
                for (var i = 0; i < listData.length; i++) {
                    $.ajax({
                        url: listData[i].__metadata["uri"] + "/FieldValuesAsHtml",
                        type: "GET",
                        headers: { "Accept": "application/json;odata=verbose" },
                        success: function (data) {
                            processedCount++;
                            var htmlStr = "<li style='display: inline;'><div style='width: 320px; float: left; display: table; margin-bottom: 10px; margin-top: 5px;'>";
                            htmlStr += "<a href='#'>";
                            htmlStr += "<div style='float: left; width: 140px; padding-right: 10px;'>";
                            htmlStr += setImageWidth(data.d.PublishingRollupImage, '140');
                            htmlStr += "</div>";
                            htmlStr += "<div style='float: left; width: 170px'>";
                            htmlStr += "<div class='mtcProfileHeader mtcProfileHeaderP'>" + data.d.Title + "</div>";
                            htmlStr += "</div></a></div></li>";
                            ul.append($(htmlStr));
                            if (processedCount == itemCount) {
                                $("#divUnfeaturedNews").append(ul);
                            }
                        },
                        error: function (data) {
                            alert(data.statusText);
                        }
                    });
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });

    function setImageWidth(imgString, width) {
        var img = $(imgString);
        img.css('width', width);
        return img[0].outerHTML;
    }
</script>

 

Even one of the more complex carousel views from my site took less than 30 minutes to convert to the script editor approach.

Advanced carousel script

<div id="divFeaturedNews">
    <div class="mtc-Slideshow" id="divSlideShow" style="width: 610px;">
        <div style="width: 100%; float: left;">
            <div id="divSlideShowSection">
                <div style="width: 100%;">
                    <div class="mtc-SlideshowItems" id="divSlideShowSectionContainer" style="width: 610px; height: 275px; float: left; border-style: none; overflow: hidden; position: relative;">
                        <div id="divFeaturedNewsItemContainer">
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </div>
</div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://richdizzcom.sharepoint.com/sites/dallasmtcauth/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 1&$top=4",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                var listData = data.d.results;
                for (var i = 0; i < listData.length; i++) {
                    getItemDetails(listData, i, listData.length);
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });
    var processCount = 0;
    function getItemDetails(listData, i, count) {
        $.ajax({
            url: listData[i].__metadata["uri"] + "/FieldValuesAsHtml",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                processCount++;
                var itemHtml = "<div class='mtcItems' id='divPic_" + i + "' style='width: 610px; height: 275px; float: left; position: absolute; border-bottom: 1px dotted #ababab; z-index: 1; left: 0px;'>";
                itemHtml += "<div id='container_" + i + "' style='width: 610px; height: 275px; float: left;'>";
                itemHtml += "<a href='#' title='" + data.d.Caption_x005f_x0020_x005f_Title + "' style='width: 610px; height: 275px;'>";
                itemHtml += data.d.Feature_x005f_x0020_x005f_Image;
                itemHtml += "</a></div></div>";
                itemHtml += "<div class='titleContainerClass' id='divTitle_" + i + "' data-originalidx='" + i + "' data-currentidx='" + i + "' style='height: 25px; z-index: 2; position: absolute; background-color: rgba(255, 255, 255, 0.8); cursor: pointer; padding-right: 10px; margin: 0px; padding-left: 10px; margin-top: 4px; color: #000; font-size: 18px;' onclick='changeSlide(this);'>";
                itemHtml += data.d.Caption_x005f_x0020_x005f_Title;
                itemHtml += "<span id='currentSpan_" + i + "' style='display: none; font-size: 16px;'>" + data.d.Caption_x005f_x0020_x005f_Body + "</span></div>";
                $('#divFeaturedNewsItemContainer').append(itemHtml);

                if (processCount == count) {
                    allItemsLoaded();
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    }
    window.mtc_init = function (controlDiv) {
        var slideItems = controlDiv.children;
        for (var i = 0; i < slideItems.length; i++) {
            if (i > 0) {
                slideItems[i].style.left = '610px';
            }
        };
    };

    function allItemsLoaded() {
        var slideshows = document.querySelectorAll(".mtc-SlideshowItems");
        for (var i = 0; i < slideshows.length; i++) {
            mtc_init(slideshows[i].children[0]);
        }

        var div = $('#divTitle_0');
        cssTitle(div, true);
        var top = 160;
        for (i = 1; i < 4; i++) {
            var divx = $('#divTitle_' + i);
            cssTitle(divx, false);
            divx.css('top', top);
            top += 35;
        }
    }

    function cssTitle(div, selected) {
        if (selected) {
            div.css('height', 'auto');
            div.css('width', '300px');
            div.css('top', '10px');
            div.css('left', '0px');
            div.css('font-size', '26px');
            div.css('padding-top', '5px');
            div.css('padding-bottom', '5px');
            div.find('span').css('display', 'block');
        }
        else {
            div.css('height', '25px');
            div.css('width', 'auto');
            div.css('left', '0px');
            div.css('font-size', '18px');
            div.css('padding-top', '0px');
            div.css('padding-bottom', '0px');
            div.find('span').css('display', 'none');
        }
    }

    window.changeSlide = function (item) {
        //get all title containers
        var listItems = document.querySelectorAll('.titleContainerClass');
        var currentIndexVals = { 0: null, 1: null, 2: null, 3: null };
        var newIndexVals = { 0: null, 1: null, 2: null, 3: null };

        for (var i = 0; i < listItems.length; i++) {
            //current index
            currentIndexVals[i] = parseInt(listItems[i].getAttribute('data-currentidx'));
        }

        var selectedIndex = 0; //selected index will always be 0
        var leftOffset = '';
        var originalSelectedIndex = '';

        var nextSelected = '';
        var originalNextIndex = '';

        if (item == null) {
            var item0 = document.querySelector('[data-currentidx="' + currentIndexVals[0] + '"]');
            originalSelectedIndex = parseInt(item0.getAttribute('data-originalidx'));
            originalNextIndex = originalSelectedIndex + 1;
            nextSelected = currentIndexVals[0] + 1;
        }
        else {
            nextSelected = item.getAttribute('data-currentidx');
            originalNextIndex = item.getAttribute('data-originalidx');
        }

        if (nextSelected == 0) { return; }

        for (i = 0; i < listItems.length; i++) {
            if (currentIndexVals[i] == selectedIndex) {
                //this is the selected item, so move to bottom and animate
                var div = $('[data-currentidx="0"]');
                cssTitle(div, false);
                div.css('left', '-400px');
                div.css('top', '230px');

                newIndexVals[i] = 3;
                var item0 = document.querySelector('[data-currentidx="0"]');
                originalSelectedIndex = item0.getAttribute('data-originalidx');

                //animate
                div.delay(500).animate(
                    { left: '0px' }, 500, function () {
                    });
            }
            else if (currentIndexVals[i] == nextSelected) {
                //this is the NEW selected item, so resize and slide in as selected
                var div = $('[data-currentidx="' + nextSelected + '"]');
                cssTitle(div, true);
                div.css('left', '-610px');

                newIndexVals[i] = 0;

                //animate
                div.delay(500).animate(
                    { left: '0px' }, 500, function () {
                    });
            }
            else {
                //move up in queue
                var curIdx = currentIndexVals[i];
                var div = $('[data-currentidx="' + curIdx + '"]');

                var topStr = div.css('top');
                var topInt = parseInt(topStr.substring(0, topStr.length - 1));

                if (curIdx != 1 && nextSelected == 1 || curIdx > nextSelected) {
                    topInt = topInt - 35;
                    if (curIdx - 1 == 2) { newIndexVals[i] = 2 };
                    if (curIdx - 1 == 1) { newIndexVals[i] = 1 };
                }

                //move up
                div.animate(
                    { top: topInt }, 500, function () {
                    });
            }
        };

        if (originalNextIndex < 0)
            originalNextIndex = listItems.length - 1;

        //adjust pictures
        $('#divPic_' + originalNextIndex).css('left', '610px');
        leftOffset = '-610px';

        $('#divPic_' + originalSelectedIndex).animate(
            { left: leftOffset }, 500, function () {
            });

        $('#divPic_' + originalNextIndex).animate(
            { left: '0px' }, 500, function () {
            });

        var item0 = document.querySelector('[data-currentidx="' + currentIndexVals[0] + '"]');
        var item1 = document.querySelector('[data-currentidx="' + currentIndexVals[1] + '"]');
        var item2 = document.querySelector('[data-currentidx="' + currentIndexVals[2] + '"]');
        var item3 = document.querySelector('[data-currentidx="' + currentIndexVals[3] + '"]');
        if (newIndexVals[0] != null) { item0.setAttribute('data-currentidx', newIndexVals[0]) };
        if (newIndexVals[1] != null) { item1.setAttribute('data-currentidx', newIndexVals[1]) };
        if (newIndexVals[2] != null) { item2.setAttribute('data-currentidx', newIndexVals[2]) };
        if (newIndexVals[3] != null) { item3.setAttribute('data-currentidx', newIndexVals[3]) };
    };
</script>

 

End-result of script editors in SharePoint Online

Separate authoring site collection

Final Thoughts

How To : Create, Edit and Maintaining a Coded UI Test for Silverlight Application

Using the Microsoft Visual Studio 2013 Coded UI Test plugin for Silverlight, you can create Coded UI Tests or action recordings for Silverlight 5.0 applications.

Using Microsoft Visual Studio 2010 Feature Pack 2, you can create coded UI tests or action recordings for Silverlight 4 applications. Action recordings let you fast-forward through steps in a manual test. For more information about action recordings or coded UI tests, see How to: Create an Action Recording or How to: Create a Coded UI Test.

In this walkthrough, you will learn the procedures that are required to test a Silverlight control in a Silverlight-based application: adding the SilverlightUIAutomationHelper.dll to your project, creating a coded UI test, and running it.

Prerequisites

 

For this walkthrough you will need:

To prepare the walkthrough

  1. Verify that you have the Silverlight 4 developer runtime available at Silverlight Developer 4 for Developers.

  2. Verify that you have completed the procedures in Walkthrough: Creating a RIA Services Solution.

    The result will be a simple Silverlight application that uses a Silverlight grid control. Later, you will use the grid control in this walkthrough and perform coded UI tests on it.

  3. For more information about supported and unsupported Silverlight controls, see How to: Set Up Your Silverlight Application for Testing.

  4. With the RIAServicesExample you created in Walkthrough: Creating a RIA Services Solution running, copy the address of the Web application to the clipboard or a notepad file. For example, the address might resemble this: http://localhost: <port number>/RIAServicesExampleTestPage.aspx.

Add the SilverlightUIAutomationHelper.dll to Your Silverlight 4 Project

 

To test your Silverlight applications, you must add Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper.dll as a reference to your Silverlight 4 application so that the Silverlight controls can be identified. This helper assembly instruments your Silverlight application so that information about a control is available to the Silverlight plugin API used by your coded UI test or action recording. This assembly cannot be redistributed; therefore, you must add the reference conditionally when you build the application. That way, the assembly is not redistributed when you deploy your software to a customer.

To add the SilverlightUIAutomationHelper.dll

  1. For each Silverlight project in your solution that you want to test, you must add the SilverlightUIAutomationHelper.dll. In Solution Explorer, right-click the RIAServicesExample project, select Unload Project.

    The project is displayed in Solution Explorer as RIAServicesExample (unavailable).

  2. Right-click the project again and then click Edit RIAServicesExample.csproj.

    The RIAServicesExample.csproj file is opened in the Code Editor. You will see <PropertyGroup> nodes followed by <ItemGroup> nodes. You must make the following two modifications:

    1. To set the production condition, add the following entry to the first <PropertyGroup> node:

       
      <Production Condition="'$(Production)'==''">False</Production>
      
    2. To add the DLL when the build is not a production build, insert the following <Choose> node after the <PropertyGroup> nodes, but before the <ItemGroup> nodes:

       
      <Choose>
        <When Condition=" '$(Production)'=='False' ">
          <ItemGroup>
            <Reference Include="Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper">
            </Reference>
          </ItemGroup>
        </When>
      </Choose>
      
  3. To save the file, click Save.

  4. To reload these changes, right-click the project and then click Reload Project.

    Caution

    If you have multiple Silverlight projects that you want to test, you must follow these steps for each project.

    Important

    To remove the SilverlightUIAutomationHelper.dll so that it is not redistributed with your production code, set the production condition value to true in the first <PropertyGroup> node. In this manner, the DLL is no longer added as a reference by the <Choose> node that you added to the project in the previous procedure. You can also set an environment variable named Production to the value True and then use msbuild to build the Silverlight project without the SilverlightUIAutomationHelper.dll.
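    For example, switching the default to a production build is a one-line change in the project file (a sketch based on the steps above):

      <Production Condition="'$(Production)'==''">True</Production>

    Alternatively, leave the project file alone and pass the property on the msbuild command line, for example: msbuild RIAServicesExample.csproj /p:Production=True.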

Create a Coded UI Test for RIAServicesExample Silverlight Application

 

To Create a Coded UI Test

  1. In Solution Explorer, right-click the solution, click Add and then select New Project.

    The Add New Project dialog box appears.

  2. In the Installed Templates pane, expand either Visual C# or Visual Basic, and then select Test.

  3. In the middle pane, select the Test Project template.

  4. Click OK.

    In Solution Explorer, the new test project named TestProject1 is added to your solution. Either the UnitTest1.cs or UnitTest1.vb file appears in the Code Editor. You can close the UnitTest1 file because it is not used in this walkthrough.

  5. In Solution Explorer, right-click TestProject1, click Add and then select Coded UI test.

    The Generate Code for Coded UI Test dialog box appears.

  6. Select the Record actions, edit UI map or add assertions option and then click OK.

    The UIMap – Coded UI Test Builder appears.

    For more information about the options in the dialog box, see How to: Create a Coded UI Test.

  7. Click Start Recording on the UIMap – Coded UI Test Builder. After a few seconds, the Coded UI Test Builder will be ready.

    Start recording UI

  8. Launch Internet Explorer.

  9. In Internet Explorer’s address bar, enter the address of the Web application that you copied in a previous procedure. For example:

    http://localhost: <port number>/RIAServicesExampleTestPage.aspx

  10. Click one or two of the column headers to sort the data.

  11. Close Internet Explorer.

  12. On the UIMap – Coded UI Test Builder, click Generate Code.

  13. In Method Name, type SimpleSilverlightAppTest, and then click Add and Generate. After a few seconds, the coded UI test appears and is added to the solution.

  14. Close the UIMap – Coded UI Test Builder.

    The CodedUITest1.cs file appears in the Code Editor; a sketch of its typical shape follows this procedure.

    Note

    You can assign a unique automation property based on the type of Silverlight control in your application. For more information, see Set a Unique Automation Property for Silverlight Controls for Testing.
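
For orientation, the generated CodedUITest1.cs usually has roughly the following shape. This is a sketch rather than the exact generated output; the method and map names come from the recording above, and the rest will vary with your project:

    using Microsoft.VisualStudio.TestTools.UITesting;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [CodedUITest]
    public class CodedUITest1
    {
        private UIMap map;

        [TestMethod]
        public void CodedUITestMethod1()
        {
            // Replays the actions recorded as SimpleSilverlightAppTest.
            this.UIMap.SimpleSilverlightAppTest();
        }

        // Lazily creates the UIMap that holds the recorded actions and assertions.
        public UIMap UIMap
        {
            get
            {
                if (this.map == null)
                {
                    this.map = new UIMap();
                }

                return this.map;
            }
        }
    }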

Run the Coded UI Test on the RIAServicesExample Silverlight Application

 

To run the coded UI test

  • On the Test menu, select Windows, and then click Test View. In Test View, select CodedUITestMethod1 under the Test Name column, and then click Run Selection in the toolbar.

    The coded UI test should successfully run using the Silverlight data grid control.

A Look At : SharePoint 2013 Site Templates

SharePoint 2013 offers a vast variety of out-of-the-box site templates. One of the success factors of your SharePoint deployment is choosing the most suitable site template that meets your business needs.

I’ve been asked many times which site template can serve particular required needs and what differs one template from another, so I decided to write a quick overview of all the available SharePoint 2013 site templates and their common uses.

Collaboration Site Templates

  • Team Site – The most common SharePoint site template, mainly used by teams to collaborate, organize, create, and share information and documents.

  • Blog – a site on which a user or group of users write opinions and share information.

  • Developer Site – this site template is focused on Apps for Office development. Developers can build, test and publish their apps here.

  • Project Site – this site template is used for managing and collaborating on a project. Project site coordinates project status and all additional information relevant to the project.

  • Community Site – a site where the community members can explore, discover content and discuss common topics.

 

Enterprise Site Templates

  • Document Center – this site is used to centrally manage documents in your enterprise.

  • eDiscovery Center – this site is used to manage, search, and export content for investigations and legal matters.

  • Records Center – this site is used to submit and find important documents that should be stored for long-term archival.

  • Business Intelligence Center – this site is used for providing access to Business Intelligence content in SharePoint.

  • Enterprise Search Center – this site delivers an enterprise search experience.  Users can access the enterprise search center to perform general searches, people searches, conversation or video searches, all in one place. You can easily customize search results pages.

  • My Site Host – this site is used for hosting public profile pages and personal sites. This site can be available after configuration of the User Profile Service Application.

  • Community Portal – this site is used for discovering new communities across the enterprise.

  • Basic Search Center – this site delivers a basic search experience.

  • Visio Process Repository – this site allows you to share and view Visio process diagrams.

Publishing Site Templates

  • Publishing Portal – this site template is used for an internet-facing site or a large intranet portal.

  • Enterprise Wiki – this site is used for publishing knowledge that you want to share across the enterprise.

  • Product Catalog – this site is used for managing product catalogs.

If none of those SharePoint site templates meets your needs, you can always create custom templates.

 

This will be the focus of a future blog post, as I am busy finishing a FREE Custom Knowledge Base Site Template.

Some of the features will include :

  • Creating an ALM web and site template, setting up life cycle management and deployment
  • Advanced functionality using Managed Metadata and BCS
  • Document Conversion using Word Automation Services
  • Using the search to build out our feature functionality
  • An Office 365 and SharePoint Online version

 

How To : Using the Proxy Pattern to implement Code Access Security

There are many ways to secure different parts of your application. The security of running code in .NET revolves around the concept of Code Access Security (CAS).

CAS

CAS determines the trustworthiness of an assembly based upon its origin and the characteristics of the assembly itself, such as its hash value. For example, code installed locally on the machine is more trusted than code downloaded from the Internet. The runtime will also validate an assembly’s metadata and type safety before that code is allowed to run.

 

There are many ways to write secure code and protect data using the .NET Framework. In this post, we explore such things as controlling access to types, encryption and decryption, random numbers, securely storing data, and using programmatic and declarative security.
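
As a small taste of those last two styles, here is a minimal sketch of my own (not part of the recipe that follows; the path and method names are illustrative only) contrasting declarative and programmatic demands:

    using System.Security.Permissions;

    static class SecuredOperations
    {
        // Declarative: the runtime demands read permission before Secret() runs.
        [FileIOPermission(SecurityAction.Demand, Read = @"C:\data")]
        public static void Secret()
        {
            // ... work with C:\data ...
        }

        // Programmatic: demand the same permission explicitly in code.
        public static void SecretExplicit()
        {
            new FileIOPermission(FileIOPermissionAccess.Read, @"C:\data").Demand();
            // ... work with C:\data ...
        }
    }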

 

Controlling Access to Types in a Local Assembly

 

Problem

You have an existing class that contains sensitive data, and you do not want clients to have direct access to any objects of this class. Instead, you want an intermediary object to talk to the clients and to allow access to sensitive data based on the client’s credentials. What’s more, you would also like to have specific queries and modifications to the sensitive data tracked, so that if an attacker manages to access the object, you will have a log of what the attacker was attempting to do.

Solution

Use the proxy design pattern to allow clients to talk directly to a proxy object. This proxy object will act as gatekeeper to the class that contains the sensitive data. To keep malicious users from accessing the class itself, make it private, which will at least keep code without ReflectionPermissionFlag.MemberAccess (currently granted only in fully trusted scenarios, such as code executing interactively on a local machine) from getting at it.

The namespaces we will be using are:

 
 
    using System;
    using System.IO;
    using System.Security;
    using System.Security.Permissions;
    using System.Security.Principal;

Let’s start this design by creating an interface, as shown in Example 17-1, “ICompanyData interface”, that will be common to both the proxy objects and the object that contains sensitive data.

Example 17-1. ICompanyData interface

 
 
internal interface ICompanyData
{
    string AdminUserName
    {
        get;
        set;
    }

    string AdminPwd
    {
        get;
        set;
    }
    string CEOPhoneNumExt
    {
        get;
        set;
    }
    void RefreshData();
    void SaveNewData();
}

The CompanyData class shown in Example 17-2, “CompanyData class” is the underlying object that is “expensive” to create.

Example 17-2. CompanyData class

 
 
internal class CompanyData : ICompanyData
{
    public CompanyData()
    {
        Console.WriteLine("[CONCRETE] CompanyData Created");
        // Perform expensive initialization here.
        this.AdminUserName = "admin";
        this.AdminPwd = "password";
        this.CEOPhoneNumExt = "0000";
    }

    public string AdminUserName
    {
        get;
        set;
    }

    public string AdminPwd
    {
        get;
        set;
    }

    public string CEOPhoneNumExt
    {
        get;
        set;
    }

    public void RefreshData()
    {
        Console.WriteLine("[CONCRETE] Data Refreshed");
    }

    public void SaveNewData()
    {
        Console.WriteLine("[CONCRETE] Data Saved");
    }
}

The code shown in Example 17-3, “CompanyDataSecProxy security proxy class” for the security proxy class checks the caller’s permissions to determine whether the CompanyData object should be created and its methods or properties called.

Example 17-3. CompanyDataSecProxy security proxy class

 
 
public class CompanyDataSecProxy : ICompanyData
{
    public CompanyDataSecProxy()
    {
        Console.WriteLine("[SECPROXY] Created");

        // Must set principal policy first.
        AppDomain.CurrentDomain.SetPrincipalPolicy(
            PrincipalPolicy.WindowsPrincipal);
    }

    private ICompanyData coData = null;
    private PrincipalPermission admPerm =
        new PrincipalPermission(null, @"BUILTIN\Administrators", true);
    private PrincipalPermission guestPerm =
        new PrincipalPermission(null, @"BUILTIN\Guests", true);
    private PrincipalPermission powerPerm =
        new PrincipalPermission(null, @"BUILTIN\Power Users", true);
    private PrincipalPermission userPerm =
        new PrincipalPermission(null, @"BUILTIN\Users", true);

    public string AdminUserName
    {
        get
        {
            string userName = "";
            try
            {
                admPerm.Demand();
                Startup();
                userName = coData.AdminUserName;
            }
            catch (SecurityException e)
            {
                Console.WriteLine("AdminUserName_get failed! {0}", e.ToString());
            }
            return (userName);
        }
        set
        {
            try
            {
                admPerm.Demand();
                Startup();
                coData.AdminUserName = value;
            }
            catch (SecurityException e)
            {
                Console.WriteLine("AdminUserName_set failed! {0}", e.ToString());
            }
        }
    }

    public string AdminPwd
    {
        get
        {
            string pwd = "";
            try
            {
                admPerm.Demand();
                Startup();
                pwd = coData.AdminPwd;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("AdminPwd_get Failed! {0}",e.ToString());
            }

            return (pwd);
        }
        set
        {
            try
            {
                admPerm.Demand();
                Startup();
                coData.AdminPwd = value;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("AdminPwd_set Failed! {0}",e.ToString());
            }
        }
    }

    public string CEOPhoneNumExt
    {
        get
        {
            string ceoPhoneNum = "";
            try
            {
                admPerm.Union(powerPerm).Demand();
                Startup();
                ceoPhoneNum = coData.CEOPhoneNumExt;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("CEOPhoneNum_set Failed! {0}",e.ToString());
            }
            return (ceoPhoneNum);
        }
        set
        {
            try
            {
                admPerm.Demand();
                Startup();
                coData.CEOPhoneNumExt = value;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("CEOPhoneNum_set Failed! {0}",e.ToString());
            }
        }
    }
    public void RefreshData()
    {
        try
        {
            admPerm.Union(powerPerm.Union(userPerm)).Demand();
            Startup();
            Console.WriteLine("[SECPROXY] Data Refreshed");
            coData.RefreshData();
        }
        catch(SecurityException e)
        {
            Console.WriteLine("RefreshData Failed! {0}",e.ToString());
        }
    }

    public void SaveNewData()
    {
        try
        {
            admPerm.Union(powerPerm).Demand();
            Startup();
            Console.WriteLine("[SECPROXY] Data Saved");
            coData.SaveNewData();
        }
        catch(SecurityException e)
        {
            Console.WriteLine("SaveNewData Failed! {0}",e.ToString());
        }
    }

    // DO NOT forget to use [#define DOTRACE] to control the tracing proxy.
    private void Startup()
    {
        if (coData == null)
        {
#if (DOTRACE)
            coData = new CompanyDataTraceProxy();
#else
            coData = new CompanyData();
#endif
            Console.WriteLine("[SECPROXY] Refresh Data");
            coData.RefreshData();
        }
    }
}

When creating the PrincipalPermissions as part of the object construction, you are using string representations of the built-in groups ("BUILTIN\Administrators") to set up the principal role. However, the names of these groups may differ depending on the locale the code runs under. To ease localization, it would be more robust to resolve these names from their well-known security identifiers, which are defined independently of locale. We used literal text here to clarify what is being done and to show how roles such as Power Users are addressed by name.
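
As a locale-independent alternative (a sketch of my own, not part of the original recipe), you can resolve the group name from its well-known security identifier before constructing the permission:

    using System.Security.Permissions;
    using System.Security.Principal;

    // Resolve the localized name of the built-in Administrators group from
    // its well-known SID, then demand it exactly as the proxy does above.
    SecurityIdentifier adminSid =
        new SecurityIdentifier(WellKnownSidType.BuiltinAdministratorsSid, null);
    string adminRole = adminSid.Translate(typeof(NTAccount)).Value;  // "BUILTIN\Administrators" on English systems

    PrincipalPermission admPerm = new PrincipalPermission(null, adminRole, true);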

If the call to the CompanyData object passes through the CompanyDataSecProxy, then the user has permissions to access the underlying data. Any access to this data may be logged, so the administrator can check for any attempt to hack the CompanyData object. The code shown in Example 17-4, “CompanyDataTraceProxy tracing proxy class” is the tracing proxy used to log access to the various method and property access points in the CompanyData object (note that the CompanyDataSecProxy contains the code to turn this proxy object on or off).

Example 17-4. CompanyDataTraceProxy tracing proxy class

 
 
public class CompanyDataTraceProxy : ICompanyData
{    
    public CompanyDataTraceProxy() 
    {
        Console.WriteLine("[TRACEPROXY] Created");
        string path = Path.GetTempPath() + @"\CompanyAccessTraceFile.txt";
        fileStream = new FileStream(path, FileMode.Append,
            FileAccess.Write, FileShare.None);
        traceWriter = new StreamWriter(fileStream);
        coData = new CompanyData();
    }

    private ICompanyData coData = null;
    private FileStream fileStream = null;
    private StreamWriter traceWriter = null;

    public string AdminPwd
    {
        get
        {
            traceWriter.WriteLine("AdminPwd read by user.");
            traceWriter.Flush();
            return (coData.AdminPwd);
        }
        set
        {
            traceWriter.WriteLine("AdminPwd written by user.");
            traceWriter.Flush();
            coData.AdminPwd = value;
        }
    }

    public string AdminUserName
    {
        get
        {
            traceWriter.WriteLine("AdminUserName read by user.");
            traceWriter.Flush();
            return (coData.AdminUserName);
        }
        set
        {
            traceWriter.WriteLine("AdminUserName written by user.");
            traceWriter.Flush(); 
            coData.AdminUserName = value;
        }
    }

    public string CEOPhoneNumExt
    {
        get
        {
            traceWriter.WriteLine("CEOPhoneNumExt read by user.");
            traceWriter.Flush();
            return (coData.CEOPhoneNumExt);
        }
        set
        {
            traceWriter.WriteLine("CEOPhoneNumExt written by user.");
            traceWriter.Flush();
            coData.CEOPhoneNumExt = value;
        }
    }

    public void RefreshData()
    {
        Console.WriteLine("[TRACEPROXY] Refresh Data");
        coData.RefreshData();
    }

    public void SaveNewData()
    {
        Console.WriteLine("[TRACEPROXY] Save Data");
        coData.SaveNewData();
    }
}

The proxy is used in the following manner:

 
 
    // Create the security proxy here.
    CompanyDataSecProxy companyDataSecProxy = new CompanyDataSecProxy();

    // Read some data.
    Console.WriteLine("CEOPhoneNumExt: " + companyDataSecProxy.CEOPhoneNumExt);

    // Write some data.
    companyDataSecProxy.AdminPwd = "asdf";
    companyDataSecProxy.AdminUserName = "asdf";

    // Save and refresh this data.
    companyDataSecProxy.SaveNewData();
    companyDataSecProxy.RefreshData();

Note that as long as the CompanyData object was accessible, you could have also written this to access the object directly:

 
 
    // Instantiate the CompanyData object directly without a proxy.
    CompanyData companyData = new CompanyData();

    // Read some data.
    Console.WriteLine("CEOPhoneNumExt: " + companyData.CEOPhoneNumExt);

    // Write some data.
    companyData.AdminPwd = "asdf";
    companyData.AdminUserName = "asdf";

    // Save and refresh this data.
    companyData.SaveNewData();
    companyData.RefreshData();

If these two blocks of code are run, the same fundamental actions occur: data is read, data is written, and data is updated/refreshed. This shows you that your proxy objects are set up correctly and function as they should.

Discussion

The proxy design pattern is useful for several tasks. The most notable, in COM, COM+, and .NET remoting, is marshaling data across boundaries such as AppDomains or even across a network. To the client, a proxy looks and acts exactly the same as its underlying object; fundamentally, the proxy object is just a wrapper around the object.

 

A proxy can test the security and/or identity permissions of the caller before the underlying object is created or accessed. Proxy objects can also be chained together to form several layers around an underlying object. Each proxy can be added or removed depending on the circumstances.

 

For the proxy object to look and act the same as its underlying object, both should implement the same interface. The implementation in this recipe uses an ICompanyData interface on both the proxies (CompanyDataSecProxy and CompanyDataTraceProxy) and the underlying object (CompanyData). If more proxies are created, they, too, need to implement this interface.

 

The CompanyData class represents an expensive object to create. In addition, this class contains a mixture of sensitive and nonsensitive data that requires permission checks to be made before the data is accessed. For this recipe, the CompanyData class simply contains a group of properties to access company data and two methods for updating and refreshing this data. You can replace this class with one of your own and create a corresponding interface that both the class and its proxies implement.

 

The CompanyDataSecProxy object is the object that a client must interact with. This object is responsible for determining whether the client has the correct privileges to access the method or property that it is calling. The get accessor of the AdminUserName property shows the structure of the code throughout most of this class:

 
 
    public string AdminUserName
    {
        get
        {
            string userName = "";
            try
            {
                admPerm.Demand();
                Startup();
                userName = coData.AdminUserName;
            }
            catch (SecurityException e)
            {
                Console.WriteLine("AdminUserName_get failed! {0}", e.ToString());
            }
            return (userName);
        }
        set
        {
            try
            {
                admPerm.Demand();
                Startup();
                coData.AdminUserName = value;
            }
            catch (SecurityException e)
            {
                Console.WriteLine("AdminUserName_set failed! {0}", e.ToString());
            }
        }
    }

Initially, a single permission (admPerm) is demanded. If this demand fails, a SecurityException is thrown and handled by the catch clause. (Other exceptions are handed back to the caller.) If the Demand succeeds, the Startup method is called. It is in charge of instantiating either the next proxy object in the chain (CompanyDataTraceProxy) or the underlying CompanyData object. The choice depends on whether the DOTRACE preprocessor symbol has been defined. You could use a different switch, such as a registry key, to turn tracing on or off if you wish.
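
Enabling the tracing proxy is then a one-line change at the very top of the source file that contains CompanyDataSecProxy (a minimal sketch; you could equally define the symbol in the project's build settings):

    // Must appear before any using directives in this file.
    #define DOTRACE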

 

This proxy class uses the private field coData to hold a reference to an ICompanyData type, which can be either a CompanyDataTraceProxy or the CompanyData object. This reference allows you to chain several proxies together.


 

The CompanyDataTraceProxy simply logs any access to the CompanyData object’s information to a text file. Since this proxy will not attempt to prevent a client from accessing the CompanyData object, the CompanyData object is created and explicitly called in each property and method of this object.

Windows 8.1 Updated Resources and Tools

With Windows 8.1 also come lots of updates to the tools and templates that you can use to create Windows Store apps. These updates can help cut down the work in your development and test cycles.

 

Get the updated tools described below at our Windows 8.1 page.


New or updated in Windows 8.1

General updates

  • Support for updating your Windows Store apps to Windows 8.1 – Migrate your Windows 8 app to Windows 8.1; this may first require updating your app code for Windows 8.1.
  • Windows Store app templates – We’ve updated all templates for Windows 8.1, and we’ve added a new Hub template too.
  • Azure Mobile Services and push notification wizards – The Services Manager makes it easy to connect your app to Azure Mobile Services or Microsoft Advertising, and the push notification wizard makes it easy to set up an Azure Mobile Service to send push notifications to your app.
  • App bundle support – Now you can combine resource packages (like multiple scales, languages, or Microsoft Direct3D feature levels) into a single .appxbundle file for submission to the Windows Store. For your customers, this means that your app is only deployed with the resources they need for their device and locale.
  • App validation on a remote device – The Create App Package Wizard in Microsoft Visual Studio 2013 now makes it easy to validate your app using Windows App Certification Kit 3.0 on a remote device (such as a Windows RT PC).
  • Create coded UI tests using XAML – Write automated functional tests for testing Windows Store apps using XAML and the cross-hair tool. Note: touch interactions are now supported for controls.
  • New Visual Studio theme and visual design – We’ve added a third theme, Blue, to the existing Light and Dark themes. The Blue theme offers a mid-range color scheme reminiscent of Microsoft Visual Studio 2010. Also, based on user feedback, we’ve enhanced all themes with additional color and clarity in icons, revised icons, more contrast across the development environment, and clearer segmentation of regions within the environment.

 

Diagnostics

  • Mixed-language debugging – For Windows Store apps that use JavaScript and C++, the debugger now lets you set breakpoints in either language and provides a call stack with both JavaScript and C++ functions.
  • Managed app debugging – The debugger now displays return values. You can use Edit and Continue in 64-bit managed apps. Exceptions that come from Windows Store apps preserve information about the error, even across language boundaries.
  • Asynchronous debugging improvements – The call-stack window now includes the creation stack if you stop in an asynchronous method.
  • Native “Just My Code” – For native code, the call stack simplifies debugging by displaying only the code that you’ve created.
  • DOM Explorer – The Cascading Style Sheets (CSS) editor supports improved editing, Microsoft IntelliSense, inline style support, shorthand, specificity, and notification of invalid properties. The Computed and Styles panes have been enhanced, and the DOM Explorer supports search, editing as HTML, IntelliSense, and undo stacks.
  • JavaScript Console – The console now supports object preview and visualization, new APIs, multiline function support, IntelliSense, evaluation of elements as objects or HTML, and legacy document modes.
  • JavaScript Memory Profiler – Dominators view shows memory allocation retained by each object, and the profiler notifies you of potential memory leaks caused by detached or disconnected DOM nodes.
  • JavaScript UI Responsiveness – The Details pane includes hyperlinks to event source locations, plus a chart showing the percentage of time that each child event contributed to the selected event’s overall duration. You can now expand instances of Layout and Style calculation events to display the HTML elements that were affected by the operation.
  • XAML UI Responsiveness – For C#/VB/C++ XAML-based Windows Store apps, the XAML UI Responsiveness tool allows you to diagnose performance issues related to app startup and page navigation, panning and scrolling, and input responsiveness in general.

 

JavaScript editor

  • Completion of enclosing character pairs – The editor automatically inserts the closing character when you type an opening brace ({), parenthesis ((), bracket ([), single quotation mark ('), or double quotation mark ("). It also smart-formats and indents your source as it auto-completes.
  • Editor navigation bar – This new UI feature helps you identify and move through the important elements in your source code. New for JavaScript developers, the navigation bar highlights important functions and objects in your source.
  • Deprecation notes in IntelliSense – If a Windows API element has been deprecated in Windows 8.1, IntelliSense tooltips identify it as “[deprecated]”.
  • Go To Definition for namespaces – You can right-click a namespace you use in your code (such as WinJS.UI) and then click Go To Definition to be taken to the line where that namespace is defined.
  • Identifier highlighting – Select an identifier (for example, a variable, parameter, or function name) in your source, and any uses of that identifier will be highlighted in your source code.

 

C++ development

  • Windows Store app development for Windows 8.1:

    • Boxed types in value structs – You can now define value types by using fields that can be null, for example IBox<int>^ as opposed to int. This means that the fields can either have a value or be equal to nullptr.
    • Richer exception information – C++/CX supports the new Windows error model that enables the capture and propagation of rich exception information across the Application Binary Interface (ABI); this includes call stacks and custom message strings.
    • Object::ToString is now virtual – You can now override ToString() in user-defined Windows Runtime ref types.

  • C++11 standard compliance – Compiler support for these ISO C++11 language features:

    • Default template arguments for function templates
    • Delegating constructors
    • Explicit conversion operators
    • Initializer lists and uniform initialization
    • Raw string literals
    • Variadic templates

  • Updated Standard Template Library (STL) – The STL has been updated to use the latest C++11 features.
  • Improvements to C99 libraries:

    • C99 functionality added to <math.h>
    • Complex math functions in the new header <complex.h>
    • Integer type support in the new header <inttypes.h>, including format string support for “hh”
    • Support for variable-argument scanf forms in <stdio.h>; C99 variants of vscanf, strtoll, vwscanf/wcstoll, and isblank/iswblank are implemented
    • New conversion support for long long and long double in <stdlib.h>

  • C++ REST SDK – Modern C++ implementation of Representational State Transfer (REST) services. For more info, see C++ REST SDK (codename “Casablanca”).
  • C++ Azure Mobile Services SDK – The shortest path to a connected C++ app with an Azure backend.
  • C++ AMP – Side-by-side CPU/GPU debugging (for the WARP accelerator), enhanced texture support (mipmaps and new sampling modes), and improved diagnostics and exceptions.
  • IDE productivity features:

    • Improved code formatting.
    • Brace completion.
    • Auto-generation of event handler code in C++/CX and C++/CLI.
    • Context-based member list filtering.
    • Parameter help scrolling.
    • Toggle header/code file.
    • Resizable C++ project-properties window.
    • Faster builds. Numerous optimizations and multi-core utilization make builds faster, especially for large projects. Incremental builds for C++ apps that have references to C++ WinMD are also much faster.

  • App performance:

    • Pass vector type arguments by using the __vectorcall calling convention to use vector registers.
    • Reduction or elimination of CPU/GPU data transfer in C++ AMP.
    • Auto-vectorization improvements.
    • C++/CX optimizations in allocations and casting.
    • Performance tuning of C++ AMP runtime libraries.
    • New: PGO for Windows Store app development.

  • Build-time performance enhancements – Compiler throughput improvements for highly parallel builds.

 

 

HTML design tools

  • CSS animation – The timeline editor helps you create CSS animations.
  • JavaScript behaviors – Add JavaScript event listeners to any element without writing code. Choose from a list of supplied event handlers or create your own.
  • Custom font embedding – Create a branded experience by using custom fonts for HTML text.
  • Data binding – Set the data binding for any template.
  • Rules and guides – Create custom guides.
  • Border radius – Easy-to-use handles on each element help you create rounded corners and ellipses.
  • Searching and setting CSS properties – The search box lets you set CSS property values directly and quickly.
  • Finding elements with CSS syntax – The live DOM search now supports CSS syntax. For example, you can automatically select all elements with class “myclass” by searching for “.myclass”.

 

XAML design tools

  • XAML editor improvements – The XAML editor in Visual Studio 2013 includes IntelliSense for data bindings and resources, smart commenting, and Go To Definition.
  • Rulers and guides – Create custom guides.
  • Better style editing support – Edit styles and templates in the context of the document where they’re used, even if they’re actually defined in another, shared location.
  • Sample data support – The data panel enhances sample data support in XAML projects for the Windows Store, including the ability to create sample data from JSON content. For an example of how to set this up, see the updated Windows Store app project templates for XAML.
  • View state authoring – The device panel in Blend for Microsoft Visual Studio 2013 and Visual Studio 2013 supports updated view state properties and requirements to support variable minimum widths.

 

Windows App Certification Kit 3.0

Use the latest version of the Windows App Certification Kit to test the readiness of Windows Store apps for Windows 8 and Windows 8.1 before on-boarding, and for the Windows 7, Windows 8, and Windows 8.1 Windows Desktop App Certification.

We’ve also updated the Windows App Certification Kit to give you a smooth experience. For example, you can now run tests in parallel to save time, and you have more flexibility in selecting the tests you run.

New validation tests

As with previous releases of Windows, we’ve revised the kit content to include more validation, helping to make sure that Windows apps running on the latest update are well behaved. Here’s a high-level breakdown of the new tests.

  • Direct3D additional check – Validates apps for compliance with Direct3D requirements, and ensures that apps using C++ and XAML call the new Trim method in their suspend callback.
  • Supported directory structure – Ensures that apps don’t create a structure on disk that results in paths longer than MAX_PATH (260 characters).
  • File extensions and protocols – Limits the number of file extensions and protocols that an app can register.
  • Platform appropriate files – Checks for packages that contain cross-architecture binaries.
  • Banned file check – Checks apps for use of outdated or prerelease components known to have security vulnerabilities.
  • JavaScript background tasks – Verifies that apps that use JavaScript have the proper close statement in the background task, so the app doesn’t consume battery power unnecessarily.
  • Framework dependency rules – Ensures that apps take the right framework dependencies for Windows 8 and Windows 8.1.

 

Test reports

We’ve made a number of changes to the test report generated by the Windows App Certification Kit. These reports include new information, are easier to read, and provide more links to resources that can help you resolve issues. Significant additions and updates include:

  • Expanded error-message details.
  • Actionable info for supported and deprecated APIs.
  • Details about the configuration of the current test device.
  • A language toggle (if the report is localized).

For more information on how to use this kit, see Using the Windows App Certification Kit.

How To : Use the Microsoft Monitoring Agent to Monitor apps in deployment

You can locally monitor IIS-hosted ASP.NET web apps and SharePoint 2010 or 2013 applications for errors, performance issues, or other problems by using Microsoft Monitoring Agent. You can save diagnostic events from the agent to an IntelliTrace log (.iTrace) file. You can then open the log in Visual Studio Ultimate 2013 to debug problems with all the Visual Studio diagnostic tools.

If you use System Center 2012, use Microsoft Monitoring Agent with Operations Manager to get alerts about problems and create Team Foundation Server work items with links to the saved IntelliTrace logs. You can then assign these work items to others for further debugging.

See Integrating Operations Manager with Development Processes and Monitoring with Microsoft Monitoring Agent.

Before you start, check that you have the matching source and symbols for the built and deployed code. This helps you go directly to the application code when you start debugging and browsing diagnostic events in the IntelliTrace log. Set up your builds so that Visual Studio can automatically find and open the matching source for your deployed code.

  1. Set up Microsoft Monitoring Agent.
  2. Start monitoring your app.
  3. Save the recorded events.
Set up the standalone agent on your web server to perform local monitoring without changing your application. If you use System Center 2012, see Installing Microsoft Monitoring Agent.

Set up the standalone agent

  1. Make sure that:
  2. Download the free Microsoft Monitoring Agent, either the 32-bit version MMASetup-i386.exe or 64-bit version MMASetup-AMD64.exe, from the Microsoft Download Center to your web server.
  3. Run the downloaded executable to start the installation wizard.
  4. Create a secure directory on your web server to store the IntelliTrace logs, for example, C:\IntelliTraceLogs.

    Make sure that you create this directory before you start monitoring. To avoid slowing down your app, choose a location on a local high-speed disk that’s not very active.

     

    Security Note
    IntelliTrace logs might contain personal and sensitive data. Restrict this directory to only those identities that must work with the files. Check your company’s privacy policies.
  5. To run detailed, function-level monitoring or to monitor SharePoint applications, give the application pool that hosts your web app or SharePoint application read and write permissions to the IntelliTrace log directory.
Start monitoring your app

  1. On your web server, open a Windows PowerShell or Windows PowerShell ISE command prompt window as an administrator.

     

    Open Windows PowerShell as administrator 

  2. Run the Start-WebApplicationMonitoring command to start monitoring your app. This will restart all the web apps on your web server.

     

    Here’s the short syntax:

     

    Start-WebApplicationMonitoring "<appName>" <monitoringMode> "<outputPath>" <UInt32> "<collectionPlanPathAndFileName>"

     

    Here’s an example that uses just the web app name and lightweight Monitor mode:

     

    PS C:\>Start-WebApplicationMonitoring "Fabrikam\FabrikamFiber.Web" Monitor "C:\IntelliTraceLogs"

     

    Here’s an example that uses the IIS path and lightweight Monitor mode:

     

    PS C:\>Start-WebApplicationMonitoring "IIS:\sites\Fabrikam\FabrikamFiber.Web" Monitor "C:\IntelliTraceLogs"

     

    After you start monitoring, you might see the Microsoft Monitoring Agent pause while your apps restart.

     

    Start monitoring with MMA confirmation 

    "<appName>" Specify the path to the web site and web app name in IIS. You can also include the IIS path, if you prefer.

    "<IISWebsiteName>\<IISWebAppName>"

    -or-

    "IIS:\sites\<IISWebsiteName>\<IISWebAppName>"

    You can find this path in IIS Manager. For example:

    Path to IIS web site and web app

    You can also use the Get-Website and Get-WebApplication commands.

    <monitoringMode> Specify the monitoring mode:

     

    • Monitor: Record minimal details about exception events and performance events. This mode uses the default collection plan.
    • Trace: Record function-level details or monitor SharePoint 2010 and SharePoint 2013 applications by using the specified collection plan. This mode might make your app run more slowly.

       

       

      This example records events for a SharePoint app hosted on a SharePoint site:

       

      Start-WebApplicationMonitoring "FabrikamSharePointSite\FabrikamSharePointApp" Trace "C:\Program Files\Microsoft Monitoring Agent\Agent\IntelliTraceCollector\collection_plan.ASP.NET.default.xml" "C:\IntelliTraceLogs"

       

    • Custom: Record custom details by using a specified custom collection plan. You’ll have to restart monitoring if you edit the collection plan after monitoring has already started.
    "<outputPath>" Specify the full directory path to store the IntelliTrace logs. Make sure that you create this directory before you start monitoring.
    <UInt32> Specify the maximum size for the IntelliTrace log. The default maximum size of the IntelliTrace log is 250 MB.

    When the log reaches this limit, the agent overwrites the earliest entries to make space for more entries. To change this limit, use the -MaximumFileSizeInMegabytes option or edit the MaximumLogFileSize attribute in the collection plan.

    "<collectionPlanPathAndFileName>" Specify the full path or relative path and the file name of the collection plan. This plan is an .xml file that configures settings for the agent.

    These plans are included with the agent and work with web apps and SharePoint applications:

    • collection_plan.ASP.NET.default.xml

      Collects only events, such as exceptions, performance events, database calls, and Web server requests.

    • collection_plan.ASP.NET.trace.xml

      Collects function-level calls plus all the data in default collection plan. This plan is good for detailed analysis but might slow down your app.

     

    You can find localized versions of these plans in the agent’s subfolders. You can also customize these plans or create your own plans to avoid slowing down your app. Put any custom plans in the same secure location as the agent.

     


     

    For more information about the full syntax and other examples, run the get-help Start-WebApplicationMonitoring -detailed command or the get-help Start-WebApplicationMonitoring -examples command.

  3. To check the status of all monitored web apps, run the Get-WebApplicationMonitoringStatus command.
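
Putting the pieces together, here is one hypothetical end-to-end invocation using the full positional syntax shown earlier (app name, mode, output path, maximum log size in MB, and collection plan; the app name is illustrative):

    PS C:\>Start-WebApplicationMonitoring "Fabrikam\FabrikamFiber.Web" Monitor "C:\IntelliTraceLogs" 500 "collection_plan.ASP.NET.default.xml"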

A look at the 3 new options in the Task Parallel Library in .Net 4.5

Astute users of the Task Parallel Library might have noticed three new options available across TaskCreationOptions and TaskContinuationOptions in .NET 4.5: DenyChildAttach, HideScheduler, and (on TaskContinuationOptions) LazyCancellation.  I wanted to take a few minutes to share more about what these are and why we added them.

DenyChildAttach

As a reminder, when a Task is created with TaskCreationOptions.AttachedToParent or TaskContinuationOptions.AttachedToParent, the creation code looks to see what task is currently running on the current thread (this Task’s Id is available from the static Task.CurrentId property, which will return null if there isn’t one).  If it finds there is one, the Task being created registers with that parent Task as a child, leading to two additional behaviors: the parent Task won’t transition to a completed state until all of its children have completed as well, and any exceptions from faulted children will propagate up to the parent Task (unless the parent Task observes those exceptions before it completes).  This parent/child relationship and hierarchy is visible in Visual Studio’s Parallel Tasks window.

If you’re responsible for all of the code in your solution, you have control over whether the tasks you create try to attach to a parent task.  But what if your code creates some Tasks, and from those Tasks calls out to code you don’t own?  The code you call might use AttachedToParent and attach children to your tasks.  Did you expect that?  Is your code reliable against that?  Have you done all of the necessary testing to ensure it?

For this situation, we introduced DenyChildAttach.  When a task uses AttachedToParent but finds there is no current Task, it just doesn’t attach to anything, behaving as if AttachedToParent wasn’t supplied.  If there is a parent task, but that parent task was created with DenyChildAttach, the same thing happens: the task using AttachedToParent won’t see a parent and thus won’t attach to anything, even though technically there was a task to which it could have been attached.  It’s a sleight of hand or Jedi mind trick: “this is not the parent you’re looking for.”

With 20/20 hindsight, if we could do .NET 4 over again, I personally would have chosen to make both sides of the equation opt-in.  Today, the child task gets to opt in to being a child by specifying AttachedToParent, but the parent must opt out if it doesn’t want to be one.  In retrospect, I think it would have been better if both sides had the choice to opt in, with the parent specifying a mythical flag like AllowsChildren to opt in rather than DenyChildAttach to opt out.  Nevertheless, this is just a question of defaults.  You’ll notice that the new Task.Run method internally specifies DenyChildAttach when creating its Tasks, in effect making this the default for the API we expect to become the most common way of launching tasks.  If you want explicit control over the TaskCreationOptions used, you can instead use the existing Task.Factory.StartNew method, which becomes the more advanced mechanism and allows you to control the options, object state, scheduler, and so on.
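
To see the flag in action, here's a minimal console sketch of my own (not from the original post):

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class DenyChildAttachDemo
    {
        static void Main()
        {
            // The inner task asks to attach to its parent, but the parent was
            // created with DenyChildAttach, so the request is ignored and the
            // parent can complete before the child does.
            Task parent = Task.Factory.StartNew(() =>
            {
                Task.Factory.StartNew(() =>
                {
                    Thread.Sleep(1000);
                    Console.WriteLine("child done");
                }, TaskCreationOptions.AttachedToParent);
            }, CancellationToken.None, TaskCreationOptions.DenyChildAttach, TaskScheduler.Default);

            parent.Wait();
            Console.WriteLine("parent done"); // typically prints before "child done"

            Thread.Sleep(1500); // keep the process alive long enough to see the child finish
        }
    }

Remove DenyChildAttach from the parent's creation options and the order flips: the parent won't complete until its attached child has.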

HideScheduler

With code written in .NET 4, we saw this pattern to be relatively common:

    private void button_Click(…)
    {
        … // #1 on the UI thread
        Task.Factory.StartNew(() =>
        {
            … // #2 long-running work, so offloaded to non-UI thread
        }).ContinueWith(t =>
        {
            … // #3 back on the UI thread
        }, TaskScheduler.FromCurrentSynchronizationContext());
    }

In other words, Tasks and continuations became a way to offload some work from the UI thread, and then run some follow-up work back on the UI thread.  This was accomplished by using the TaskScheduler.FromCurrentSynchronizationContext method, which looks up SynchronizationContext.Current and constructs a new TaskScheduler instance around it: when you schedule a Task to this TaskScheduler, the scheduler will then pass the task along to the SynchronizationContext to be invoked.

That’s all well and good, but it’s important to keep in mind the behavior of the Task-related APIs introduced in .NET 4 when no TaskScheduler is explicitly provided.  The TaskFactory class has a bunch of overloaded methods (e.g. StartNew), and when you construct a TaskFactory class, you have the option to provide a TaskScheduler.  Then, when you call one of its methods (like StartNew) that doesn’t take a TaskScheduler, the scheduler that was provided to the TaskFactory’s constructor is used.  If no scheduler was provided to the TaskFactory, then if you call an overload that doesn’t take a TaskScheduler, the TaskFactory ends up using TaskScheduler.Current at the time the call is made (TaskScheduler.Current returns the scheduler associated with whatever Task is currently running on that thread, or if there is no such task, it returns TaskScheduler.Default, which represents the ThreadPool).  Now, the TaskFactory returned from Task.Factory is constructed without a specific scheduler, so for example when you write Task.Factory.StartNew(Action), you’re telling TPL to create a Task for that Action and schedule it to TaskScheduler.Current.

In many situations, that’s the right behavior.  For example, let’s say you’re implementing a recursive divide-and-conquer problem, where you have a task that’s supposed to process some chunk of work, and it in turn subdivides its work and schedules tasks to process those chunks.  If that task was running on a scheduler representing a particular pool of threads, or if it was running on a scheduler that had a concurrency limit, and so on, you’d typically want those tasks it then created to also run on the same scheduler.

However, it turns out that in other situations, it’s not the right behavior.  And one such situation is like that I showed previously.  Imagine now that your code looked like this:

    private void button_Click(…)
    {
        … // #1 on the UI thread
        Task.Factory.StartNew(() =>
        {
            … // #2 long-running work, so offloaded to non-UI thread
        }).ContinueWith(t =>
        {
            … // #3 back on the UI thread
            Task.Factory.StartNew(() =>
            {
                … // #4 compute-intensive work we want offloaded to non-UI thread (bug!)
            });
        }, TaskScheduler.FromCurrentSynchronizationContext());
    }

This seems logical: we do some work on the UI thread, then we offload some work to the background, when that work completes we hop back to the UI thread, and then we kick off another task to run in the background.  Unfortunately, this is buggy.  Because the continuation was scheduled to TaskScheduler.FromCurrentSynchronizationContext, that scheduler is TaskScheduler.Current during the execution of the continuation.  And in that continuation we’re calling Task.Factory.StartNew using an overload that doesn’t accept a TaskScheduler.  Which means that this compute-intensive work is actually going to be scheduled back to the UI thread! Ugh.

There are of course already solutions to this.  For example, if you own all of this code, you could explicitly specify TaskScheduler.Default (the ThreadPool scheduler) when calling StartNew, or you could change the structure of the code so that the StartNew became a continuation off of the continuation, e.g.

    private void button_Click(…)
    {
        … // #1 on the UI thread
        Task.Factory.StartNew(() =>
        {
            … // #2 long-running work, so offloaded to non-UI thread
        }).ContinueWith(t =>
        {
            … // #3 back on the UI thread
        }, TaskScheduler.FromCurrentSynchronizationContext()).ContinueWith(t =>
        {
            … // #4 compute-intensive work we want offloaded to non-UI thread
        });
    }

But neither of those solutions is relevant if the code inside the continuation is code you don’t own, e.g. if you’re calling out to some 3rd-party code which might unsuspectingly use Task.Factory.StartNew without specifying a scheduler and inadvertently end up running its code on the UI thread.  This is why in production library code I write, I always explicitly specify the scheduler I want to use.

For .NET 4.5, we introduced the TaskCreationOptions.HideScheduler and TaskContinuationOptions.HideScheduler values.  When supplied to a Task, this makes it so that in the body of that Task, TaskScheduler.Current returns TaskScheduler.Default, even if the Task is running on a different scheduler: in other words, it hides it, making it look like there isn’t a Task running, and thus TaskScheduler.Default is returned.  This option helps to make your code more reliable if you find yourself calling out to code you don’t own. Again with our initial example, I can now specify HideScheduler, and my bug will be fixed:

    private void button_Click(…)
    {
        … // #1 on the UI thread
        Task.Factory.StartNew(() =>
        {
            … // #2 long-running work, so offloaded to non-UI thread
        }).ContinueWith(t =>
        {
            … // #3 back on the UI thread
            Task.Factory.StartNew(() =>
            {
                … // #4 compute-intensive work we want offloaded to non-UI thread (fixed!)
            });
        }, CancellationToken.None,
           TaskContinuationOptions.HideScheduler,
           TaskScheduler.FromCurrentSynchronizationContext());
    }

One additional thing to note is around the new Task.Run method, which is really just a simple wrapper around Task.Factory.StartNew.  We expect Task.Run to become the most common method for launching new tasks, with developers falling back to using Task.Factory.StartNew directly only for more advanced situations where they need more fine-grained control, e.g. over which scheduler to be targeted.  I already noted that Task.Run specifies DenyChildAttach, so that no tasks created within a Task.Run task can attach to it.  Additionally, Task.Run always specifies TaskScheduler.Default, so that Task.Run always uses the ThreadPool and ignores TaskScheduler.Current.  So, even without HideScheduler, if I’d used Task.Run(Action) instead of Task.Factory.StartNew(Action) in my initially buggy code, it would have been fine.
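
To make the hiding concrete, here is a small fragment of my own (it assumes it runs on a thread with a SynchronizationContext, such as a UI thread):

    // Assumes SynchronizationContext.Current is non-null (e.g. a UI thread).
    TaskScheduler ui = TaskScheduler.FromCurrentSynchronizationContext();

    Task.Factory.StartNew(() =>
    {
        // HideScheduler makes TaskScheduler.Current report the default
        // (ThreadPool) scheduler here, even though this task runs on 'ui'.
        Console.WriteLine(TaskScheduler.Current == TaskScheduler.Default); // True
    }, CancellationToken.None, TaskCreationOptions.HideScheduler, ui);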

LazyCancellation

Consider the following code:

    Task a = Task.Run(…);
    Task b = a.ContinueWith(…, cancellationToken);

The ContinueWith method will create Task ‘b’ such that ‘b’ will be scheduled when ‘a’ completes.  However, because a CancellationToken was provided to ContinueWith, if cancellation is requested before Task ‘a’ completes, then Task ‘b’ will just immediately transition to the Canceled state.  So far so good… there’s no point in doing any work for ‘b’ if we know the user wants to cancel it.  Might as well be aggressive about it.

But now consider a slightly more complicated variation:

    Task a = Task.Run(…);
    Task b = a.ContinueWith(…, cancellationToken);
    Task c = b.ContinueWith(…);

Here there’s a second continuation, off of Task ‘b’, resulting in Task ‘c’.  When Task ‘b’ completes, regardless of what state ‘b’ completes in (RanToCompletion, Faulted, or Canceled), Task ‘c’ will be scheduled.  Now consider the following situation: Task ‘a’ starts running.  Then a cancellation request comes in before ‘a’ finishes, so ‘b’ transitions to Canceled as we’d expect.  Now that ‘b’ is completed, Task ‘c’ gets scheduled, again as we’d expect.  However, this now means that Task ‘a’ and Task ‘c’ could be running concurrently.  In many situations, that’s fine.  But if you’d constructed your chain of continuations under the notion that no two tasks in the chain could ever run concurrently, you’d be sorely disappointed.

Enter LazyCancellation.  By specifying this flag on a continuation that has a CancellationToken, you’re telling TPL to ignore that CancellationToken until the antecedent has already completed.  In other words, the cancellation check is lazy: rather than the continuation doing the work to register with the token to be notified of a cancellation request, it instead doesn’t do anything, and then only when the antecedent completes and the continuation is about to run does it poll the token and potentially transition to Canceled. In our previous example, if I did want to avoid ‘a’ and ‘c’ potentially running concurrently, we could have instead written:

    Task a = Task.Run(…);
    Task b = a.ContinueWith(…, cancellationToken,
        TaskContinuationOptions.LazyCancellation, TaskScheduler.Default);
    Task c = b.ContinueWith(…);

Here, even if cancellation is requested early, ‘b’ won’t transition to Canceled until ‘a’ completes, such that ‘c’ won’t be able to start until ‘a’ has completed, and all would be right in the world again.