Category Archives: Microsoft Best Patterns and Practices

How To : Create a Re-Usable News Page Layout using Content Type in SharePoint 2013

Introduction

In a recent project I was asked to consult on, the team needed to create sub sites for news and events.

Developing for re-usability in SharePoint is something I find lacking in many development teams.

Below I outline the solution I worked out for the project, which is now also a template the team can use in any similar project.

I will explain not only how to build the page layout step by step, but also how to make it the default page layout of a publishing sub site.

After that, we will use a content query in the root site to preview the news articles.

Finally, I will use variations to create a similar publishing sub site in other languages.

  1. Step-by-step creation of a news page layout using a content type in SharePoint 2013.
  2. Creation of a publishing sub site for news, use of variations to create the same site in other languages, and configuration of the page layout from step 1 as the default page layout of the sub site.

Firstly

  1. Open Visual Studio 2013 and create a new project of type SharePoint Solutions > "SharePoint 2013 Empty Project".
    Create new SharePoint 2013 empty project
  2. We will deploy our solution as a farm solution to the local farm on our development machine.
    Note: Make sure that the target site is a publishing site before you proceed.

    Deploy the SharePoint site as a farm solution
  3. Our solution will look like the picture below; we will add three folders: "SiteColumns", "ContentTypes", and "PageLayouts".
    SharePoint solution items
  4. Start by adding a new item to “SiteColumns” folder.
    Adding new item to SharePoint solution
  5. After adding a new site column item and renaming it, add the following columns, which we need for the news layout: NewsTitle, NewsBody, NewsBrief, NewsDate, and NewsImage.
    Adding new item of type site column to the solution

    Then add the fields below; note that I use resources in the DisplayName and Group attributes.

     <Field
     ID="{9fd593c1-75d6-4c23-8ce1-4e5de0d97545}"
     Name="NewsTitle"
     DisplayName="$Resources:SPWorld_News,NewsTitle;"
     Type="Text"
     Required="TRUE"
     Group="$Resources:SPWorld_News,NewsGroup;">
     </Field>
     <Field
     ID="{fcd9f32e-e2e0-4d00-8793-cfd2abf8ef4d}"
     Name="NewsBrief"
     DisplayName="$Resources:SPWorld_News,NewsBrief;"
     Type="Note"
     Required="FALSE"
     Group="$Resources:SPWorld_News,NewsGroup;">
     </Field>
     <Field
     ID="{FF268335-35E7-4306-B60F-E3666E5DDC07}"
     Name="NewsBody"
     DisplayName="$Resources:SPWorld_News,NewsBody;"
     Type="HTML"
     Required="TRUE"
     RichText="TRUE"
     RichTextMode="FullHtml"
     Group="$Resources:SPWorld_News,NewsGroup;">
     </Field>
     <Field
     ID="{FCA0BBA0-870C-4D42-A34A-41A69749F963}"
     Name="NewsDate"
     DisplayName="$Resources:SPWorld_News,NewsDate;"
     Type="DateTime"
     Required="TRUE"
     Group="$Resources:SPWorld_News,NewsGroup;">
     </Field>
     <Field
     ID="{8218A8D9-912C-47E7-AAD2-12AA10B42BE3}"
     Name="NewsImage"
     DisplayName="$Resources:SPWorld_News,NewsImage;"
     Required="FALSE"
     Type="Image"
     RichText="TRUE"
     RichTextMode="ThemeHtml"
     Group="$Resources:SPWorld_News,NewsGroup;">
     </Field>

    After That

  6. Create the content type: add a new Content Type item to the ContentTypes folder.
    Adding new item of type Content Type to SharePoint solution
  7. We must make sure to select "Page" as the base content type.
    Specifying the base type of the content type
  8. Open the content type and add our new columns to it.
    Adding columns to the content type
  9. Open the elements file of the content type and make sure it looks like the code below. Note: we use resources in the Name, Description, and Group of the content type.
    <!-- Parent ContentType:
    Page (0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF39) -->
     <ContentType
     ID="0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF39007A5224C9C2804A46B028C4F78283A2CB"
     Name="$Resources:SPWorld_News,NewsContentType;"
     Group="$Resources:SPWorld_News,NewsGroup;"
     Description="$Resources:SPWorld_News,NewsContentTypeDesc;"
     Inherits="TRUE" Version="0">
     <FieldRefs>
     <FieldRef ID="{9fd593c1-75d6-4c23-8ce1-4e5de0d97545}"
     DisplayName="$Resources:SPWorld_News,NewsTitle;" Required="TRUE" Name="NewsTitle" />
     <FieldRef ID="{fcd9f32e-e2e0-4d00-8793-cfd2abf8ef4d}"
     DisplayName="$Resources:SPWorld_News,NewsBrief;" Required="FALSE" Name="NewsBrief" />
     <FieldRef ID="{FF268335-35E7-4306-B60F-E3666E5DDC07}"
     DisplayName="$Resources:SPWorld_News,NewsBody;" Required="TRUE" Name="NewsBody" />
     <FieldRef ID="{FCA0BBA0-870C-4D42-A34A-41A69749F963}"
     DisplayName="$Resources:SPWorld_News,NewsDate;" Required="TRUE" Name="NewsDate" />
     <FieldRef ID="{8218A8D9-912C-47E7-AAD2-12AA10B42BE3}"
     DisplayName="$Resources:SPWorld_News,NewsImage;" Required="FALSE" Name="NewsImage" />
     </FieldRefs>
     </ContentType>
  10. Add a new Module to the PageLayouts folder. Inside it, we will find a sample.txt file; rename it to "NewsPageLayout.aspx".
    Adding new module to SharePoint solution.
  11. Add the code below to this “NewsPageLayout.aspx”.
     <%@ Page language="C#" Inherits="Microsoft.SharePoint.Publishing.PublishingLayoutPage,
    Microsoft.SharePoint.Publishing,Version=15.0.0.0,Culture=neutral,PublicKeyToken=71e9bce111e9429c" %>
     <%@ Register Tagprefix="SharePointWebControls" Namespace="Microsoft.SharePoint.WebControls"
     Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
     <%@ Register Tagprefix="WebPartPages" Namespace="Microsoft.SharePoint.WebPartPages"
     Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
     <%@ Register Tagprefix="PublishingWebControls" Namespace="Microsoft.SharePoint.Publishing.WebControls"
     Assembly="Microsoft.SharePoint.Publishing, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
     <%@ Register Tagprefix="PublishingNavigation" Namespace="Microsoft.SharePoint.Publishing.Navigation"
     Assembly="Microsoft.SharePoint.Publishing, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
    
     <asp:Content ContentPlaceholderID="PlaceHolderPageTitle" runat="server">
     <SharePointWebControls:FieldValue id="FieldValue1" FieldName="Title" runat="server"/>
     </asp:Content>
     <asp:Content ContentPlaceholderID="PlaceHolderMain" runat="server">
    
     <H1><SharePointWebControls:TextField ID="NewsTitle"
     FieldName="9fd593c1-75d6-4c23-8ce1-4e5de0d97545" runat="server">
     </SharePointWebControls:TextField></H1>
     <p><PublishingWebControls:RichHtmlField ID="NewsBody"
     FieldName="FF268335-35E7-4306-B60F-E3666E5DDC07" runat="server">
     </PublishingWebControls:RichHtmlField></p>
     <p><SharePointWebControls:NoteField ID="NewsBrief"
     FieldName="fcd9f32e-e2e0-4d00-8793-cfd2abf8ef4d" runat="server">
     </SharePointWebControls:NoteField></p>
     <p><SharePointWebControls:DateTimeField ID="NewsDate"
     FieldName="FCA0BBA0-870C-4D42-A34A-41A69749F963" runat="server">
     </SharePointWebControls:DateTimeField></p>
     <p><PublishingWebControls:RichImageField ID="NewsImage"
     FieldName="8218A8D9-912C-47E7-AAD2-12AA10B42BE3" runat="server">
     </PublishingWebControls:RichImageField></p>
    
     </asp:Content>
  12. Add the following code to the elements file of the "NewsPageLayout" module.
     <Module Name="NewsPageLayout" 
    Url="_catalogs/masterpage" List="116" >
     <File Path="NewsPageLayout\NewsPageLayout.aspx" Url="NewsPageLayout.aspx"
     Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE" 
     ReplaceContent="TRUE" Level="Published" >
     <Property Name="Title" Value="$Resources:SPWorld_News,NewsPageLayout;" />
     <Property Name="MasterPageDescription" Value="$Resources:SPWorld_News,NewsPageLayout;" />
     <Property Name="ContentType" Value="$Resources:cmscore,contenttype_pagelayout_name;" />
     <Property Name="PublishingPreviewImage"
     Value="~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;
     /Preview Images/WelcomeSplash.png, ~SiteCollection/_catalogs/masterpage/$Resources:
     core,Culture;/Preview Images/WelcomeSplash.png" />
     <Property Name="PublishingAssociatedContentType"
     Value=";#$Resources:SPWorld_News,NewsContentType;;
     #0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF39007A5224C9C2804A46B028C4F78283A2CB;#">
     </Property>
     </File>
     </Module>
  13. Don't forget to add the Resources folder, then add a resource file named "SPWorld_News.resx" (as used in the previous steps) and add the keys below to it.
    News                     News
    NewsBody                 News Body
    NewsBrief                News Brief
    NewsContentType          News Content Type
    NewsContentTypeDesc      News Content Type Desc.
    NewsDate                 News Date
    NewsGroup                News
    NewsImage                News Image
    NewsPageLayout           News Page Layout
    NewsTitle                News Title
  14. Finally, deploy the solution.
  15. The next steps explain how to add the news content type to the Pages library through the SharePoint UI. Note: we will perform these steps programmatically in the next article, without the need to do them manually in SharePoint; a minimal sketch of the programmatic approach follows the steps below.

    1. Go to Site Contents, then Pages, then Library, Library Settings.
      Opening the library settings of the Pages library
    2. Add the news content type to the Pages library.
      Adding existing content type to the pages library
    3. Then
      Selecting the content type to add it to pages library
    4. Go to Pages Library, Files, New Document, select News Content Type.
      Adding new document of the news content type to pages library
    5. Write the page title.
      Creating new page of news content type to pages library.
    6. Open the page to edit it.
      The Pages library now contains a new page of the news content type.
    7. Now we can see the page layout after adding the title, body, brief, date, and image. Finally, click Save to save the news page.
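For reference, the sketch below shows one possible way to do the same configuration with the server object model from a farm-solution feature receiver. It is a minimal sketch only, not the code of the upcoming article; "News Content Type" and "NewsPageLayout.aspx" are the names created in the steps above, and error handling is omitted.

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;

public static class NewsProvisioning
{
    public static void ConfigurePagesLibrary(SPWeb web)
    {
        // Attach the existing "News Content Type" (created above) to the Pages library.
        SPList pages = web.Lists["Pages"];
        SPContentType newsType = web.Site.RootWeb.ContentTypes["News Content Type"];
        if (pages.ContentTypes[newsType.Name] == null)
        {
            pages.ContentTypesEnabled = true;
            pages.ContentTypes.Add(newsType);
            pages.Update();
        }

        // Make NewsPageLayout.aspx the default page layout of this publishing web.
        PublishingWeb pubWeb = PublishingWeb.GetPublishingWeb(web);
        foreach (PageLayout layout in pubWeb.GetAvailablePageLayouts())
        {
            if (layout.Name.Equals("NewsPageLayout.aspx", StringComparison.OrdinalIgnoreCase))
            {
                pubWeb.SetDefaultPageLayout(layout, true);
                pubWeb.Update();
                break;
            }
        }
    }
}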

NEW MS TEST MANAGER TOOL AVAILABLE!!


Copy and Clone Test Suites and Tests Across Team Collections

MTM Copy can copy and clone Test Suites and Tests across Team Projects and Team Collections.

Microsoft introduced a Clone feature in Microsoft Test Manager 2012, but it can only clone within the SAME Team Project: http://msdn.microsoft.com/en-us/library/hh543843.aspx

  1. Start with an empty Team Project.


  2. Open the MTM Copy tool and specify the Source and Target Team Projects; they can be in different Collections.
  3. In the mapping panel, select the desired Test Plans, Test Suites, and Test Cases you wish to clone.


  4. Create a Test Plan in the Target Project, or select an existing Test Plan.


  5. Click "Start Migration", and you're done!


  6. Each Test Case that has been copied is saved in the Completed Items Mapping; this prevents the same Test Case from being copied twice and avoids duplicates.

 

How To : Understand and Use the Search Logic for Silverlight Controls in Coded UI Test

Understanding the Search logic for Silverlight controls in Coded UI Test

 

One of the primary objectives during recording in Coded UI Test is to generate a robust search condition for a UI control to be uniquely identifiable during playback. In this post I’ll mention some of the search logic specific to the Silverlight UI Automation support within Coded UI Test introduced in the VS 2010 Feature Pack 2.

Search condition generation during Recording

 

For Silverlight controls, Coded UI Test relies primarily on the automation properties of the control. It looks for a search property in the following descending order of priority:

  1. AutomationId
  2. Name
  3. LabeledBy
  4. HelpText
  5. AccessKey
  6. AcceleratorKey

Specific controls support additional searchable properties. For instance, Button supports “DisplayText”, Image supports “Source”, a DataGrid cell supports “ColumnIndex”, and so on. The various search configurations available in Coded UI Test are applicable to Silverlight control search too (except for the SearchConfiguration.VisibleOnly configuration).
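As a rough illustration, the sketch below hand-codes such a search instead of relying on the recorder. It is only a sketch: the URL and the "SubmitButton" AutomationId are made-up values, and it assumes the Silverlight control classes (SilverlightControl, SilverlightButton) that ship with the Feature Pack 2 extension.

// Hedged sketch only: assumes Microsoft.VisualStudio.TestTools.UITesting.SilverlightControls
// is referenced; the URL and AutomationId are hypothetical.
BrowserWindow browser = BrowserWindow.Launch(new Uri("http://localhost/MySilverlightApp/"));
SilverlightControl rootVisual = new SilverlightControl(browser);   // Silverlight root visual element
SilverlightButton submit = new SilverlightButton(rootVisual);

// Property names are passed as plain strings to avoid depending on specific PropertyNames members.
submit.SearchProperties.Add("AutomationId", "SubmitButton");
submit.SearchProperties.Add("DisplayText", "Submit", PropertyExpressionOperator.Contains);

Mouse.Click(submit);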


For a Silverlight object hosted in IE, the search hierarchy will consist of an IE search part and Silverlight search part –

 

Top Level Window → {IE Search Hierarchy} → Silverlight Root Visual Element → Parent of Target Element → Target Element.

 

Additional hierarchy can be generated in between the Parent and Silverlight Root Visual element based on the specific control requirement. For example, certain controls such as Datagrid, Tree, TreeItem, Tab, List Item are, at almost all times, included in the search hierarchy if they are found in the ancestor hierarchy of the target element. As an example, the extended search hierarchy of a DataGrid Cell will show up as something like

 

TopLevelWindow → {IE Search Hierarchy} → Root Visual Element (Silverlight) → DataGridTable (Silverlight) → DataGridRow (Silverlight) → DataGridCell (Silverlight)

 

 

Search path during Playback

 

The overall search logic remains identical to what is followed in other UI technologies. It is a breadth first search wherein the top level window is first searched and used as a container for searching the next control in the search condition hierarchy. This is done recursively until the leaf control in the search hierarchy is found.

 

As an example, for a simple button inside a Silverlight page, the search hierarchy will be typically of the format –

 

TopLevelWindow → Document (IE BODY Tag) → Pane (IE DIV Tag) → Custom (IE OBJECT Tag) → Root Visual Element (Silverlight) → Button (Silverlight).

 

For the top-down search up to the IE Object Tag, the existing search features and settings in Coded UI Test are applicable. Once the search switches to the Silverlight technology (i.e., the Root Visual element of the Silverlight page), there are a few limitations to the search.


 

What is missing currently in Silverlight control search?

 

  1. PlaybackSettings.ShouldSearchFailFast
  • This setting is not honored currently. However, some level of customization is possible using the PlaybackSettings.SearchTimeout and Playback.PlaybackSettings.WaitForReadyTimeout settings to tweak the timeout at which the search should abort. The latter may not seem obvious, and is hence explained in more detail in a section below.

 

  2. PlaybackSettings.MatchExactHierarchy

Silverlight control search does not currently honor MatchExactHierarchy = false, so the search condition specified for the entire Silverlight hierarchy needs to be accurate for the search to succeed. In the above example, that means the Root Visual Element and the Button control.

 

  3. Playback.PlaybackSettings.SmartMatchOptions
  • Control-level smart match is not currently supported (i.e., SmartMatchOptions.Control is ignored).

Note that Regex match is not yet supported in Coded UI Test. So the only option available is to specify the PropertyExpressionOperator.Contains condition operator in the search properties.

For example, if the button’s name is of format “Submit<SomeDynamicId>”, the search property can be defined as –

uISubmitButton.SearchProperties.Add("Name", "Submit", PropertyExpressionOperator.Contains);

 

  4. There is no concept of FilterProperties as there is for the web technology in Coded UI Test.

 

 

How to handle search failures because of slow page loading?

 

If the XAP download takes a long time, the search during playback will fail because the Silverlight controls will not yet have been rendered in the visual tree. The internal search algorithm uses a wait-and-retry logic to search for a control while checking the visual tree rendering status at each wait interval (this interval is increased exponentially on each iteration).

I will not be explaining the details here, but the important thing to note is that if there is no rendering happening within a polling interval, the search will return with failure status.

This polling interval is currently set to half of Playback.PlaybackSettings.WaitForReadyTimeout, which has a default value of 60 seconds (i.e., the default polling interval is 30 seconds). To tackle slow page load times, you can configure Playback.PlaybackSettings.WaitForReadyTimeout to a desired value, for example as sketched below.
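A minimal sketch (both timeouts are in milliseconds; the 120-second values are arbitrary examples):

// Give a slow-loading XAP more time before the search gives up.
Playback.PlaybackSettings.WaitForReadyTimeout = 120 * 1000; // default is 60 seconds
Playback.PlaybackSettings.SearchTimeout = 120 * 1000;       // overall search timeout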

Note: Playback.PlaybackSettings.WaitForReadyTimeout does affect the normal search failure time in scenarios where there is some visual rendering happening in the Silverlight page. So you would need to strike an appropriate balance based on the type of application you are testing.

Free Code to Create Cross-site Publishing Apps for SharePoint Online

Cross-site publishing is one of the powerful new capabilities in SharePoint 2013.  It enables the separation of data entry from display and breaks down the container barriers that have traditionally existed in SharePoint (ex: rolling up information across site collections). 


Cross-site publishing is delivered through search and a number of new features, including list/library catalogs, catalog connections, and the content search web part.  Unfortunately, SharePoint Online/Office 365 doesn’t currently support these features.  Until they are added to the service (possibly in a quarterly update), customers will be looking for alternatives to close the gap.  In this post, I will outline several alternatives for delivering cross-site and search-driven content in SharePoint Online and how to template these views for reuse.

I’m a huge proponent of SharePoint Online.  After visiting several Microsoft data centers, I feel confident that Microsoft is better positioned to run SharePoint infrastructure than almost any organization in the world.  SharePoint Online has very close feature parity to SharePoint on-premise, with the primary gaps existing in cross-site publishing and advanced business intelligence.  Although these capabilities have acceptable alternatives in the cloud (as will be outlined in this post), organizations looking to maximize the cloud might consider SharePoint running in IaaS for immediate access to these features.

 

Apps for SharePoint

The new SharePoint app model is fully supported in SharePoint Online and can be used to deliver customizations to SharePoint using any web technology.  New SharePoint APIs can be used with the app model to deliver an experience similar to cross-site publishing.  In fact, the content search web part could be re-written for delivery through the app model as an “App Part” for SharePoint Online. 
Although the app model provides great flexibility and reuse, it does come with some drawbacks.  Because an app part is delivered through a glorified IFRAME, it would be challenging to navigate to a new page from within the app part.  A link within the app would only navigate within the IFRAME (not the parent of the IFRAME).  Secondly, there isn’t a great mechanism for templating a site to automatically leverage an app part on its page(s).  Apps do not work with site templates, so a site that contains an app cannot be saved as a template.  Apps can be “stapled” to sites, but the app installed event (which would be needed to add the app part to a page) only fires when the app is installed into the app catalog.

REST APIs and Script Editor

The script editor web part is a powerful new tool that can help deliver flexible customization into SharePoint Online.  The script editor web part allows a block of client-side script to be added to any wiki or web part page in a site.  Combined with the new SharePoint REST APIs, the script editor web part can deliver mash-ups very similar to cross-site publishing and the content search web part.  Unlike apps for SharePoint, the script editor isn’t constrained by IFRAME containers, app permissions, or templating limitations.  In fact, a well-configured script editor web part could be exported and re-imported into the web part gallery for reuse.

Cross-site publishing leverages “catalogs” for precise querying of specific content.  Any list or library can be designated as a catalog.  By making this designation, SharePoint will automatically create managed properties for the columns of the list/library and ultimately generate a search result source in sites that consume the catalog.  Although SharePoint Online doesn’t support catalogs, it supports the building blocks, such as managed properties and result sources.  These can be manually configured to provide the same precise querying in SharePoint Online and exploited in the script editor web part for display.

Calling Search REST APIs

<div id="divContentContainer"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "search/query?Querytext='ContentType:News'",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>

 

An easier approach might be to directly reference a list/library in the REST call of our client-side script.  This wouldn’t require manual search configuration and would provide real-time publishing (no waiting for new items to get indexed).  You could think of this approach as similar to a Content by Query web part working across site collections (possibly even farms); the REST API makes it all possible!

List REST APIs

<div id="divContentContainer"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 0",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>

 

The content search web part uses display templates to render search results in different arrangements (e.g., a list with images, an image carousel, etc.).  The content search web part leverages two types of display templates: the control template, which renders the container around the items, and the item template, which renders each individual item in the search results.  This is very similar to the way a Repeater control works in ASP.NET.  Display templates are authored using HTML, but are converted to client-side script automatically by SharePoint for rendering.  I mention this because our approach is very similar: we will leverage a container and then loop through and render items in script.  In fact, all the examples in this post were converted from display templates in a public site I’m working on.

Item display template for content search web part

<!--#_
var encodedId = $htmlEncode(ctx.ClientControl.get_nextUniqueId() + "_ImageTitle_");
var rem = index % 3;
var even = true;
if (rem == 1)
    even = false;

var pictureURL = $getItemValue(ctx, "Picture URL");
var pictureId = encodedId + "picture";
var pictureMarkup = Srch.ContentBySearch.getPictureMarkup(pictureURL, 140, 90, ctx.CurrentItem, "mtcImg140", line1, pictureId);
var pictureLinkId = encodedId + "pictureLink";
var pictureContainerId = encodedId + "pictureContainer";
var dataContainerId = encodedId + "dataContainer";
var dataContainerOverlayId = encodedId + "dataContainerOverlay";
var line1LinkId = encodedId + "line1Link";
var line1Id = encodedId + "line1";
_#-->
<div style="width: 320px; float: left; display: table; margin-bottom: 10px; margin-top: 5px;">
   <a href="_#= linkURL =#_">
      <div style="float: left; width: 140px; padding-right: 10px;">
         <img src="_#= pictureURL =#_" class="mtcImg140" style="width: 140px;" />
      </div>
      <div style="float: left; width: 170px">
         <div class="mtcProfileHeader mtcProfileHeaderP">_#= line1 =#_</div>
      </div>
   </a>
</div>

 

Script equivalent

<div id="divUnfeaturedNews"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://richdizzcom.sharepoint.com/sites/dallasmtcauth/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 0",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //get the details for each item
                var listData = data.d.results;
                var itemCount = listData.length;
                var processedCount = 0;
                var ul = $("<ul style='list-style-type: none; padding-left: 0px;' class='cbs-List'>");
                for (i = 0; i < listData.length; i++) {
                    $.ajax({
                        url: listData[i].__metadata["uri"] + "/FieldValuesAsHtml",
                        type: "GET",
                        headers: { "Accept": "application/json;odata=verbose" },
                        success: function (data) {
                            processedCount++;
                            var htmlStr = "<li style='display: inline;'><div style='width: 320px; float: left; display: table; margin-bottom: 10px; margin-top: 5px;'>";
                            htmlStr += "<a href='#'>";
                            htmlStr += "<div style='float: left; width: 140px; padding-right: 10px;'>";
                            htmlStr += setImageWidth(data.d.PublishingRollupImage, '140');
                            htmlStr += "</div>";
                            htmlStr += "<div style='float: left; width: 170px'>";
                            htmlStr += "<div class='mtcProfileHeader mtcProfileHeaderP'>" + data.d.Title + "</div>";
                            htmlStr += "</div></a></div></li>";
                            ul.append($(htmlStr));
                            if (processedCount == itemCount) {
                                $("#divUnfeaturedNews").append(ul);
                            }
                        },
                        error: function (data) {
                            alert(data.statusText);
                        }
                    });
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });

    function setImageWidth(imgString, width) {
        var img = $(imgString);
        img.css('width', width);
        return img[0].outerHTML;
    }
</script>

 

Even one of the more complex carousel views from my site took less than 30 minutes to convert to the script editor approach.

Advanced carousel script

<div id="divFeaturedNews">
    <div class="mtc-Slideshow" id="divSlideShow" style="width: 610px;">
        <div style="width: 100%; float: left;">
            <div id="divSlideShowSection">
                <div style="width: 100%;">
                    <div class="mtc-SlideshowItems" id="divSlideShowSectionContainer" style="width: 610px; height: 275px; float: left; border-style: none; overflow: hidden; position: relative;">
                        <div id="divFeaturedNewsItemContainer">
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </div>
</div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://richdizzcom.sharepoint.com/sites/dallasmtcauth/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 1&$top=4",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                var listData = data.d.results;
                for (i = 0; i < listData.length; i++) {
                    getItemDetails(listData, i, listData.length);
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });
    var processCount = 0;
    function getItemDetails(listData, i, count) {
        $.ajax({
            url: listData[i].__metadata["uri"] + "/FieldValuesAsHtml",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                processCount++;
                var itemHtml = "<div class='mtcItems' id='divPic_" + i + "' style='width: 610px; height: 275px; float: left; position: absolute; border-bottom: 1px dotted #ababab; z-index: 1; left: 0px;'>";
                itemHtml += "<div id='container_" + i + "' style='width: 610px; height: 275px; float: left;'>";
                itemHtml += "<a href='#' title='" + data.d.Caption_x005f_x0020_x005f_Title + "' style='width: 610px; height: 275px;'>";
                itemHtml += data.d.Feature_x005f_x0020_x005f_Image;
                itemHtml += "</a></div></div>";
                itemHtml += "<div class='titleContainerClass' id='divTitle_" + i + "' data-originalidx='" + i + "' data-currentidx='" + i + "' style='height: 25px; z-index: 2; position: absolute; background-color: rgba(255, 255, 255, 0.8); cursor: pointer; padding-right: 10px; margin: 0px; padding-left: 10px; margin-top: 4px; color: #000; font-size: 18px;' onclick='changeSlide(this);'>";
                itemHtml += data.d.Caption_x005f_x0020_x005f_Title;
                itemHtml += "<span id='currentSpan_" + i + "' style='display: none; font-size: 16px;'>" + data.d.Caption_x005f_x0020_x005f_Body + "</span></div>";
                $('#divFeaturedNewsItemContainer').append(itemHtml);

                if (processCount == count) {
                    allItemsLoaded();
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    }
    window.mtc_init = function (controlDiv) {
        var slideItems = controlDiv.children;
        for (var i = 0; i < slideItems.length; i++) {
            if (i > 0) {
                slideItems[i].style.left = '610px';
            }
        };
    };

    function allItemsLoaded() {
        var slideshows = document.querySelectorAll(".mtc-SlideshowItems");
        for (var i = 0; i < slideshows.length; i++) {
            mtc_init(slideshows[i].children[0]);
        }

        var div = $('#divTitle_0');
        cssTitle(div, true);
        var top = 160;
        for (i = 1; i < 4; i++) {
            var divx = $('#divTitle_' + i);
            cssTitle(divx, false);
            divx.css('top', top);
            top += 35;
        }
    }

    function cssTitle(div, selected) {
        if (selected) {
            div.css('height', 'auto');
            div.css('width', '300px');
            div.css('top', '10px');
            div.css('left', '0px');
            div.css('font-size', '26px');
            div.css('padding-top', '5px');
            div.css('padding-bottom', '5px');
            div.find('span').css('display', 'block');
        }
        else {
            div.css('height', '25px');
            div.css('width', 'auto');
            div.css('left', '0px');
            div.css('font-size', '18px');
            div.css('padding-top', '0px');
            div.css('padding-bottom', '0px');
            div.find('span').css('display', 'none');
        }
    }

    window.changeSlide = function (item) {
        //get all title containers
        var listItems = document.querySelectorAll('.titleContainerClass');
        var currentIndexVals = { 0: null, 1: null, 2: null, 3: null };
        var newIndexVals = { 0: null, 1: null, 2: null, 3: null };

        for (var i = 0; i < listItems.length; i++) {
            //current Index
            currentIndexVals[i] = parseInt(listItems[i].getAttribute('data-currentidx'));
        }

        var selectedIndex = 0; //selected Index will always be 0
        var leftOffset = '';
        var originalSelectedIndex = '';

        var nextSelected = '';
        var originalNextIndex = '';

        if (item == null) {
            var item0 = document.querySelector('[data-currentidx="' + currentIndexVals[0] + '"]');
            originalSelectedIndex = parseInt(item0.getAttribute('data-originalidx'));
            originalNextIndex = originalSelectedIndex + 1;
            nextSelected = currentIndexVals[0] + 1;
        }
        else {
            nextSelected = item.getAttribute('data-currentidx');
            originalNextIndex = item.getAttribute('data-originalidx');
        }

        if (nextSelected == 0) { return; }

        for (i = 0; i < listItems.length; i++) {
            if (currentIndexVals[i] == selectedIndex) {
                //this is the selected item, so move to bottom and animate
                var div = $('[data-currentidx="0"]');
                cssTitle(div, false);
                div.css('left', '-400px');
                div.css('top', '230px');

                newIndexVals[i] = 3;
                var item0 = document.querySelector('[data-currentidx="0"]');
                originalSelectedIndex = item0.getAttribute('data-originalidx');

                //animate
                div.delay(500).animate(
                    { left: '0px' }, 500, function () {
                    });
            }
            else if (currentIndexVals[i] == nextSelected) {
                //this is the NEW selected item, so resize and slide in as selected
                var div = $('[data-currentidx="' + nextSelected + '"]');
                cssTitle(div, true);
                div.css('left', '-610px');

                newIndexVals[i] = 0;

                //animate
                div.delay(500).animate(
                    { left: '0px' }, 500, function () {
                    });
            }
            else {
                //move up in queue
                var curIdx = currentIndexVals[i];
                var div = $('[data-currentidx="' + curIdx + '"]');

                var topStr = div.css('top');
                var topInt = parseInt(topStr.substring(0, topStr.length - 1));

                if (curIdx != 1 && nextSelected == 1 || curIdx > nextSelected) {
                    topInt = topInt - 35;
                    if (curIdx - 1 == 2) { newIndexVals[i] = 2 };
                    if (curIdx - 1 == 1) { newIndexVals[i] = 1 };
                }

                //move up
                div.animate(
                    { top: topInt }, 500, function () {
                    });
            }
        };

        if (originalNextIndex < 0)
            originalNextIndex = itemCount - 1;

        //adjust pictures
        $('#divPic_' + originalNextIndex).css('left', '610px');
        leftOffset = '-610px';

        $('#divPic_' + originalSelectedIndex).animate(
            { left: leftOffset }, 500, function () {
            });

        $('#divPic_' + originalNextIndex).animate(
            { left: '0px' }, 500, function () {
            });

        var item0 = document.querySelector('[data-currentidx="' + currentIndexVals[0] + '"]');
        var item1 = document.querySelector('[data-currentidx="' + currentIndexVals[1] + '"]');
        var item2 = document.querySelector('[data-currentidx="' + currentIndexVals[2] + '"]');
        var item3 = document.querySelector('[data-currentidx="' + currentIndexVals[3] + '"]');
        if (newIndexVals[0] != null) { item0.setAttribute('data-currentidx', newIndexVals[0]) };
        if (newIndexVals[1] != null) { item1.setAttribute('data-currentidx', newIndexVals[1]) };
        if (newIndexVals[2] != null) { item2.setAttribute('data-currentidx', newIndexVals[2]) };
        if (newIndexVals[3] != null) { item3.setAttribute('data-currentidx', newIndexVals[3]) };
    };
</script>

 

End-result of script editors in SharePoint Online

Separate authoring site collection

Final Thoughts

How To : Use the Office 365 API Client Libraries (Javascript and .Net)


One of the cool things with today’s Office 365 API Tooling update is that you can now access the Office 365 APIs using libraries available for .NET and JavaScript.

 

These libraries make it easier to interact with the REST APIs from the device or platform of your choice. And when I say platform of your choice, it really is! The Office 365 APIs and the client libraries support the following project types in Visual Studio today:

  1. .NET Windows Store Apps
  2. .NET Windows Store Universal Apps
  3. Windows Forms Applications
  4. WPF Applications
  5. ASP.NET MVC Web Applications
  6. ASP.NET Web Forms Applications
  7. Xamarin Android and iOS Applications
  8. Multi-device Hybrid Apps

P.S.: Support for more project types is on the way.

Few Things Before We Get Started

  • The authentication library is released as “alpha”.
    • If you don’t see something you want or if you think we missed addressing some scenarios/capabilities, let us know!
    • In this initial release of the authentication library, we focused on simplifying the getting-started experience, especially for Office 365 services, rather than on interoperability with other services that support OAuth; that is something later updates can make more generic.
  • The library is not meant to replace the Active Directory Authentication Library (ADAL); it is a wrapper over it (where it exists) that gives you a focused getting-started experience.
    • However, If you want to opt out and go “DIY”, you still can.

Setting Up Authentication

The first step to accessing Office 365 APIs via the client library is to get authenticated with Office 365.

Once you configure the required Office 365 service and its permissions, the tool will add the required client libraries for authentication and the service into your project.

Let's quickly look at what authenticating your client looks like.

Getting Authenticated

Office 365 APIs use the OAuth Common Consent Framework for authentication and authorization.

Below is the code to authenticate your .NET application:

Authenticator authenticator = new Authenticator();

AuthenticationInfo authInfo =
await authenticator.AuthenticateAsync(ExchangeResourceId);

Below is the JS code snippet used for authentication in Cordova projects:

var authContext = new O365Auth.Context();
authContext.getIdToken('https://outlook.office365.com/')
.then((function (token) {
    var client = new Exchange.Client('https://outlook.office365.com/ews/odata', 
                         token.getAccessTokenFn('https://outlook.office365.com'));
    client.me.calendar.events.getEvents().fetch()
        .then(function (events) {
            // get currentPage of events and logout
            var myevents = events.currentPage;
            authContext.logOut();
        }, function (reason) {
            // handle error
        });
}).bind(this), function (reason) {
    // handle error
});

Authenticator Class

The Authenticator class initializes the key stuff required for authentication:

1) Office 365 app client Id

2) Redirect URI

3) Authentication URI

You can find these settings in:

– For Web Applications – web.config

– For Windows Store Apps – App.xaml

– For Desktop Applications (Windows Forms & WPF) – AssemblyInfo.cs/.vb

– For Xamarin Applications – AssemblyInfo.cs

If you would like to provide these values at runtime rather than from the config files, you can do so by using the alternate constructor.


To authenticate, you call the AuthenticateAsync method by passing the service’s resource Id:

AuthenticationInfo authInfo = await authenticator.AuthenticateAsync(ExchangeResourceId);

If you are using the discovery service, you can specify the capability instead of the resource Id:

AuthenticationInfo authInfo =
await authenticator.AuthenticateAsync("Mail", ServiceIdentifierKind.Capability);

If you use the discovery service, the capability strings to use for the other services are Calendar, Contacts, and MyFiles.
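For example, requesting the Calendar capability follows the same pattern as the Mail call above:

// Same pattern as the Mail example; only the capability string changes.
AuthenticationInfo calendarAuthInfo =
    await authenticator.AuthenticateAsync("Calendar", ServiceIdentifierKind.Capability);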

NOTE:

– For now, if you want to use the discovery service, you will also need to configure a SharePoint resource, either Sites or My Files. This is because the discovery service currently uses SharePoint resource Id.

– Active Directory Graph & Sites do not support discovery service yet

Depending on your client, AuthenticateAsync will open the appropriate window for you to authenticate:

– For web applications, you will be redirected to the login page to authenticate

– For Windows Store apps, you will get a dialog box to authenticate

– For desktop apps, you will get a dialog window to authenticate


AuthenticationInfo Class

Once successfully authenticated, the method returns an AuthenticationInfo object, which helps you get the required access token:

ExchangeClient client =
new ExchangeClient(new Uri(ExchangeServiceRoot), authInfo.GetAccessToken);

It also helps you re-authenticate for a different resource when you create the service client.

AuthenticationInfo graphAuthInfo =
    await authInfo.ReauthenticateAsync("https://graph.windows.net/");

The library automatically handles token lifetime management by monitoring the expiration time of the access token and performing a refresh automatically.

That's it! Now you can make subsequent calls to the service to return the items you want.

Authentication Library

For .NET projects:

The library is available as a NuGet package, so if you want to add it manually to your project without the tool, you can do so. However, you will have to manually register an app in Azure Active Directory to authenticate against AAD.

Microsoft Office 365 Authentication Library for ASP.NET

Microsoft Office 365 Authentication Library for .NET (Android and iOS)


For Cordova projects:

You will need to use the Office 365 API tool, which generates aadgraph.js under the Scripts folder to handle authentication.

A Look At : Application Management and Governance in SharePoint 2013

Summary: Learn how to govern applications for SharePoint 2013 by creating a customization policy and understanding the app model, branding, and life-cycle management.


How will you manage the applications that are developed for your environment? What customizations do you allow in your applications, and what are your processes for managing those applications?

 

For effective and manageable applications, your organization should consider the following:

  • Customization policy   SharePoint 2013 includes customizable features and capabilities that span multiple product areas, such as business intelligence, forms, workflow, and content management. Customization can introduce risks to the stability, maintenance, and security of the environment. To support customization while controlling its scope, you should develop a customization policy.
  • Life-cycle management   Follow best practices to manage applications and keep your environments in sync.
  • Branding   If you are designing an information architecture and a set of sites to use across an organization, consider including branding in your governance plan. A formal set of branding policies helps ensure that sites consistently use enterprise imagery, fonts, themes, and other design elements.
  • Solutions or apps for SharePoint?   Decide whether a solution or an app for SharePoint would be the best choice for specific customizations.

Get developer guidance about customizing and branding SharePoint 2013 on MSDN: Build sites for SharePoint 2013.

This article is part of a set of articles about governance; other articles in the set describe other aspects of governance.

The What is governance? poster gives a summary of this content. Download the PDF version or Visio version, or Zoom into the model in full detail with Zoom.it from Microsoft.

Determine the types of customizations you want to allow and how to manage them. Your customization policy should include:

  • Service-level descriptions   What are the parameters for supporting and managing customizations in your environments? See Service-level agreements.
  • Guidelines for updating customizations   How do you manage changes to customizations, and how do you roll out those changes to your environments? Consider ways to manage source code, such as a source control system and standards for documenting the code.
  • Processes for analyzing   How do you understand whether a particular customization is working well in your environment, or how do you decide which ones to create, change, or retire?
  • Approved tools for customization   Consider development standards, such as coding best practices and the tools that you will use across your organization. For example, you should decide whether to allow the use of SharePoint Designer 2013 and Design Manager, and specify which site elements can be customized and by whom.
  • Process for piloting and testing customizations   How do you test and deploy customizations? How many people should be in a pilot testing group? What are your standards for testing and validating customizations?
  • Who is responsible for ongoing support   Who will be responsible for supporting customizations in your environments—individual teams or a central group?
  • Guidelines for packaging and deploying customizations   Do you have individual packages for each, or do you include several in a feature or solution? Which customizations should be apps for SharePoint instead of solutions? How do you ensure that customizations in one environment do not affect the rest of your SharePoint implementation?
  • Specific policies regarding each potential type of customization   What types of customizations do you allow?

    For more information about kinds of customizations and their potential risks, see the Customizations table later in this article. For more information about processes for managing customizations, see the white paper SharePoint Products and Technologies customization policy. Most of this content still applies to SharePoint 2013.

  • Policies around using the App Catalog and SharePoint Store   Which apps for SharePoint do you want to make available to your organization? Can users purchase apps directly? See Solutions or apps for SharePoint? later in this article for more information.

The highly customizable design of SharePoint products enables you to provide the look, behavior, or functionality that meets your business needs. Customizations can introduce risk to your environment, whether that risk is to the environment’s performance, availability, or supportability. Conversely, a “no customizations” policy severely restricts your organization’s ability to take advantage of the SharePoint platform.

Not all customizations are the same. You must decide carefully which kinds of customizations to allow in your environment, and you must ensure that those customizations support the performance, availability, and supportability you want. Your governance policy should balance the level of acceptable risk against the business needs of your organization.

What is considered a customization? All of the following are considered kinds of customizations in SharePoint products:

  • Configuration   Using the SharePoint user interface to configure SharePoint products.
  • Branding   Changing logos, styles, colors, master pages and page layouts, and so on to create a custom look for your SharePoint sites. See more about branding.
  • Custom code   Using developer tools to add or change functionality in SharePoint products or to interact with other applications. Risk can vary depending on kind of functionality and level of trust (full trust solutions should be rarely used; consider apps for SharePoint first).
    Tip: Sandboxed solutions are deprecated in this release, so they are not the best option for custom code in the long term.

Some customizations have very little risk or impact on your environment. Others have the potential for much higher risk and impact. The following table provides examples of different kinds of customizations, the risk level associated with that kind of customization, and potential issues that you might face if you allow that kind of customization.

Customizations

The table below lists each risk level with example customizations and their considerations or impact.

  • Unsupported/High   Unsupported customizations, such as direct changes to the database schema or modifying files on the file system.
    • Will not be supported through Microsoft Customer Support.
    • Will be unable to upgrade.
    Do not use.
  • Moderate to high   Creating applications that interact with or redirect actions in key pipelines, such as events, claims, and so on.
    • Potential for service outage or performance issues.
    • Might require rework at upgrade.
  • Moderate to low   Using a custom Web Part outside a sandbox environment, creating custom actions such as adding a menu item, or creating a custom site provisioning process.
    • Short or long-term performance issues or page errors.
    • Might require rework at upgrade.
  • Low   Using solutions in a sandbox environment.
    • Short-term performance issues; you can avoid some performance issues by using resource throttling and quotas.
  • Very low to no risk   Using apps for SharePoint, or using functionality within the product or configurations, such as associating a workflow with a list or using an instance of a built-in Web Part.
    • Minor configuration or page errors that would have to be addressed. Apps can be uninstalled or updated.
Note: For more information about customizations and upgrade, see Considerations for specific customizations.

 

 

Also, when you think through the customizations to allow in your environment, consider carefully whether a particular customization is necessary. If it recreates functionality that is already available in the product (such as creating a Web Part that does the same thing as the Content Editor Web Part or the Content by Query Web Part), then that might be unnecessary work.

Consider first whether the standard functionality can do what you want, or check the SharePoint Store to see if there is an app for SharePoint available that does what you need.

Follow these best practices to manage applications based on SharePoint 2013 throughout their life cycle:

  • Use separate development, preproduction, and production environments, and keep these environments as synchronized as possible so that you can accurately test your customizations.
  • Test all customizations before the first release, and test again after any updates have been made, before you release them to your production environment.
  • Use source code control and solution and feature versioning to track changes to code.

Development, test, and production environments

Consistent branding with a corporate style guide makes for more cohesive-looking sites and easier development. Store approved themes in the theme gallery for consistency so that users will know when they visit the site that they are in the right place.

SharePoint 2013 includes a new feature to use for branding, Design Manager. By using Design Manager, you can create a visual design for your website with whatever web design tool or HTML editor you prefer and then upload that design into SharePoint. Design Manager is the central hub and interface where you manage all aspects of a custom design.

Creating the visual design of a site often fits into a larger process, in which multiple people or organizations are involved. For a roadmap of the tasks from a larger perspective, see Design and branding in SharePoint 2013.

SharePoint 2013 has a new development model based on apps for SharePoint. Apps for SharePoint are self-contained pieces of functionality that extend the capabilities of a SharePoint website. An app may include SharePoint features such as lists, workflows, and site pages, but it can also use a remote web application and remote data in SharePoint. An app has few or no dependencies on any other software on the device or platform where it is installed, other than what is built into the platform. Apps have no custom code that runs on the SharePoint servers.

The guidance for whether to use apps for SharePoint or SharePoint solutions is to:

  • Design apps for end users

    Apps for SharePoint:

    • Are easy for users (tenant administrators and site owners) to discover and install.
    • Use safe SharePoint extensions.
    • Provide the flexibility to develop future upgrades.
    • Can integrate with cloud-based resources.
    • Are available for both SharePoint Online and on-premises SharePoint sites.
  • Use farm solutions for administrators

    SharePoint solutions:

    • Can access the server-side object-model APIs that are needed to extend SharePoint management, configuration, and security
    • Can extend Central Administration, Windows PowerShell cmdlets, timer jobs, custom backups, and so on.
    • Are installed by administrators.
    • Can have farm, web application, or site-collection scope.

Go to MSDN to get more information about the new development model, Apps for SharePoint compared with SharePoint solutions, and Deciding between apps for SharePoint and SharePoint solutions.

Set a policy for using apps for SharePoint in your organization. Can users purchase and download apps? How do you make your organization’s apps available? How do you tell if they’re being used?

  • SharePoint Store   Determine whether users can purchase or download apps from the SharePoint Store.
  • App Catalog   Make specific apps for SharePoint available to your users by adding them to the App Catalog.
  • App requests   Configure app requests to control which apps are purchased and how many licenses are available.
  • Monitor apps   Monitor specific apps in SharePoint Server 2013 to check for errors and to track usage.

In the market

Design Pattern Automation

Software development projects are becoming bigger and more complex every day. The more complex a project the more likely the cost of developing and maintaining the software will far outweigh the hardware cost.

There's a super-linear relationship between the size of software and the cost of developing and maintaining it. After all, large and complex software requires good engineers to develop and maintain it, and good engineers are hard to come by and expensive to keep around.

 

Despite the high total cost of ownership per line of code, a lot of boilerplate code is still written, much of which could be avoided with smarter compilers. Indeed, most boilerplate code stems from the repetitive implementation of design patterns. Some of these design patterns are so well understood that they could be implemented automatically if we could teach them to compilers.

Implementing the Observer pattern

Take, for instance, the Observer pattern. This design pattern was identified as early as 1995 and became the base of the successful Model-View-Controller architecture. Elements of this pattern were implemented in the first versions of Java (1995, Observable interface) and .NET (2001, INotifyPropertyChanged interface). Although the interfaces are a part of the framework, they still need to be implemented manually by developers.

The INotifyPropertyChanged interface simply contains one event named PropertyChanged, which needs to be signaled whenever a property of the object is set to a different value.

Let’s have a look at a simple example in .NET:

public class Person : INotifyPropertyChanged
{
    string firstName, lastName;

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        if (this.PropertyChanged != null)
        {
            this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    public string FirstName
    {
        get { return this.firstName; }
        set
        {
            this.firstName = value;
            this.OnPropertyChanged("FirstName");
            this.OnPropertyChanged("FullName");
        }
    }

    public string LastName
    {
        get { return this.lastName; }
        set
        {
            this.lastName = value;
            this.OnPropertyChanged("LastName");
            this.OnPropertyChanged("FullName");
        }
    }

    public string FullName
    {
        get { return string.Format("{0} {1}", this.firstName, this.lastName); }
    }
}

Properties ultimately depend on a set of fields, and we have to raise the PropertyChanged event for a property whenever we change a field that affects it.

Shouldn't it be possible for the compiler to do this work automatically for us? The long answer is that detecting dependencies between fields and properties is a daunting task if we consider all the corner cases that can happen: properties can have dependencies on fields of other objects, they can call other methods, or, even worse, they can call virtual methods or delegates unknown to the compiler. So there is no general solution to this problem, at least not if we expect compilation times in seconds or minutes rather than hours or days. However, in real life, a large share of properties are simple enough to be fully understood by a compiler. So the short answer is yes: a compiler could generate notification code for more than 90% of all properties in a typical application.

In practice, the same class could be implemented as follows:

[NotifyPropertyChanged]
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string FullName { get { return string.Format("{0} {1}", this.FirstName, this.LastName); } }
}

This code tells the compiler what to do (implement INotifyPropertyChanged) and not how to do it.

Boilerplate Code is an Anti-Pattern

The Observer (INotifyPropertyChanged) pattern is just one example of a pattern that usually causes a lot of boilerplate code in large applications. A typical code base is full of patterns that generate boilerplate. Even if they are not always recognized as “official” design patterns, they are patterns because they repeat massively throughout a code base. The most common causes of code repetition are listed below, followed by a concrete example of the resulting boilerplate:

  • Tracing, logging
  • Precondition and invariant checking
  • Authorization and audit
  • Locking and thread dispatching
  • Caching
  • Change tracking (for undo/redo)
  • Transaction handling
  • Exception handling
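To make the repetition concrete, here is a hypothetical data-access method (the Logger, Dal, and Customer names are purely illustrative) in which the useful work, a single query, is buried under hand-written tracing, exception-handling, and retry boilerplate:

public Customer GetCustomerById(Guid id)
{
    Logger.Debug("Entering GetCustomerById({0})", id);        // tracing
    for (int attempt = 1; attempt <= 3; attempt++)             // retry loop
    {
        try
        {
            Customer customer = Dal.Get<Customer>(id);         // the actual work
            Logger.Debug("Leaving GetCustomerById");           // tracing
            return customer;
        }
        catch (TimeoutException ex)                            // exception handling
        {
            Logger.Warn("Attempt {0} failed: {1}", attempt, ex.Message);
            if (attempt == 3) throw;
        }
    }
    return null; // never reached, but required by the compiler
}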

These features are difficult to encapsulate using normal OO techniques, which is why they are so often implemented as boilerplate code. Is that such a bad thing?

Yes.

Addressing cross-cutting concerns with boilerplate code violates fundamental principles of good software engineering:

  • The Single Responsibility Principle is violated when multiple concerns are being implemented in the same method, such as Validation, Security, INotifyPropertyChanged, and Undo/Redo in a single property setter.
  • The Open/Closed Principle, which states that software entities should be open for extension, but closed for modification, is best respected when new features can be added without modifying the original source code.
  • The Don’t Repeat Yourself principle abhors code repetition coming out of manual implementation of design patterns.
  • The Loose Coupling principle is infringed when a pattern is implemented manually, because it then becomes difficult to alter the implementation of the pattern. Note that coupling can occur not only between two components, but also between a component and a conceptual design. Swapping one library for another is usually easy if they share the same conceptual design, but adopting a different design requires many more modifications to the source code.

Additionally, boilerplate renders your code:

  • Harder to read and reason about when trying to understand what the code does to address the functional requirement. This added layer of complexity has a huge bearing on the cost of maintenance, considering that software maintenance consists of reading code 75% of the time!
  • Larger, which means not only lower productivity, but also higher cost of developing and maintaining the software, not counting a higher risk of introducing bugs.
  • Difficult to refactor and change. Changing a piece of boilerplate (to fix a bug, perhaps) requires changing every place where it has been applied. How do you even accurately identify where the boilerplate is used throughout a codebase that potentially spans many solutions and/or repositories? Find-and-replace…?

If left unchecked, boilerplate code has the nasty habit of growing around your code like a vine, taking over more space each time it is applied to a new method, until eventually you end up with a large codebase almost entirely covered by boilerplate. In one of my previous teams, a simple data access layer class had over a thousand lines of code, 90% of which was boilerplate to handle different types of SQL exceptions and retries.

I hope by now you see why using boilerplate code is a terrible way to implement patterns. It is actually an anti-pattern to be avoided because it leads to unnecessary complexity, bugs, expensive maintenance, loss of productivity and ultimately, higher software cost.

Design Pattern Automation and Compiler Extensions

In many cases, the struggle to make common boilerplate code reusable stems from the lack of native meta-programming support in mainstream statically typed languages such as C# and Java.

The compiler possesses an awful lot of information about our code that is normally outside our reach. Wouldn't it be nice if we could benefit from this information and write compiler extensions to help with our design patterns?

A smarter compiler would allow for:

  1. Build-time program transformation: to allow us to add features whilst preserving the code semantics and keeping the complexity and number of lines of code in check, so we can automatically implement parts of a design pattern that can be automated;
  2. Static code validation: for build-time safety to ensure we have used the design pattern correctly or to check parts of a pattern that cannot be automated have been implemented according to a set of predefined rules.

Example: ‘using’ and ‘lock’ keywords in C#

If you want proof that design patterns can be supported directly by the compiler, look no further than the using and lock keywords. At first sight they are purely redundant: everything they do could be written with existing constructs. But the designers of the language recognized their importance and created specific keywords for them.

Let’s have a look at the using keyword. The keyword is actually a part of the larger Disposable Pattern, composed of the following participants:

  • Resource Objects are objects that consume an external resource, such as a database connection.
  • Resource Consumers are instruction blocks or objects that consume Resource Objects during a given lifetime.

The Disposable Pattern is ruled by the following principles:

  1. Resource Objects must implement IDisposable.
  2. Implementation of IDisposable.Dispose must be idempotent, i.e. may be safely called several times.
  3. Resource Objects must have a finalizer (called destructor in C++).
  4. Implementation of IDisposable.Dispose must call GC.SuppressFinalize.
  5. Generally, objects that store Resource Objects into their state (field) are also Resource Objects, and children Resource Objects should be disposed by the parent.
  6. Instruction blocks that allocate and consume a Resource Object should be enclosed with the using keyword (unless the reference to the resource is stored in the object state, see previous point).

As you can see, the Disposable Pattern is richer than it appears at first sight. How is this pattern being automated and enforced?

  • The core .NET library provides the IDisposable interface.
  • The C# compiler provides the using keyword, which automates generation of some source code (a try/finally block).
  • FxCop can enforce a rule that says any disposable class also implements a finalizer, and the Dispose method calls GC.SuppressFinalize.

Therefore, the Disposable Pattern is a perfect example of a design pattern directly supported by the .NET platform.
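As a rough illustration, here is approximately what the C# compiler generates for a using block over a reference type (the real expansion also handles casting to IDisposable and other cases; connectionString is assumed to be defined elsewhere):

// What the developer writes:
using (SqlConnection connection = new SqlConnection(connectionString))
{
    // ... use the connection ...
}

// Roughly what the compiler generates:
SqlConnection connection = new SqlConnection(connectionString);
try
{
    // ... use the connection ...
}
finally
{
    if (connection != null)
        connection.Dispose();
}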

But what about patterns not intrinsically supported? They can be implemented using a combination of class libraries and compiler extensions. Our next example also comes from Microsoft.

Example: Code Contracts

Checking preconditions (and optionally postconditions and invariants) has long been recognized as a best practice to prevent defects in one component causing symptoms in another component. The idea is:

  • every component (typically every class) should be designed as a “cell”;
  • every cell is responsible for its own health;
  • therefore, every cell should check any input it receives from other cells.

Precondition checking can be considered a design pattern because it is a repeatable solution to a recurring problem.
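Implemented by hand, the pattern is a guard clause duplicated at the top of every public method; using the same GetBookById method that appears in the Code Contracts example below, it looks like this:

public Book GetBookById(Guid id)
{
    // Hand-written precondition check, repeated in every public method:
    if (id == Guid.Empty)
        throw new ArgumentException("id must not be empty.", "id");

    return Dal.Get<Book>(id);
}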

Microsoft Code Contracts (http://msdn.microsoft.com/en-us/devlabs/dd491992.aspx) is a perfect example of design pattern automation. Based on plain-old C# or Visual Basic, it gives you an API for expressing validation rules in the form of pre-conditions, post-conditions, and object invariants. However, this API is not just a class library. It translates into build-time transformation and validation of your program.

I won’t delve into too much detail on Code Contracts; simply put, it allows you to specify validation rules in code which can be checked at build time as well as at run time. For example:

public Book GetBookById(Guid id)
{
Contract.Requires(id != Guid.Empty);
return Dal.Get<Book>(id);
}

public Author GetAuthorById(Guid id)
{
Contract.Requires(id != Guid.Empty);

return Dal.Get<Author>(id);
}

Its binary rewriter can (based on your configurations) rewrite your built assembly and inject additional code to validate the various conditions that you have specified. If you inspect the transformed code generated by the binary rewriter you will see something along the lines of:

public Book GetBookById(Guid id)
{
    if (__ContractsRuntime.insideContractEvaluation <= 4)
    {
        try
        {
            ++__ContractsRuntime.insideContractEvaluation;
            __ContractsRuntime.Requires(id != Guid.Empty, (string)null, "id != Guid.Empty");
        }
        finally
        {
            --__ContractsRuntime.insideContractEvaluation;
        }
    }
    return Dal.Get<Program.Book>(id);
}

public Author GetAuthorById(Guid id)
{
    if (__ContractsRuntime.insideContractEvaluation <= 4)
    {
        try
        {
            ++__ContractsRuntime.insideContractEvaluation;
            __ContractsRuntime.Requires(id != Guid.Empty, (string)null, "id != Guid.Empty");
        }
        finally
        {
            --__ContractsRuntime.insideContractEvaluation;
        }
    }
    return Dal.Get<Program.Author>(id);
}

For more information on Microsoft Code Contracts, please read Jon Skeet’s excellent InfoQ article here (http://www.infoq.com/articles/code-contracts-csharp).

Whilst compiler extensions such as Code Contracts are great, officially supported extensions usually take years to develop, mature, and stabilize. There are so many different domains, each with its own set of problems, it’s impossible for official extensions to cover them all.

What we need is a generic framework to help automate and enforce design patterns in a disciplined way so we are able to tackle domain-specific problems effectively ourselves.

Generic Framework to Automate and Enforce Design Patterns

It may be tempting to see dynamic languages, open compilers (such as Roslyn), or re-compilers (such as Cecil) as solutions, because they expose the very details of the abstract syntax tree. However, these technologies operate at too low a level of abstraction, making all but the simplest transformations very complex to implement.

What we need is a high-level framework for compiler extension, based on the following principles:

1. Provide a set of transformation primitives, for instance:

  • intercepting method calls;
  • executing code before and after method execution;
  • intercepting access to fields, properties, or events;
  • introducing interfaces, methods, properties, or events to an existing class.

2. Provide a way to express where the primitives should be applied: it's good to be able to tell the compiler extension that you want to intercept some methods, but it's even better if we can say which methods should be intercepted!

3. Primitives must be safely composable

It’s natural to want to be able to apply multiple transformations to the same location(s) in our code, so the framework should give us the ability to compose transformations.

When you’re able to apply multiple transformations simultaneously some transformations might need to occur in a specific order in relation to others. Therefore the ordering of transformations needs to follow a well-defined convention but still allow us to override the default ordering where appropriate.

4. Semantics of enhanced code should not be affected

The transformation mechanism should be unobtrusive and leave the original code unaltered as much as possible whilst at the same time providing capabilities to validate the transformations statically. The framework should not make it too easy to “break” the intent of the source code.

5. Advanced reflection and validation abilities

By definition, a design pattern contains rules defining how it should be implemented. For instance, a locking design pattern may define that instance fields can only be accessed from instance methods of the same object. The framework must offer a mechanism to query the methods accessing a given field, and a way to emit clean build-time errors.

Aspect-Oriented Programming

Aspect-Oriented Programming (AOP) is a programming paradigm that aims to increase modularity by allowing the separation of concerns.

An aspect is a special kind of class containing code transformations (called advices), code matching rules (barbarically called pointcuts), and code validation rules. Design patterns are typically implemented by one or several aspects. There are several ways to apply aspects to code, and they depend greatly on the AOP framework. Custom attributes (annotations in Java) are a convenient way to add aspects to hand-picked elements of code. More complex pointcuts can be expressed declaratively using XML (e.g. Microsoft Policy Injection Application Block) or a Domain-Specific Language (e.g. AspectJ or Spring), or programmatically using reflection (e.g. LINQ over System.Reflection with PostSharp).

The weaving process combines advice with the original source code at the specified locations (no less barbarically called joinpoints). It has access to metadata about the original source code, so for compiled languages such as C# or Java there is an opportunity for the static weaver to perform static analysis and ensure the validity of the advice in relation to the pointcuts where it is applied.

Although aspect-oriented programming and design patterns have been independently conceptualized, AOP is an excellent solution to those who seek to automate design patterns or enforce design rules. Unlike low-level metaprogramming, AOP has been designed according to the principles cited above so anyone, and not only compiler specialists, can implement design patterns.

AOP is a programming paradigm and not a technology. As such, it can be implemented using different approaches. AspectJ, the leading AOP framework for Java, is now implemented directly in the Eclipse Java compiler. In .NET, where compilers are not open-source, AOP is best implemented as a re-compiler, transforming the output of the C# or Visual Basic compiler. The leading tool in .NET is PostSharp (see below). Alternatively, a limited subset of AOP can be achieved using dynamic proxies and service containers, and most dependency injection frameworks are able to offer at least method interception aspects.
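As a sketch of that "limited subset" approach, the following shows what method interception looks like with Castle DynamicProxy, a proxy library used by several .NET dependency injection containers; the IBookService/BookService types and the logging interceptor are purely illustrative:

using System;
using Castle.DynamicProxy;

public class LoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        Console.WriteLine("Calling " + invocation.Method.Name);        // before advice
        invocation.Proceed();                                          // call the real implementation
        Console.WriteLine("Returned from " + invocation.Method.Name);  // after advice
    }
}

// Usage: wrap an existing implementation in a proxy that applies the interceptor.
var generator = new ProxyGenerator();
IBookService service = generator.CreateInterfaceProxyWithTarget<IBookService>(
    new BookService(), new LoggingInterceptor());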

Example: Custom Design Patterns with PostSharp

PostSharp is a development tool for the automation and enforcement of design patterns in Microsoft .NET and features the most complete AOP framework for .NET.

To avoid turning this article into a PostSharp tutorial, let’s take a very simple pattern: dispatching of method execution back and forth between a foreground (UI) thread and a background thread. This pattern can be implemented using two simple aspects: one that dispatches a method to the background thread, and another that dispatches it to the foreground thread. Both aspects can be compiled by the free PostSharp Express. Let’s look at the first aspect: BackgroundThreadAttribute.

The generative part of the pattern is simple: we just need to create a Task that executes that method, and schedule execution of that Task.

[Serializable]
public sealed class BackgroundThreadAttribute : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        Task.Run( args.Proceed );
    }
}

The MethodInterceptionArgs class contains information about the context in which the method is invoked, such as the arguments and the return value. With this information, you will be able to invoke the original method, cache its return value, log its input arguments, or just about anything that’s required for your use case.
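For instance, here is a rough sketch of a naive caching aspect built on the same members (Proceed, Arguments, ReturnValue, Method); the cache-key construction is simplistic and purely illustrative:

// Requires System.Collections.Concurrent and the PostSharp.Aspects namespace.
[Serializable]
public sealed class CacheAttribute : MethodInterceptionAspect
{
    private static readonly ConcurrentDictionary<string, object> Cache =
        new ConcurrentDictionary<string, object>();

    public override void OnInvoke(MethodInterceptionArgs args)
    {
        // Naive key: method name plus argument values.
        string key = args.Method.Name + "(" + string.Join(",", args.Arguments.ToArray()) + ")";

        object cached;
        if (Cache.TryGetValue(key, out cached))
        {
            args.ReturnValue = cached;   // skip the original method entirely
            return;
        }

        args.Proceed();                  // invoke the original method
        Cache[key] = args.ReturnValue;   // remember its return value
    }
}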

For the validation part of the pattern, we would like to avoid having the custom attribute applied to methods that have a return value or a parameter passed by reference. If this happens, we would like to emit a build-time error. Therefore, we have to implement the CompileTimeValidate method in our BackgroundThreadAttribute class:

// Check that the method returns 'void' and has no out/ref arguments.
public override bool CompileTimeValidate( MethodBase method )
{
    MethodInfo methodInfo = (MethodInfo) method;

    if ( methodInfo.ReturnType != typeof(void) ||
         methodInfo.GetParameters().Any( p => p.ParameterType.IsByRef ) )
    {
        ThreadingMessageSource.Instance.Write( method, SeverityType.Error, "THR006",
                                               method.DeclaringType.Name, method.Name );
        return false;
    }

    return true;
}

The ForegroundThreadAttribute would look similar, using the Dispatcher object in WPF or the BeginInvoke method in WinForms.

The above aspects can be applied just like any other attribute, for example:

[BackgroundThread]
private void ReadFile(string fileName)
{
    DisplayText( File.ReadAllText(fileName) );
}

[ForegroundThread]
private void DisplayText( string content )
{
    this.textBox.Text = content;
}

The resulting source code is much cleaner than what we would get by directly using tasks and dispatchers.

One may argue that C# 5.0 addresses the issue better with the async and await keywords. This is correct, and is a good example of the C# team identifying a recurring problem that they decided to address with a design pattern implemented directly in the compiler and in core class libraries. While the .NET developer community had to wait until 2012 for this solution, PostSharp offered one as early as 2006.
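For comparison, here is a minimal async/await sketch of the same read-and-display scenario (it assumes the method lives in the same form/window class and requires the System.IO and System.Threading.Tasks namespaces):

private async void ReadFileAsync(string fileName)
{
    // Read on a thread-pool thread, then resume on the UI thread to update the control.
    string content = await Task.Run(() => File.ReadAllText(fileName));
    this.textBox.Text = content;
}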

How long must the .NET community wait for solutions to other common design patterns, for instance INotifyPropertyChanged? And what about design patterns that are specific to your company’s application framework?

Smarter compilers would allow you to implement your own design patterns, so you would not have to rely on the compiler vendor to improve the productivity of your team.

Downsides of AOP

I hope by now you are convinced that AOP is a viable solution to automate design patterns and enforce good design, but it’s worth bearing in mind that there are several downsides too:

1. Lack of staff preparation

As a paradigm, AOP is not taught in undergraduate programs, and it’s rarely touched at master level. This lack of education has contributed towards a lack of general awareness about AOP amongst the developer community.

Despite being 20 years old, AOP is misperceived as a ‘new’ paradigm, a misperception that often proves to be the stumbling block for adoption by all but the most adventurous development teams.

Design patterns are almost the same age, but the idea that design patterns can be automated and validated is recent. We cited some meaningful precedents in this article involving the C# compiler, the .NET class library, and Visual Studio Code Analysis (FxCop), but these precedents have not yet been generalized into a broader call for design pattern automation.

2. Surprise factor

Because staff and students alike are not well prepared, there can be an element of surprise when they encounter AOP, because the application has additional behaviors that are not directly visible in the source code. Note: what is surprising is the intended effect of AOP, namely that the compiler is doing more than usual, not some unwanted side effect.

There can also be some surprise of an unintended effect, when a bug in the use of an aspect (or in a pointcut) causes the transformation to be applied to unexpected classes and methods. Debugging such errors can be subtle, especially if the developer is not aware that aspects are being applied to the project.

These surprise factors can be addressed by:

  • IDE integration, which helps to visualize (a) which additional features have been applied to the source displayed in the editor and (b) to which elements of code a given aspect has been applied. At time of writing only two AOP frameworks provide correct IDE integration: AspectJ (with the AJDT plug-in for Eclipse) and PostSharp (for Visual Studio).
  • Unit testing by the developer – aspects, as well as the fact that aspects have been applied properly, must be unit tested like any other source code artifact.
  • Not relying on naming conventions when applying aspects to code, but instead relying on structural properties of the code such as type inheritance or custom attributes. Note that this debate is not unique to AOP: convention-based programming has been recently gaining momentum, although it is also subject to surprises.

3. Politics

Use of design pattern automation is generally a politically sensitive issue because it also addresses separation of concerns within a team. Typically, senior developers will select design patterns and implement aspects, and junior developers will use them. Senior developers will write validation rules to ensure hand-written code respects the architecture. The fact that junior developers don’t need to understand the whole code base is actually the intended effect.

This argument is typically delicate to tackle because it takes the point of view of a senior manager, and may injure the pride of junior developers.

Ready-Made Design Pattern Implementation with PostSharp Pattern Libraries

As we’ve seen with the Disposable Pattern, even seemingly simple design patterns can actually require complex code transformation or validation. Some of these transformations and validations are complex but still possible to implement automatically. Others can be too complex for automatic processing and must be done manually.

Fortunately, there are also simple design patterns that can be automated easily by anyone (exception handling, transaction handling, and security) with an AOP framework.

After many years of market experience, and after realizing that most customers were implementing the same aspects over and over again, the PostSharp team began to provide highly sophisticated and optimized ready-made implementations of the most common design patterns.

PostSharp currently provides ready-made implementations for the following design patterns:

  • Multithreading: reader-writer-synchronized threading model, actor threading model, thread-exclusive threading model, thread dispatching;
  • Diagnostics: high-performance and detailed logging to a variety of back-ends including NLog and Log4Net;
  • INotifyPropertyChanged: including support for composite properties and dependencies on other objects;
  • Contracts: validation of parameters, fields, and properties.
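As a brief sketch of what the ready-made patterns look like in application code (the attribute and namespace names below follow PostSharp's pattern libraries and may vary between versions):

using PostSharp.Patterns.Contracts;   // [Required]
using PostSharp.Patterns.Model;       // [NotifyPropertyChanged]

[NotifyPropertyChanged]
public class Customer
{
    [Required]                        // contract: the property must not be null or empty
    public string Name { get; set; }

    public string City { get; set; }
}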

Now, with ready-made implementations of design patterns, teams can start enjoying the benefits of AOP without learning AOP.

Summary

So-called high-level languages such as Java and C# still force developers to write code at a level of abstraction that is too low for the problem at hand. Because of the limitations of mainstream compilers, developers are forced to write a lot of boilerplate code, adding to the cost of developing and maintaining applications. Boilerplate stems from the massive implementation of patterns by hand, in what may be the largest use of copy-paste inheritance in the industry.

The inability to automate design pattern implementation probably costs billions to the software industry, not even counting the opportunity cost of having qualified software engineers spending their time on infrastructure issues instead of adding business value.

However, a large amount of boilerplate could be removed if we had smarter compilers to allow us to automate implementation of the most common patterns. Hopefully, future language designers will understand design patterns are first-class citizens of modern application development, and should have appropriate support in the compiler.

But actually, there is no need to wait for new compilers. They already exist, and are mature. Aspect-oriented programming was specifically designed to address the issue of boilerplate code. Both AspectJ and PostSharp are mature implementations of these concepts, and are used by the largest companies in the world. And both PostSharp and Spring Roo provide ready-made implementations of the most common patterns. As always, early adopters can get productivity gains several years before the masses follow.