Category Archives: Design Pattern

Free Code to Create Cross-site Publishing Apps for SharePoint Online

Cross-site publishing is one of the powerful new capabilities in SharePoint 2013.  It enables the separation of data entry from display and breaks down the container barriers that have traditionally existed in SharePoint (ex: rolling up information across site collections). 


Cross-site publishing is delivered through search and a number of new features, including list/library catalogs, catalog connections, and the content search web part.  Unfortunately, SharePoint Online/Office 365 doesn’t currently support these features.  Until they are added to the service (possibly in a quarterly update), customers will be looking for alternatives to close the gap.  In this post, I will outline several alternatives for delivering cross-site and search-driven content in SharePoint Online and how to template these views for reuse.

I’m a huge proponent of SharePoint Online.  After visiting several Microsoft data centers, I feel confident that Microsoft is better positioned to run SharePoint infrastructure than almost any organization in the world.  SharePoint Online has very close feature parity to SharePoint on-premises, with the primary gaps being cross-site publishing and advanced business intelligence.  Although these capabilities have acceptable alternatives in the cloud (as will be outlined in this post), organizations looking to maximize the cloud might consider SharePoint running in IaaS for immediate access to these features.

 

Apps for SharePoint

The new SharePoint app model is fully supported in SharePoint Online and can be used to deliver customizations to SharePoint using any web technology.  New SharePoint APIs can be used with the app model to deliver an experience similar to cross-site publishing.  In fact, the content search web part could be re-written for delivery through the app model as an “App Part” for SharePoint Online. 
Although the app model provides great flexibility and reuse, it does come with some drawbacks.  Because an app part is delivered through a glorified IFRAME, it would be challenging to navigate to a new page from within the app part.  A link within the app would only navigate within the IFRAME (not the parent of the IFRAME).  Secondly, there isn’t a great mechanism for templating a site to automatically leverage an app part on its page(s).  Apps do not work with site templates, so a site that contains an app cannot be saved as a template.  Apps can be “stapled” to sites, but the app installed event (which would be needed to add the app part to a page) only fires when the app is installed into the app catalog.

REST APIs and Script Editor

The script editor web part is a powerful new tool that can help deliver flexible customization into SharePoint Online.  The script editor web part allows a block of client-side script to be added to any wiki or web part page in a site.  Combined with the new SharePoint REST APIs, the script editor web part can deliver mash-ups very similar to cross-site publishing and the content search web part.  Unlike apps for SharePoint, the script editor isn’t constrained by IFRAME containers, app permissions, or templating limitations.  In fact, a well-configured script editor web part could be exported and re-imported into the web part gallery for reuse.

Cross-site publishing leverages “catalogs” for precise querying of specific content.  Any list/library can be designated as a catalog.  By making this designation, SharePoint will automatically create managed properties for the columns of the list/library and ultimately generate a search result source in sites that consume the catalog.  Although SharePoint Online doesn’t support catalogs, it supports the building blocks, such as managed properties and result sources.  These can be manually configured to provide the same precise querying in SharePoint Online and exploited in the script editor web part for display.

Calling Search REST APIs

<div id="divContentContainer"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "search/query?Querytext='ContentType:News'",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>
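For completeness, here is a minimal sketch of what the “script to build UI HERE” placeholder could do with the response above. It assumes the default odata=verbose response shape and the standard Title and Path managed properties; the target div and the emitted markup are just examples, not part of the API.

            success: function (data) {
                //rows of relevant results in the verbose search response
                var rows = data.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results;
                var html = "<ul>";
                for (var i = 0; i < rows.length; i++) {
                    //each row exposes its managed properties as Key/Value cells
                    var cells = rows[i].Cells.results;
                    var title = "", path = "";
                    for (var j = 0; j < cells.length; j++) {
                        if (cells[j].Key == "Title") { title = cells[j].Value; }
                        else if (cells[j].Key == "Path") { path = cells[j].Value; }
                    }
                    html += "<li><a href='" + path + "'>" + title + "</a></li>";
                }
                html += "</ul>";
                $("#divContentContainer").append(html);
            },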

 

An easier approach might be to reference a list/library directly in the REST call of our client-side script.  This wouldn’t require manual search configuration and would provide real-time publishing (no waiting for new items to get indexed).  You can think of this approach as a content by query web part that works across site collections (possibly even farms), and the REST API makes it all possible!

List REST APIs

<div id="divContentContainer"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 0",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>

 

The content search web part uses display templates to render search results in different arrangements (ex: list with images, image carousel, etc.).  There are two types of display templates the content search web part leverages: the control template, which renders the container around the items, and the item template, which renders each individual item in the search results.  This is very similar to the way a Repeater control works in ASP.NET.  Display templates are authored in HTML, but are converted to client-side script automatically by SharePoint for rendering.  I mention this because our approach is very similar: we will leverage a container and then loop through and render items in script.  In fact, all the examples in this post were converted from display templates in a public site I’m working on.

Item display template for content search web part

<!--#_
var encodedId = $htmlEncode(ctx.ClientControl.get_nextUniqueId() + "_ImageTitle_");
var rem = index % 3;
var even = true;
if (rem == 1)
    even = false;

var pictureURL = $getItemValue(ctx, "Picture URL");
var pictureId = encodedId + "picture";
var pictureMarkup = Srch.ContentBySearch.getPictureMarkup(pictureURL, 140, 90, ctx.CurrentItem, "mtcImg140", line1, pictureId);
var pictureLinkId = encodedId + "pictureLink";
var pictureContainerId = encodedId + "pictureContainer";
var dataContainerId = encodedId + "dataContainer";
var dataContainerOverlayId = encodedId + "dataContainerOverlay";
var line1LinkId = encodedId + "line1Link";
var line1Id = encodedId + "line1";
 _#-->
<div style="width: 320px; float: left; display: table; margin-bottom: 10px; margin-top: 5px;">
   <a href="_#= linkURL =#_">
      <div style="float: left; width: 140px; padding-right: 10px;">
         <img src="_#= pictureURL =#_" class="mtcImg140" style="width: 140px;" />
      </div>
      <div style="float: left; width: 170px">
         <div class="mtcProfileHeader mtcProfileHeaderP">_#= line1 =#_</div>
      </div>
   </a>
</div>

 

Script equivalent

<div id="divUnfeaturedNews"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://richdizzcom.sharepoint.com/sites/dallasmtcauth/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 0",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //get the details for each item
                var listData = data.d.results;
                var itemCount = listData.length;
                var processedCount = 0;
                var ul = $("<ul style='list-style-type: none; padding-left: 0px;' class='cbs-List'>");
                for (i = 0; i < listData.length; i++) {
                    $.ajax({
                        url: listData[i].__metadata["uri"] + "/FieldValuesAsHtml",
                        type: "GET",
                        headers: { "Accept": "application/json;odata=verbose" },
                        success: function (data) {
                            processedCount++;
                            var htmlStr = "<li style='display: inline;'><div style='width: 320px; float: left; display: table; margin-bottom: 10px; margin-top: 5px;'>";
                            htmlStr += "<a href='#'>";
                            htmlStr += "<div style='float: left; width: 140px; padding-right: 10px;'>";
                            htmlStr += setImageWidth(data.d.PublishingRollupImage, '140');
                            htmlStr += "</div>";
                            htmlStr += "<div style='float: left; width: 170px'>";
                            htmlStr += "<div class='mtcProfileHeader mtcProfileHeaderP'>" + data.d.Title + "</div>";
                            htmlStr += "</div></a></div></li>";
                            ul.append($(htmlStr));
                            if (processedCount == itemCount) {
                                $("#divUnfeaturedNews").append(ul);
                            }
                        },
                        error: function (data) {
                            alert(data.statusText);
                        }
                    });
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });

    function setImageWidth(imgString, width) {
        var img = $(imgString);
        img.css('width', width);
        return img[0].outerHTML;
    }
</script>

 

Even one of the more complex carousel views from my site took less than 30 minutes to convert to the script editor approach.

Advanced carousel script

<div id="divFeaturedNews">
    <div class="mtc-Slideshow" id="divSlideShow" style="width: 610px;">
        <div style="width: 100%; float: left;">
            <div id="divSlideShowSection">
                <div style="width: 100%;">
                    <div class="mtc-SlideshowItems" id="divSlideShowSectionContainer" style="width: 610px; height: 275px; float: left; border-style: none; overflow: hidden; position: relative;">
                        <div id="divFeaturedNewsItemContainer">
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </div>
</div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://richdizzcom.sharepoint.com/sites/dallasmtcauth/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 1&$top=4",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                var listData = data.d.results;
                for (i = 0; i < listData.length; i++) {
                    getItemDetails(listData, i, listData.length);
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });
    var processCount = 0;
    function getItemDetails(listData, i, count) {
        $.ajax({
            url: listData[i].__metadata["uri"] + "/FieldValuesAsHtml",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                processCount++;
                var itemHtml = "<div class='mtcItems' id='divPic_" + i + "' style='width: 610px; height: 275px; float: left; position: absolute; border-bottom: 1px dotted #ababab; z-index: 1; left: 0px;'>";
                itemHtml += "<div id='container_" + i + "' style='width: 610px; height: 275px; float: left;'>";
                itemHtml += "<a href='#' title='" + data.d.Caption_x005f_x0020_x005f_Title + "' style='width: 610px; height: 275px;'>";
                itemHtml += data.d.Feature_x005f_x0020_x005f_Image;
                itemHtml += "</a></div></div>";
                itemHtml += "<div class='titleContainerClass' id='divTitle_" + i + "' data-originalidx='" + i + "' data-currentidx='" + i + "' style='height: 25px; z-index: 2; position: absolute; background-color: rgba(255, 255, 255, 0.8); cursor: pointer; padding-right: 10px; margin: 0px; padding-left: 10px; margin-top: 4px; color: #000; font-size: 18px;' onclick='changeSlide(this);'>";
                itemHtml += data.d.Caption_x005f_x0020_x005f_Title;
                itemHtml += "<span id='currentSpan_" + i + "' style='display: none; font-size: 16px;'>" + data.d.Caption_x005f_x0020_x005f_Body + "</span></div>";
                $('#divFeaturedNewsItemContainer').append(itemHtml);

                if (processCount == count) {
                    allItemsLoaded();
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    }
    window.mtc_init = function (controlDiv) {
        var slideItems = controlDiv.children;
        for (var i = 0; i < slideItems.length; i++) {
            if (i > 0) {
                slideItems[i].style.left = '610px';
            }
        };
    };

    function allItemsLoaded() {
        var slideshows = document.querySelectorAll(".mtc-SlideshowItems");
        for (var i = 0; i < slideshows.length; i++) {
            mtc_init(slideshows[i].children[0]);
        }

        var div = $('#divTitle_0');
        cssTitle(div, true);
        var top = 160;
        for (i = 1; i < 4; i++) {
            var divx = $('#divTitle_' + i);
            cssTitle(divx, false);
            divx.css('top', top);
            top += 35;
        }
    }

    function cssTitle(div, selected) {
        if (selected) {
            div.css('height', 'auto');
            div.css('width', '300px');
            div.css('top', '10px');
            div.css('left', '0px');
            div.css('font-size', '26px');
            div.css('padding-top', '5px');
            div.css('padding-bottom', '5px');
            div.find('span').css('display', 'block');
        }
        else {
            div.css('height', '25px');
            div.css('width', 'auto');
            div.css('left', '0px');
            div.css('font-size', '18px');
            div.css('padding-top', '0px');
            div.css('padding-bottom', '0px');
            div.find('span').css('display', 'none');
        }
    }

    window.changeSlide = function (item) {
        //get all title containers
        var listItems = document.querySelectorAll('.titleContainerClass');
        var currentIndexVals = { 0: null, 1: null, 2: null, 3: null };
        var newIndexVals = { 0: null, 1: null, 2: null, 3: null };

        for (var i = 0; i < listItems.length; i++) {
            //current index
            currentIndexVals[i] = parseInt(listItems[i].getAttribute('data-currentidx'));
        }

        var selectedIndex = 0; //selected index will always be 0
        var leftOffset = '';
        var originalSelectedIndex = '';

        var nextSelected = '';
        var originalNextIndex = '';

        if (item == null) {
            var item0 = document.querySelector('[data-currentidx="' + currentIndexVals[0] + '"]');
            originalSelectedIndex = parseInt(item0.getAttribute('data-originalidx'));
            originalNextIndex = originalSelectedIndex + 1;
            nextSelected = currentIndexVals[0] + 1;
        }
        else {
            nextSelected = item.getAttribute('data-currentidx');
            originalNextIndex = item.getAttribute('data-originalidx');
        }

        if (nextSelected == 0) { return; }

        for (i = 0; i < listItems.length; i++) {
            if (currentIndexVals[i] == selectedIndex) {
                //this is the selected item, so move to bottom and animate
                var div = $('[data-currentidx="0"]');
                cssTitle(div, false);
                div.css('left', '-400px');
                div.css('top', '230px');

                newIndexVals[i] = 3;
                var item0 = document.querySelector('[data-currentidx="0"]');
                originalSelectedIndex = item0.getAttribute('data-originalidx');

                //animate
                div.delay(500).animate(
                    { left: '0px' }, 500, function () {
                    });
            }
            else if (currentIndexVals[i] == nextSelected) {
                //this is the NEW selected item, so resize and slide in as selected
                var div = $('[data-currentidx="' + nextSelected + '"]');
                cssTitle(div, true);
                div.css('left', '-610px');

                newIndexVals[i] = 0;

                //animate
                div.delay(500).animate(
                    { left: '0px' }, 500, function () {
                    });
            }
            else {
                //move up in queue
                var curIdx = currentIndexVals[i];
                var div = $('[data-currentidx="' + curIdx + '"]');

                var topStr = div.css('top');
                var topInt = parseInt(topStr.substring(0, topStr.length - 1));

                if (curIdx != 1 && nextSelected == 1 || curIdx > nextSelected) {
                    topInt = topInt - 35;
                    if (curIdx - 1 == 2) { newIndexVals[i] = 2 };
                    if (curIdx - 1 == 1) { newIndexVals[i] = 1 };
                }

                //move up
                div.animate(
                    { top: topInt }, 500, function () {
                    });
            }
        };

        if (originalNextIndex < 0)
            originalNextIndex = itemCount - 1;

        //adjust pictures
        $('#divPic_' + originalNextIndex).css('left', '610px');
        leftOffset = '-610px';

        $('#divPic_' + originalSelectedIndex).animate(
            { left: leftOffset }, 500, function () {
            });

        $('#divPic_' + originalNextIndex).animate(
            { left: '0px' }, 500, function () {
            });

        var item0 = document.querySelector('[data-currentidx="' + currentIndexVals[0] + '"]');
        var item1 = document.querySelector('[data-currentidx="' + currentIndexVals[1] + '"]');
        var item2 = document.querySelector('[data-currentidx="' + currentIndexVals[2] + '"]');
        var item3 = document.querySelector('[data-currentidx="' + currentIndexVals[3] + '"]');
        if (newIndexVals[0] != null) { item0.setAttribute('data-currentidx', newIndexVals[0]) };
        if (newIndexVals[1] != null) { item1.setAttribute('data-currentidx', newIndexVals[1]) };
        if (newIndexVals[2] != null) { item2.setAttribute('data-currentidx', newIndexVals[2]) };
        if (newIndexVals[3] != null) { item3.setAttribute('data-currentidx', newIndexVals[3]) };
    };
</script>

 

End-result of script editors in SharePoint Online

Separate authoring site collection

Final Thoughts

How To : Use Git Tools for TFS Integration

Git – TFS Integration – Why it matters


For many small development shops, the idea of using TFS and a centralized source control repository is anathema. The mere thought of being restricted by a software configuration manager on how and when to branch or merge cuts against everything they cherish in software development.

 

Git is their natural and chosen ground for managing source code. The freedom and flexibility of using Git enables them to work where they are. This is especially true if they are working as part of a distributed team on modular projects.

Microsoft addressed many of the existing concerns with TFS source control with the advent of TFS 2012 and local workspaces. However, even though local workspaces enable great flexibility in offline work, they are still ultimately tied to a central repository and the policies and restrictions imposed on it.

 

Enter Git support in TFS. Git support currently comes in two forms: standalone Git support in Visual Studio, and Git support with TFS.

Git support in Visual Studio is completely straightforward. Simply change the source control plug-in selection to the Microsoft Git Provider, and all the power and flexibility of Git is available to the Visual Studio developer, including private branches and online collaboration with Git hosts such as GitHub and Bitbucket.


Configuring Git for Visual Studio Source Control

However, from an ALM perspective, the real power and the compelling feature of Microsoft’s integration with Git is the ability to work with TFS.

 

Developers still get all the advantages and flexibility of Git, but can also take advantage of the ALM features of TFS such as work item tracking, team tools and integrated build. The Git – TFS integration gets us much closer to the ultimate goal of true cross-platform support in a single ALM toolset.

 

The TFS – Git integration can be utilized in a couple of ways. The first option is the ability to essentially synchronize a Git repository with TFS source control using the Git-TF utility. This utility makes it easy to clone sources from TFS, fetch updates from TFS, and push changes back to TFS.
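As a rough sketch of that round trip from the command line (the collection URL and server path below are placeholders, not real values):

# clone a TFVC folder into a new local Git repository
git tf clone https://tfs.example.com/tfs/DefaultCollection $/MyProject/Main

# later, fetch and merge the latest TFS changesets into the current branch
git tf pull

# and check pending commits back in to TFS as a changeset
git tf checkin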

 

What’s more, it fully supports TFS shelvesets and work item integration, which presents some exciting possibilities. The features and functionality Git-TF provides makes it a compelling solution and a credible compromise between centrally managed teams with source control and distributed teams with distributed source control.

The second option, available now only through Microsoft’s hosted TFS Service, is the ability for organizations to create TFS Team Projects with Git-hosted source control (this ability is reportedly planned for on-premises TFS in the next release). This is a fairly exciting development.

 

Having the choice between native TFS version control and Git when creating a team project opens many doors that hitherto were locked shut.


Xcode IDE connected to a TFS-hosted repository

Eclipse, Xcode, Visual Studio, and any other IDE that supports Git can now be used to leverage the powerful ALM features TFS provides.

As an ALM consultant, that’s the part that excites me the most. Hosting all development efforts in a single environment that supports all the various technologies in play, and being able to track and manage those efforts with agility and transparency, is a huge benefit to any organization that delivers solutions on multiple platforms.

 

Even those who don’t will now have the option to at least evaluate the feasibility of utilizing TFS in development environments not typically associated with a Microsoft project.

The mythical promised land of cross-platform ALM may have just become a little less mythical.

 

Microsoft’s Tool for Git and TFS Integration: http://gittf.codeplex.com


Working with Teams

The Git-TF tool is most easily used by a single developer or multiple developers working independently with their own isolated Git repos. That is, each developer uses Git-TF to clone a local repo where they can then use Git to manage their local development that will eventually be checked in to TFS. In this “hub and spoke” configuration, all code is shared through TFS at the “hub” and each developer using Git becomes a “spoke”. Developers looking to collaborate using Git’s distributed sharing capabilities will want to work in a specific configuration described below.

Most often, developers collaborating with Git have cloned from a common repo. When it comes time to share divergent changes, conflict resolution is easy because each repository shares the same common base version. Many times, conflicts are automatically resolved. One of the keys to this merging of histories is that each commit is assigned a unique identifier that is generated by the contents of the commit. When working with Git-TF, two repositories cloned from the same TFS path will not have the same commit IDs unless the clones were done at the same point in TFS history, and with the same depth. In the event that two Git repos that were independently cloned using Git-TF share changes directly, the result will be a baseless merge of the repositories and a large number of conflicts. For this reason, it is not recommended that teams using Git-TF ever share changes directly through Git (i.e. using git push and git pull).

Instead, it is recommended that a team working with Git-TF and collaborating with Git do so by designating a single repo as the point of contact with TFS. This configuration may look as follows for a team of three developers:

          [TFS]      [Shared Git repo]
            |         ^ (2)  |       \
            |        /       |        \
            |       /        |         \
            V (1)  /         V (3)      V (4)
       [Alice's Repo]   [Bob's Repo]   [Charlie's Repo]
 

In the configuration above the actions would be as follows:

  1. Using the git tf clone command, Alice clones a path from TFS into a local Git repo.
  2. Next, Alice uses git push to push the commit created in her local Git repo into the team’s shared Git repo.
  3. Bob can then use git clone to clone down the changes that Alice pushed.
  4. Charlie can also use git clone to clone down the changes that Alice pushed.

Both Bob and Charlie only ever interact with the team’s shared Git repo using git push and git pull. They can also interact directly with one another’s repos (or with Alice’s), but should never use Git-TF commands to interact with TFVC.

When working with the team, Alice will typically develop locally and use git push and git pull to share changes with the team. When the team decides they have changes to share with TFS, Alice will use git tf checkin to share those changes (typically git tf checkin --shallow will be used). Likewise, if there are changes that the team needs from TFVC, Alice will perform a git tf pull, using the --merge or --rebase options as appropriate, and then use git push to share the changes with the team.
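A hedged sketch of Alice’s day-to-day commands in this configuration might look like the following (the remote name and branch are assumptions; adjust them to your setup):

# bring the latest TFVC changes into Alice's repo, merging rather than rebasing shared history
git tf pull --merge

# share the merged result with the team's shared Git repo
git push origin master

# when the team is ready to publish its accumulated work to TFS
git tf checkin --shallow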

Note that (until Issue 77 is addressed) all changes coming into the TFVC repository will come in as if from Alice’s TFS identity. This is fine if only Alice has an identity on that TFVC project but it may well not be what you want if Bob and Charlie also had valid identities in that TFS project.

Rebase vs. Merge

Once changes have been fetched from TFS using git tf pull (or git tf fetch), those changes must either be merged with the HEAD or have any changes since the last fetch rebased on top of FETCH_HEAD. Git-TF allows developers to work in either manner, though if the repo that is sharing changes with TFS has shared any commits with other Git users, then this rebase may result in significant conflicts (see The Perils of Rebasing). For this reason, it is recommended that any team working in the aforementioned team configuration use git tf pull with the default --merge option (or use git merge FETCH_HEAD to incorporate changes made in TFS after fetching manually).

Recommended Git Settings

When using the Git-TF tools, there are a few recommended settings that should make it easier to work with other developers that are using TFS.

Line Endings

core.autocrlf = false

Git has a feature to allow line endings to be normalized for a repository, and it provides options for how those line endings should be set when files are checked out. TFS does not have any feature to normalize line endings – it stores exactly what is checked in by the user. When using Git-TF, choosing to normalize line endings to Unix-style line endings (LF) will likely result in TFS users (especially those using VS) changing the line endings back to Windows-style line endings (CRLF). As a result, it is recommended to set the core.autocrlf option to false, which will keep line endings unchanged in the Git repo.

Ignore case

core.ignorecase = true

TFS does not allow multiple files that differ only in case to exist in the same folder at the same time. Git users working on non-Windows machines could commit files to their repo that differ only in case, and attempting to check in those changes to TFS will result in an error. To avoid these types of errors, the core.ignorecase option should be set to true.
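Both settings can be applied per repository after cloning; for example (run from inside the repo created by git tf clone):

git config core.autocrlf false
git config core.ignorecase true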

How to connect a SharePoint 2013 Document Library to Outlook 2013

 

One of the key methods of gaining user adoption of SharePoint is ensuring and pushing the integration it has with Microsoft Office to information workers. After all, information workers generally use Outlook as their ‘mother ship’. Getting those users to switch immediately to SharePoint, or asking them to visit a document library in a SharePoint site they need to access, could take time, especially since it means opening a browser, navigating to the site, and covering their beloved Outlook client in the process.

 

The following describes how to connect a typical SharePoint 2013 document library to Outlook 2013 client.

  1. Access your SharePoint site and go into the relevant Documents library. In the example below, I clicked on the default Team Site Documents repository link in the Quick Launch bar, which holds around 140 documents.

 

  2. OK, that’s the document library displayed. Now to the Library tab on the ribbon bar; the option we are looking for is within the Library options available there.

  3. When the Library ribbon is displayed, click the Connect To Outlook button in the Connect & Export section. Note: if Connect To Outlook is greyed out, ensure that Outlook 2013 is fully operational. I’ve come across examples where Outlook is installed but no email account has been enabled in Outlook; if that’s the case, this button will be greyed out.

  4. Once the Connect To Outlook button is clicked, you may receive a warning message asking you to confirm that you allow SharePoint to connect with Outlook. Click ALLOW.

  5. Outlook 2013 will be displayed. A dialog will then appear asking you to confirm that you wish to connect the document library to Outlook. The dialog shows the site name and document library title, along with the URL of the document library being connected. Below, and to the right, is an ADVANCED button that shows more information about the connection; the following screenshot shows the information displayed if the Advanced button is clicked. There is not much you can do on that screen for now, so click YES to confirm the connection.

Here’s an example of the associated Advanced dialog. The most interesting aspect is the Permissions line. For a document library connection to Outlook, this will be set to READ. This is by design, and for good reason: things like classified metadata are not exposed as writeable from Outlook, nor are other document library settings like check-in/check-out. However, this does not prevent you from modifying a file in the resulting list. If the document needs to be updated, simply double-click the document to open it in the local application, click the Edit Offline option, make your changes, click Save, click Close, and a prompt should appear allowing you to update the copy on the server.

Once completed, the documents will be listed in Outlook. The following screenshot shows the result of a shared document library from a SharePoint 2013 site connected to Outlook 2013. Note the following features, which in my view are awesome for user adoption, particularly for those whose centre of the universe happens to be the Outlook client; these are worth explaining to users without going into jargon:

  • Users are able to switch from connected library to connected library using the navigation options, and each connected library shows the number of unread items (un-previewed or un-opened documents).
  • Each document (if a previewer is available), when clicked, will display a preview of the document, meaning you can read a Word document, for example, without having to open it in the client application.
  • Information concerning the state of the document is displayed, showing the last modifier, whether the document is checked out, when it was last modified, and the document size.

 

Note: there is a problem I have noticed in the preview section when highlighting any file whilst working with SharePoint 2013 and Office 2013 on a sandbox; the message reads:

‘This file cannot be previewed because of an error with the following previewer: Microsoft xxxxx previewer. To open this file in its own program, double-click it.’

There is an article that seems to describe the issue (though it does not directly mention when it is likely to occur), and it is known to Microsoft. A description of the alternatives while a fix is being provided is available here: http://support.microsoft.com/kb/983097. I will investigate this further and update this article.

Easy and Practical Tips on How To Achieve Code Maintainability

No such thing as prototyping. I had the pleasure of working with a team who had to maintain and extend a project that started its life as a prototype. A couple of bright programmers put together a concept they thought would benefit their company.

 

It was quickly slapped together and shown to their bosses and a marketing type or two. I know a few of you know where I’m going with this. Within a month, the marketing types had a client signed on to use this new service. Having been thrown together without any real architecture, the product was a real problem to maintain.

I’m not saying never prototype, but when you do, keep in mind, this code may not be thrown away. Code the prototype using good software design skills.

Use TDD. I’m going to approach this from a different direction than most do. Yes, Test Driven Development is great for maintainability because if you break something, your tests will let you know it. But it is also a great way to document your code. Documentation in the comments is often wrong, either because the code changes but the documentation doesn’t, or because the original programmer does not take the time to write clear documentation. Or, most likely, the original programmer never gets around to writing the comments at all. As long as the tests are being run, they will usually be a reflection of how the programmer expected the process to perform.

If you fully test your code, the next programmer can get a really good idea of what you were thinking about by how you set up your tests. There is a lot you can learn from how you build the objects you pass into your code.

Don’t be cute. Sure, that way of using a while loop instead of a for loop is cool and different, but the next programmer has no idea why you did it. If you need to do something non-standard, document it in place and include why.

Peer review. We all get in a hurry trying to meet a deadline and the brain locks up when trying to remember how to do something simple. We write some type of hack to get around it, thinking we’ll go back and fix it later when more sleep and more caffeine have done their magic. But by then, you’ve slept and forgotten all about the hack. Having to explain why you did it that way to another programmer keeps some really bizarre code from getting checked in.

Build process and dependency control. At first glance, this may not seem to be an important part of writing maintainable code. But on any project, there is a huge curve in getting to know and understand that project. If the next developer doesn’t have to spend time figuring out what dependencies the project requires and what settings to change in the IDE, you’ve cut down a bit on the time it takes to maintain the project.

Read, read, and read some more. It’s a great time to be a programmer. There are tons of articles and blogs that contain sample code all over the Internet. The publishing industry is trying hard to keep up with the ever-changing landscape. Reading code that contains best practices is an obvious way to improve your own code and help you create maintainable code. But also, reading code that does not follow best practices is a great way to see how not to do it. The trick is to know the difference between the two.

Refactor. When you have the logic worked out and your code now works, take some time to look through your code and see where you can tighten it up. This is a good time to see if you’ve repeated code that can be moved into its own method. Use this time to read the code like a maintenance programmer. Would you understand this code if you saw it for the first time?

Leave the code in better condition than when you found it. Many of us are loath to change working code just because it’s “ugly,” and I’m not giving out a license to wholesale refactor any process you open in an editor. But if you’re updating code that is surrounded by hard-to-maintain code, don’t take that as permission to write more bad code.

Final Thoughts

Thinking that your code will never be touched again is many things, but especially unrealistic. At the very least, business requirements change and may require modifications to your code. You may even be the person that has to maintain it, and trust me, after 6 months or so of writing other code, you will not be in the same frame of mind. Spend some time writing code that won’t be cursed by the next programmer.

How To : Develop and Deploy Azure Applications Deep Dive (Part 1)


 


There are many reasons to deploy an application or services onto Azure, the Microsoft cloud services platform. These include reducing operation and hardware costs by paying for just what you use, building applications that are able to scale almost infinitely, enormous storage capacity, geo-location … the list goes on and on.

Yet a platform is intellectually interesting only when developers can actually use it. Developers are the heart and soul of any platform release—the very definition of a successful release is the large number of developers deploying applications and services on it. Microsoft has always focused on providing the best development experience for a range of platforms—whether established or emerging—with Visual Studio, and that continues for cloud computing. Microsoft added direct support for building Azure applications to Visual Studio 2010 and Visual Web Developer 2010 Express.

This article will walk you through using Visual Studio 2010 for the entirety of the Azure application development lifecycle. Note that even if you aren’t a Visual Studio user today, you can still evaluate Azure development for free, using the Azure support in Visual Web Developer 2010 Express.

Creating a Cloud Service

Start Visual Studio 2010, click on the File menu and choose New | Project to bring up the New Project dialog. Under Installed Templates | Visual C# (or Visual Basic), select the Cloud node. This displays an Enable Azure Tools project template that, when clicked, will show you a page with a button to install the Azure Tools for Visual Studio.

Before installing the Azure Tools, be sure to install IIS on your machine. IIS is used by the local development simulation of the cloud. The easiest way to install IIS is by using the Web Platform Installer available at microsoft.com/web. Select the Platform tab and click to include the recommended products in the Web server.

Download and install the Azure Tools and restart Visual Studio. As you’ll see, the Enable Azure Tools project template has been replaced by an Azure Cloud Service project template. Select this template to bring up the New Cloud Service Project dialog shown in Figure 1. This dialog enables you to add roles to a cloud service.


Figure 1 Adding Roles to a New Cloud Service Project

An Azure role is an individually scalable component running in the cloud where each instance of a role corresponds to a virtual machine (VM) instance.

There are two types of role:

  • A Web role is a Web application running on IIS. It is accessible via an HTTP or HTTPS endpoint.
  • A Worker role is a background processing application that runs arbitrary .NET code. It also has the ability to expose Internet-facing and internal endpoints.

As a practical example, I can have a Web role in my cloud service that implements a Web site my users can reach via a URL such as http://[somename].cloudapp.net. I can also have a Worker role that processes a set of data used by that Web role.

I can set the number of instances of each role independently, such as three Web role instances and two Worker role instances, and this corresponds to having three VMs in the cloud running my Web role and two VMs in the cloud running my Worker role.
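Instance counts live in the cloud service project’s ServiceConfiguration.cscfg file. A trimmed sketch for the example above might look like the following (the service and role names are illustrative, and other settings are omitted):

<ServiceConfiguration serviceName="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="3" />
  </Role>
  <Role name="WorkerRole1">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>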

You can use the New Cloud Service Project dialog to create a cloud service with any number of Web and Worker roles and use a different template for each role. You can choose which template to use to create each role. For example, you can create a Web role using the ASP.NET Web Role template, WCF Service Role template, or the ASP.NET MVC Role template.

After adding roles to the cloud service and clicking OK, Visual Studio will create a solution that includes the cloud service project and a project corresponding to each role you added. Figure 2 shows an example cloud service that contains two Web roles and a Worker role.


Figure 2 Projects Created for Roles in the Cloud Service

The Web roles are ASP.NET Web application projects with only a couple of differences. WebRole1 contains references to the following assemblies that are not referenced with a standard ASP.NET Web application:

  • Microsoft.WindowsAzure.Diagnostics (diagnostics and logging APIs)
  • Microsoft.WindowsAzure.ServiceRuntime (environment and runtime APIs)
  • Microsoft.WindowsAzure.StorageClient (.NET API to access the Azure storage services for blobs, tables and queues)

The file WebRole.cs contains code to set up logging and diagnostics and a trace listener in the web.config/app.config that allows you to use the standard .NET logging API.
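For reference, the trace listener entry the template adds to web.config looks roughly like the following (the listener name is illustrative and the assembly version details are omitted); it is what routes standard System.Diagnostics.Trace calls into the Azure diagnostics infrastructure.

<system.diagnostics>
  <trace>
    <listeners>
      <add name="AzureDiagnostics"
           type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics" />
    </listeners>
  </trace>
</system.diagnostics>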

The cloud service project acts as a deployment project, listing which roles are included in the cloud service, along with the definition and configuration files. It provides Azure-specific run, debug and publish functionality.

It is easy to add or remove roles in the cloud service after project creation has completed. To add other roles to this cloud service, right-click on the Roles node in the cloud service and select Add | New Web Role Project or Add | New Worker Role Project. Selecting either of these options brings up the Add New Role dialog where you can choose which project template to use when adding the role.

You can add any ASP.NET Web Role project to the solution by right-clicking on the Roles node, selecting Add | Web Role Project in the solution, and selecting the project to associate as a Web role.

To delete, simply select the role to delete and hit the Delete key. The project can then be removed.

You can also right-click on the roles under the Roles node and select Properties to bring up a Configuration tab for that role (see Figure 3). This Configuration tab makes it easy to add or modify the values in both the ServiceConfiguration.cscfg and ServiceDefinition.csdef files.


Figure 3 Configuring a Role

When developing for Azure, the cloud service project in your solution must be the StartUp project for debugging to work correctly. A project is the StartUp project when it is shown in bold in the Solution Explorer. To set the active project, right-click on the project and select Set as StartUp project.

Data in the Cloud

Now that you have your solution set up for Azure, you can leverage your ASP.NET skills to develop your application.

As you are coding, you’ll want to consider the Azure model for making your application scalable. To handle additional traffic to your application, you increase the number of instances for each role. This means requests will be load-balanced across your roles, and that will affect how you design and implement your application.

In particular, it dictates how you access and store your data. Many familiar data storage and retrieval methods are not scalable, and therefore are not cloud-friendly. For example, storing data on the local file system shouldn’t be used in the cloud because it doesn’t scale.

To take advantage of the scaling nature of the cloud, you need to be aware of the new storage services. Azure Storage provides scalable blob, queue, and table storage services, and Microsoft SQL Azure provides a cloud-based relational database service built on SQL Server technologies. Blobs are used for storage of named files along with metadata. The queue service provides reliable storage and delivery of messages. The table service gives you structured storage, where a table is a set of entities that each contain a set of properties.

To help developers use these services, the Azure SDK ships with a Development Storage service that simulates the way these storage services run in the cloud. That is, developers can write their applications targeting the Development Storage services using the same APIs that target the cloud storage services.

Debugging

To demonstrate running and debugging on Azure locally, let’s use one of the samples from code.msdn.microsoft.com/windowsazuresamples. This MSDN Code Gallery page contains a number of code samples to help you get started with building scalable Web application and services that run on Azure. Download the samples for Visual Studio 2010, then extract all the files to an accessible location like your Documents folder.

The Development Fabric requires running in elevated mode, so start Visual Studio 2010 as an administrator. Then, navigate to where you extracted the samples and open the Thumbnails solution, a sample service that demonstrates the use of a Web role and a Worker role, as well as the use of the StorageClient library to interact with both the Queue and Blob services.

When you open the solution, you’ll notice three different projects. Thumbnails is the cloud service that associates two roles, Thumbnails_WebRole and Thumbnails_WorkerRole. Thumbnails_WebRole is the Web role project that provides a front-end application to the user to upload photos and adds a work item to the queue. Thumbnails_WorkerRole is the Worker role project that fetches the work item from the queue and creates thumbnails in the designated directory.

Add a breakpoint to the submitButton_Click method in the Default.aspx.cs file. This breakpoint will get hit when the user selects an image and clicks Submit on the page.

 
protected void submitButton_Click(
  object sender, EventArgs e) {
  if (upload.HasFile) {
    var name = string.Format("{0:10}", DateTime.Now.Ticks, 
      Guid.NewGuid());
    GetPhotoGalleryContainer().GetBlockBlob(name).UploadFromStream(upload.FileContent);

Now add a breakpoint in the Run method of the worker file, WorkerRole.cs, right after the code that tries to retrieve a message from the queue and checks if one actually exists. This breakpoint will get hit when the Web role puts a message in the queue that is retrieved by the worker.

 
while (true) {
  try {
    CloudQueueMessage msg = queue.GetMessage();
    if (msg != null) {
      string path = msg.AsString

To debug the application, go to the Debug menu and select Start Debugging. Visual Studio will build your project, start the Development Fabric, initialize the Development Storage (if run for the first time), package the deployment, attach to all role instances, and then launch the browser pointing to the Web role (see Figure 4).


Figure 4 Running the Thumbnails Sample

At this point, you’ll see that the browser points to your Web role and that the notifications area of the taskbar shows the Development Fabric has started. The Development Fabric is a simulation environment that runs role instances on your machine in much the way they run in the real cloud.

Right-click on the Azure notification icon in the taskbar and click on Show Development Fabric UI. This will launch the Development Fabric application itself, which allows you to perform various operations on your deployments, such as viewing logs and restarting and deleting deployments (see Figure 5). Notice that the Development Fabric contains a new deployment that hosts one Web role instance and one Worker role instance.


Figure 5 The Development Fabric

Look at the processes that Visual Studio attached to (Debug/Windows/Processes); you’ll notice there are three: WaWebHost.exe, WaWorkerHost.exe and iexplore.exe.

WaWebHost (Azure Web instance Host) and WaWorkerHost (Azure Worker instance Host) host your Web role and Worker role instances, respectively. In the cloud, each instance is hosted in its own VM, whereas on the local development simulation each role instance is hosted in a separate process and Visual Studio attaches to all of them.

By default, Visual Studio attaches using the managed debugger. If you want to use another one, like the native debugger, pick it from the Properties of the corresponding role project. For Web role projects, the debugger option is located under the project properties Web tab. For Worker role projects, the option is under the project properties Debug tab.

By default, Visual Studio uses the script engine to attach to Internet Explorer. To debug Silverlight applications, you need to enable the Silverlight debugger from the Web role project Properties.

Browse to an image you’d like to upload and click Submit. Visual Studio stops at the breakpoint you set inside the submitButton_Click method, giving you all of the debugging features you’d expect from Visual Studio. Hit F5 to continue; the submitButton_Click method generates a unique name for the file, uploads the image stream to Blob storage, and adds a message on the queue that contains the file name.

Now you will see Visual Studio pause at the breakpoint set in the Worker role, which means the worker received a message from the queue and it is ready to process the image. Again, you have all of the debugging features you would expect.

Hit F5 to continue; the worker will get the file name from the message queue, retrieve the image stream from the Blob service, create a thumbnail image, and upload the new thumbnail image to the Blob service’s thumbnails directory, which will be shown by the Web role.

In the next installment, we will look at the deployment process.

Understanding Business Intelligence

Though Business Intelligence is technology driven, it is more about business requirements and less about technology.  The BI champion/sponsor in the organization defines the vision and mission.


The leader must have the ability to precisely define the Business Intelligence requirements of the organization: the format of the reports, the relationship between the different data elements, and the version of the data to be used. The leaders also need to specify how much history needs to be included and how often the data needs to be provided to the different stakeholders of the process.

BI is then driven by the business objective, which may not always be to reduce cost or grow the top line/bottom line for an organization, but to use data analysis to bring efficiencies to the process or enhance the service experience for the customer.

The impetus for the BI initiative is the availability of data and technology for data analysis.  Ideally the organization should have at least two or three years of data in electronic format for meaningful analysis. The larger the volume of core data available in the systems, the more meaningful will be the output that can be expected from Business Intelligence.

However it must be recognized that data may not always be structured and data coming from multiple sources may not be in the same format. Data will have to be interpreted on the basis of logical assumptions and data gaps will have to be identified upfront so that changes can be made to business processes at the point of data capture.

The Management must be made aware that resource and time commitment for BI is much higher than resource commitment for IT projects and there must be a high level of management commitment to making the BI project a success.

  1. Business leaders and IT analysts will have to allocate a substantial amount of time to mapping business requirements to IT system capabilities, both during the design and implementation phases of the project.  The leaders and IT developers will have to constantly interact for a proper representation of the business questions in terms of analytical outputs.

  2. IT professionals involved in the process will have to explain to the business leadership the nature and meaning of the data that is available in the systems. They will have to clarify how exceptions are to be handled by the leadership and also the limitations of the systems. The implication is that IT resources will also have to make a time commitment for BI.

  3. Subject matter experts also have a significant role to play in Business Intelligence projects.  As ultimate users of the system, they are in the best position to test the outputs and validate the significance of the queries and the outputs derived from such queries.  They have the experience and the ability to flag exceptions and help fix them.  This implies that subject matter experts too need to contribute a significant amount of time for the success of the project.  They may even need to join the project team on a full-time basis during the testing phase.

The Project plan must reflect the level of business personnel who will be involved and the organization must be willing to release these people to join the project team as and when a request for their services is made.

It also follows that the project team and all those involved in BI must be open to learning and acquiring the skills that are essential for the effective functioning of the BI system that is being put in place.

Having said all this, it is necessary to point out that Business Intelligence cannot be a time bound project. Nor can the teams be disbanded with the first successful run of a data query or queries.

Query design, testing, redesign and use of new toolsets are inevitable. In short, BI is an evolving system that cannot be pinned down and bounded by traditional project management definitions.

The BI requirements change as business requirements evolve and change. No single requirement definition can be characterized as a permanent, unchanging requirement.  The Business leaders, IT analysts and Subject matter experts will have to be constantly engaged in designing and developing queries; testing the outputs on field formations; obtaining feedback on the usefulness or otherwise of the outputs and exceptions that need to be handled.  It is an iterative process.

It requires the institution of an agile system development and process management approach. Highly skilled personnel must constantly and continuously work together to deliver on the objectives of BI, with few requirements defined upfront and more requirements designed and refined on the go.

Scope of work will have to be “time-boxed” to each cycle of the project and goals and objectives of each time box will have to be specified separately.  Cycles which cannot be completed within the time specified will have to be deferred and included as part of the future development cycles. As a result, traditional project management methodologies will fail.

Since change is the only constant in BI, definition of a change management strategy is an imperative. The strategy must be built around the recognition that Business requirements change, changing BI requirements/queries; resulting in a change in the type of toolsets used and the skillsets that are required by BI stakeholders.

It should be remembered that people are resistant to change, and consumers of BI reports are no exception. They need to be educated about the benefits of the exercise, and the producers of the data must be aligned to ensure data quality is never compromised, by placing appropriate controls in the systems.

Training needs must be studied and documented, and training organized whenever there is a change in any one or more of the components that operationalize the Business Intelligence unit.

The organization must also recognize that reporting requirements and formats will change with every change in the BI requirement. Not all reporting formats can be axed at once, nor can they all change overnight. The change must be planned and initiated based on schedules that have been agreed upon by the different stakeholders.

Existing reports and tools should be retired gradually, and transition periods must be orchestrated carefully and thoughtfully. Training must be organized to transition all stakeholders to the new formats. Organizing workshops and change management seminars must be part and parcel of the Business Intelligence unit’s functioning.

Finally, it must be reiterated that BI is not a project. It is a program.  The solution must dovetail into the existing environment and reinforce the business processes that are in use.  Any data extraction exercise for BI must be done without disturbing the workflow in the organization or impacting the reliability of the information that is being gathered during business operations.

How To: Use the Office 365 API Client Libraries (JavaScript and .NET)


One of the cool things with today’s Office 365 API Tooling update is that you can now access the Office 365 APIs using libraries available for .NET and JavaScript.

 

These libraries make it easier to interact with the REST APIs from the device or platform of your choice. And when I say platform of your choice, I really mean it! The Office 365 API and the client libraries support the following project types in Visual Studio today:

  1. .NET Windows Store Apps
  2. .NET Windows Store Universal Apps
  3. Windows Forms Applications
  4. WPF Applications
  5. ASP.NET MVC Web Applications
  6. ASP.NET Web Forms Applications
  7. Xamarin Android and iOS Applications
  8. Multi-device Hybrid Apps

P.S.: Support for more project types is on the way.

Few Things Before We Get Started

  • The authentication library is released as “alpha”.
    • If you don’t see something you want, or if you think we missed addressing some scenarios/capabilities, let us know!
    • In this initial release of the authentication library, we focused on simplifying the getting-started experience, especially for Office 365 services, and not so much on interoperability across other services that support OAuth; that is something we can look at in future updates to make it more generic.
  • The library is not meant to replace the Active Directory Authentication Library (ADAL); rather, it is a wrapper over it (where it exists) that gives you a focused getting-started experience.
    • However, if you want to opt out and go “DIY”, you still can.

Setting Up Authentication

The first step to accessing Office 365 APIs via the client library is to get authenticated with Office 365.

Once you configure the required Office 365 service and its permissions, the tool will add the required client libraries for authentication and the service into your project.

Let’s quickly look at what authenticating your client looks like.

Getting Authenticated

Office 365 APIs use OAuth Common Consent Framework for authentication and authorization.

Below is the code to authenticate your .NET application:

Authenticator authenticator = new Authenticator();

AuthenticationInfo authInfo =
await authenticator.AuthenticateAsync(ExchangeResourceId);

Below is the JS code snippet used for authentication in Cordova projects:

var authContext = new O365Auth.Context();
authContext.getIdToken('https://outlook.office365.com/')
.then((function (token) {
    var client = new Exchange.Client('https://outlook.office365.com/ews/odata', 
                         token.getAccessTokenFn('https://outlook.office365.com'));
    client.me.calendar.events.getEvents().fetch()
        .then(function (events) {
            // get currentPage of events and logout
            var myevents = events.currentPage;
            authContext.logOut();
        }, function (reason) {
            // handle error
        });
}).bind(this), function (reason) {
    // handle error
});

Authenticator Class

The Authenticator class initializes the key values required for authentication:

1) Office 365 app client Id

2) Redirect URI

3) Authentication URI

You can find these settings in:

– For Web Applications – web.config

– For Windows Store Apps – App.xaml

– For Desktop Applications (Windows Forms & WPF) – AssemblyInfo.cs/.vb

– For Xamarin Applications – AssemblyInfo.cs

If you would like to provide these values at runtime and not from the config files, you can do so by using the alternate constructor:

[Image: the alternate Authenticator constructor]

To authenticate, you call the AuthenticateAsync method by passing the service’s resource Id:

AuthenticationInfo authInfo = await authenticator.AuthenticateAsync(ExchangeResourceId);

If you are using the discovery service, you can specify the capability instead of the resource Id:

AuthenticationInfo authInfo =
await authenticator.AuthenticateAsync("Mail", ServiceIdentifierKind.Capability);

If you use the discovery service, the capability strings for the other services are: Calendar, Contacts and MyFiles.

NOTE:

– For now, if you want to use the discovery service, you will also need to configure a SharePoint resource, either Sites or My Files. This is because the discovery service currently uses SharePoint resource Id.

– Active Directory Graph & Sites do not support discovery service yet

Depending on your client, AuthenticateAsync will open the appropriate window for you to authenticate:

– For web applications, you will be redirected to the login page to authenticate

– For Windows Store Apps, you will get a dialog box to authenticate

– For desktop apps, you will get a dialog window to authenticate


AuthenticationInfo Class

Once successfully authenticated, the method returns an AuthenticationInfo object, which helps you get the required access token:

ExchangeClient client =
new ExchangeClient(new Uri(ExchangeServiceRoot), authInfo.GetAccessToken);

It also helps you re-authenticate for a different resource when you create the service client:

AuthenticationInfo graphAuthInfo =
    await authInfo.ReauthenticateAsync("https://graph.windows.net/");

The library automatically handles token lifetime management by monitoring the expiration time of the access token and performing a refresh automatically.

That’s it! Now you can make subsequent calls to the service to return the items you want.
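
For example, a subsequent call in .NET might look like the sketch below. The query surface shown here (Me.Calendar.Events, ExecuteAsync, CurrentPage, Subject) is an assumption inferred from the JavaScript snippet above; check the generated client library in your project for the exact shape.

ExchangeClient client =
    new ExchangeClient(new Uri(ExchangeServiceRoot), authInfo.GetAccessToken);

// Assumption: the preview client exposes the signed-in user's calendar as
// client.Me.Calendar.Events and pages results through CurrentPage.
var events = await client.Me.Calendar.Events.ExecuteAsync();

foreach (var calendarEvent in events.CurrentPage)
{
    Console.WriteLine(calendarEvent.Subject);
}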

Authentication Library

For .NET projects:

The library is available as a Nuget package. So, if you want to add it manually to your project without the tool, you could do so. However, you will have to manually register an app in the Azure Active Directory to authenticate against AAD.

Microsoft Office 365 Authentication Library for ASP.NET

Microsoft Office 365 Authentication Library for .NET (Android and iOS)

For Cordova projects:

You will need to use the Office 365 API tool which generates the aadgraph.js under the Scripts folder that handles authentication.

How To: Use JavaScript Error Handling to Build More Efficient Windows Store Apps


Believe it or not, sometimes app developers write code that doesn’t work. Or the code works but is terribly inefficient and hogs memory. Worse yet, inefficient code results in a poor UX, driving users crazy and compelling them to uninstall the app and leave bad reviews.

I’m going to explore common performance and efficiency problems you might encounter while building Windows Store apps with JavaScript. In this article, I take a look at best practices for error handling using the Windows Library for JavaScript (WinJS). In a future article, I’ll discuss techniques for doing work without blocking the UI thread, specifically using Web Workers or the new WinJS.Utilities.Scheduler API in WinJS 2.0, as found in Windows 8.1. I’ll also present the new predictable-object lifecycle model in WinJS 2.0, focusing particularly on when and how to dispose of controls.

For each subject area, I focus on three things:

  • Errors or inefficiencies that might arise in a Windows Store app built using JavaScript.
  • Diagnostic tools for finding those errors and inefficiencies.
  • WinJS APIs, features and best practices that can ameliorate specific problems.

I provide some purposefully buggy code but, rest assured, I indicate in the code that something is or isn’t supposed to work.

I use Visual Studio 2013, Windows 8.1 and WinJS 2.0 for these demonstrations. Many of the diagnostic tools I use are provided in Visual Studio 2013. If you haven’t downloaded the most-recent versions of the tools, you can get them from the Windows Dev Center (bit.ly/K8nkk1). New diagnostic tools are released through Visual Studio updates, so be sure to check for updates periodically.

I assume significant familiarity with building Windows Store apps using JavaScript. If you’re relatively new to the platform, I suggest beginning with the basic “Hello World” example (bit.ly/vVbVHC) or, for more of a challenge, the Hilo sample for JavaScript (bit.ly/SgI0AA).

Setting up the Example

First, I create a new project in Visual Studio 2013 using the Navigation App template, which provides a good starting point for a basic multipage app. I also add a NavBar control (bit.ly/14vfvih) to the default.html page at the root of the solution, replacing the AppBar code the template provided. Because I want to demonstrate multiple concepts, diagnostic tools and programming techniques, I’ll add a new page to the app for each demonstration. This makes it much easier for me to navigate between all the test cases.

The complete HTML markup for the NavBar is shown in Figure 1. Copy and paste this code into your solution if you’re following along with the example.

Figure 1 The NavBar Control

<!-- The global navigation bar for the app. -->
<div id="navBar" data-win-control="WinJS.UI.NavBar">
  <div id="navContainer" 
       data-win-control="WinJS.UI.NavBarContainer">
    <div id="homeNav" 
      data-win-control="WinJS.UI.NavBarCommand"
      data-win-options="{
        location: '/pages/home/home.html',
        icon: 'home',
        label: 'Home page'
    }">
    </div>
    <div id="handlingErrors"
      data-win-control="WinJS.UI.NavBarCommand"
      data-win-options="{
        location: '/pages/handlingErrors/handlingErrors.html',
        icon: 'help',
        label: 'Handling errors'
    }">
    </div>
    <div id="chainedAsync"
      data-win-control="WinJS.UI.NavBarCommand"
      data-win-options="{
        location: '/pages/chainedAsync/chainedAsync.html',
        icon: 'link',
        label: 'Chained asynchronous calls'
    }">
    </div>
  </div>
</div>

For more information about building a navigation bar, check out some of the Modern Apps columns by Rachel Appel, such as the one at msdn.microsoft.com/magazine/dn342878.

You can run this project with just the navigation bar, except that clicking any of the navigation buttons will raise an exception in navigator.js. Later in this article, I’ll discuss how to handle errors that come up in navigator.js. For now, remember the app always starts on the homepage and you need to right-click the app to bring up the navigation bar.

Handling Errors

Obviously, the best way to avoid errors is to release apps that don’t raise errors. In a perfect world, every developer would write perfect code that never crashes and never raises an exception. That perfect world doesn’t exist.

As much as users prefer apps that are completely error-free, they are exceptionally good at finding new and creative ways to break apps—ways you never dreamed of. As a result, you need to incorporate robust error handling into your apps.

Errors in Windows Store apps built with JavaScript and HTML act just like errors in normal Web pages. When an error happens in a Document Object Model (DOM) object that allows for error handling (for example, the <script>, <style> or <img> elements), the onerror event for that element is raised. For errors in the JavaScript call stack, the error travels up the chain of calls until caught (in a try/catch block, for instance) or until it reaches the window object, raising the window.onerror event.

WinJS provides several layers of error-handling opportunities for your code in addition to what’s already provided to normal Web pages by the Microsoft Web Platform. At a fundamental level, any error not trapped in a try/catch block or the onError handler applied to a WinJS.Promise object (in a call to the then or done methods, for example) raises the WinJS.Application.onerror event. I’ll examine that shortly.

In practice, you can listen for errors at other levels in addition to Application.onerror. With WinJS and the templates provided by Visual Studio, you can also handle errors at the page-control level and at the navigation level. When an error is raised while the app is navigating to and loading a page, the error triggers the navigation-level error handling, then the page-level error handling, and finally the application-level error handling. You can cancel the error at the navigation level, but any event handlers applied to the page error handler will still be raised.

In this article, I’ll take a look at each layer of error handling, starting with the most important: the Application.onerror event.

Application-Level Error Handling

WinJS provides the WinJS.Application.onerror event (bit.ly/1cOctjC), your app’s most basic line of defense against errors. It picks up all errors caught by window.onerror. It also catches promises that error out and any errors that occur in the process of managing app model events. Although you can apply an event handler to the window.onerror event in your app, you’re better off just using Application.onerror so you have a single queue of events to monitor.

Once the Application.onerror handler catches an error, you need to decide how to address it. There are several options:

  • For critical blocking errors, alert the user with a message dialog. A critical error is one that affects continued operation of the app and might require user input to proceed.
  • For informational and non-blocking errors (such as a failure to sync or obtain online data), alert the user with a flyout or an inline message.
  • For errors that don’t affect the UX, silently swallow the error.
  • In most cases, write the error to a tracelog (especially one that’s hooked up to an analytics engine) so you can acquire customer telemetry. For available analytics SDKs, visit the Windows services directory at services.windowsstore.com and click on Analytics (under “By service type”) in the list on the left.

For this example, I’ll stick with message dialogs. I open up default.js (/js/default.js) and add the code shown in Figure 2 inside the main anonymous function, below the handler for the app.oncheckpoint event.

Figure 2 Adding a Message Dialog

app.onerror = function (err) {
  var message = err.detail.errorMessage ||
    (err.detail.exception && err.detail.exception.message) ||
    "Indeterminate error";
  if (Windows.UI.Popups.MessageDialog) {
    var messageDialog =
      new Windows.UI.Popups.MessageDialog(
        message,
        "Something bad happened ...");
    messageDialog.showAsync();
    return true;
  }
}

In this example, the error event handler shows a message telling the user an error has occurred and what the error is. The event handler returns true to keep the message dialog open until the user dismisses it. (Returning true also informs the WWAHost.exe process that the error has been handled and it can continue.)

Now I’ll create some errors for this code to handle. I’ll create a custom error, throw the error and then catch it in the event handler. For this first example, I add a new folder named handlingErrors to the pages folder. In the folder, I add a new Page Control by right-clicking the project in Solution Explorer and selecting Add | New Item. When I add the handlingErrors Page Control to my project, Visual Studio provides three files in the handlingErrors folder (/pages/handlingErrors): handlingErrors.html, handlingErrors.js and handlingErrors.css.

I open up handlingErrors.html and add this simple markup inside the <section> tag of the body:

<!-- When clicked, this button raises a custom error. -->
<button id="throwError">Throw an error!</button>

Next, I open handlingErrors.js and add an event handler to the button in the ready method of the PageControl object, as shown in Figure 3. I’ve provided the entire PageControl definition in handlingErrors.js for context.

Figure 3 Definition of the handlingErrors PageControl

// For an introduction to the Page Control template, see the following documentation:
// http://go.microsoft.com/fwlink/?LinkId=232511
(function () {
  "use strict";
  WinJS.UI.Pages.define("/pages/handlingErrors/handlingErrors.html", {
    ready: function (element, options) {
      // ERROR: This code raises a custom error.
      throwError.addEventListener("click", function () {
        var newError = new WinJS.ErrorFromName("Custom error", 
          "I'm an error!");
        throw newError;
      });
    },
    unload: function () {
      // Respond to navigations away from this page.
      },
    updateLayout: function (element) {
      // Respond to changes in layout.
    }
  });
})();

Now I press F5 to run the sample, navigate to the handlingErrors page and click the “Throw an error!” button. (If you’re following along, you’ll see a dialog box from Visual Studio informing you an error has been raised. Click Continue to keep the sample running.) A message dialog then pops up with the error, as shown in Figure 4.

Figure 4 The Custom Error Displayed in a Message Dialog

Custom Errors

The Application.onerror event has some expectations about the errors it handles. The best way to create a custom error is to use the WinJS.ErrorFromName object (bit.ly/1gDESJC). The object created exposes a standard interface for error handlers to parse.

To create your own custom error without using the ErrorFromName object, you need to implement a toString method that returns the message of the error.

Otherwise, when your custom error is raised, both the Visual Studio debugger and the message dialog show “[Object object].” They each call the toString method for the object, but because no such method is defined in the immediate object, it goes through the chain of prototype inheritance for a definition of toString. When it reaches the Object primitive type that does have a toString method, it calls that method (which just displays information about the object).

Page-Level Error Handling

The PageControl object in WinJS provides another layer of error handling for an app. WinJS calls the IPageControlMembers.error method when an error occurs while the page is loading. After the page has loaded, however, errors are picked up by the Application.onerror event handler and the page’s error method is ignored.

I’ll add an error method to the PageControl that represents the handleErrors page. The error method writes to the JavaScript console in Visual Studio using WinJS.log. The logging functionality needs to be started up first, so I need to call WinJS.Utilities.startLog before I attempt to use that method. Also note that I check for the existence of the WinJS.log member before I actually call it.

The complete code for handlingErrors.js (/pages/handlingErrors/handlingErrors.js) is shown in Figure 5.

Figure 5 The Complete handlingErrors.js

(function () {
  "use strict";
  WinJS.UI.Pages.define("/pages/handlingErrors/handlingErrors.html", {
    ready: function (element, options) {
      // ERROR: This code raises a custom error.      
      throwError.addEventListener("click", function () {
        var newError = {
          message: "I'm an error!",
          toString: function () {
            return this.message;
          }
        };
        throw newError;
      })
    },
    error: function (err) {
      WinJS.Utilities.startLog({ type: "pageError", tags: "Page" });
      WinJS.log && WinJS.log(err.message, "Page", "pageError");
    },
    unload: function () {
      // TODO: Respond to navigations away from this page.
    },
    updateLayout: function (element) {
      // TODO: Respond to changes in layout.
    }
  });
})();

WinJS.log

The call to WinJS.Utilities.startLog shown in Figure 5 starts the WinJS.log helper function, which writes output to the JavaScript console by default. While this helps greatly during design time for debugging, it doesn’t allow you to capture error data after users have installed the app.

For apps that are ready to be published and deployed, you should consider creating your own implementation of WinJS.log that calls into an analytics engine. This allows you to collect telemetry data about your app’s performance so you can fix unforeseen bugs in future versions of your app. Just make sure customers are aware of the data collection and that you clearly list what data gets collected by the analytics engine in your app’s privacy statement.

Note that when you overwrite WinJS.log in this way, the WinJS.log function will catch all output that would otherwise go to the JavaScript console, including things like status updates from the Scheduler. This is why you need to pass a meaningful name and type value into the call to WinJS.Utilities.startLog so you can filter out any errors you don’t want.

Now I’ll try running the sample and clicking “Throw an error!” again. This results in the exact same behavior as before: Visual Studio picks up the error and then the Application.onerror event fires. The JavaScript console doesn’t show any messages related to the error because the error was raised after the page loaded. Thus, the error was picked up only by the Application.onerror event handler.

So why use the PageControl error handling? Well, it’s particularly helpful for catching and diagnosing errors in WinJS controls that are created declaratively in the HTML. For example, I’ll add the following HTML markup inside the <section> tags of handlingErrors.html (/pages/handlingErrors/handlingErrors.html), below the button:

<!-- ERROR: AppBarCommands must be button elements by default
  unless specified otherwise by the 'type' property. -->
<div data-win-control="WinJS.UI.AppBarCommand"></div>

Now I press F5 to run the sample and navigate to the handlingErrors page. Again, the message dialog appears until dismissed. However, the following message appears in the JavaScript console (you’ll need to switch back to the desktop to check this):

pageError: Page: Invalid argument: For a button, toggle, or flyout command, the element must be null or a button element

Note that the app-level error handling appeared even though I handled the error in the PageControl (which logged the error). So how can I trap an error on a page without having it bubble up to the application?

The best way to trap a page-level error is to add error handling to the navigation code. I’ll demonstrate that next.

Navigation-Level Error Handling

When I ran the previous test where the app.onerror event handler handled the page-level error, the app seemed to stay on the homepage. Yet, for some reason, a Back button control appeared. When I clicked the Back button, it took me to a (disabled) handlingErrors.html page.

This is because the navigation code in navigator.js (/js/navigator.js), which is provided in the Navigation App project template, still attempts to navigate to the page even though the page has fizzled. Furthermore, it navigates back to the homepage and adds the error-prone page to the navigation history. That’s why I see the Back button on the homepage after I’ve attempted to navigate to handlingErrors.html.

To cancel the error in navigator.js, I replace the PageControl­Navigator._navigating function with the code in Figure 6. You see that the navigating function contains a call to WinJS.UI.Pages.render, which returns a Promise object. The render method attempts to create a new PageControl from the URI passed to it and insert it into a host element. Because the resulting PageControl contains an error, the returned promise errors out. To trap the error raised during navigation, I add an error handler to the onError parameter of the then method exposed by that Promise object. This effectively traps the error, preventing it from raising the Application.onerror event.

Figure 6 The PageControlNavigator._navigating Function in navigator.js

// Other PageControlNavigator code ...
// Responds to navigation by adding new pages to the DOM.
_navigating: function (args) {
  var newElement = this._createPageElement();
  this._element.appendChild(newElement);
  this._lastNavigationPromise.cancel();
  var that = this;
  this._lastNavigationPromise = WinJS.Promise.as().then(function () {
    return WinJS.UI.Pages.render(args.detail.location, newElement,
       args.detail.state);
  }).then(function parentElement(control) {
    var oldElement = that.pageElement;
    // Cleanup and remove previous element
    if (oldElement.winControl) {
      if (oldElement.winControl.unload) {
        oldElement.winControl.unload();
      }
      oldElement.winControl.dispose();
    }
    oldElement.parentNode.removeChild(oldElement);
    oldElement.innerText = "";
  },
  // Display any errors raised by a page control,
  // clear the backstack, and cancel the error.
  function (err) {
    var messageDialog =
      new Windows.UI.Popups.MessageDialog(
        err.message,
        "Sorry, can't navigate to that page.");
    messageDialog.showAsync()
    nav.history.backStack.pop();
    return true;
  });
  args.detail.setPromise(this._lastNavigationPromise);
},
// Other PageControlNavigator code ...

Promises in WinJS

Creating promises and chaining promises—and the best practices for doing so—have been covered in many other places, so I’ll skip that discussion in this article. If you need more information, check out the blog post by Kraig Brockschmidt at bit.ly/1cgMAnu or Appendix A in his free e-book, “Programming Windows Store Apps with HTML, CSS, and JavaScript, Second Edition” (bit.ly/1dZwW1k).

Note that it’s entirely proper to modify navigator.js. Although it’s provided by the Visual Studio project template, it’s part of your app’s code and can be modified however you need.

In the _navigating function, I’ve added an error handler to the final promise.then call. The error handler shows a message dialog—as with the application-level error handling—and then cancels the error by returning true. It also removes the page from the navigation history.

When I run the sample again and navigate to handlingErrors.html, I see the message dialog that informs me the navigation attempt has failed. The message dialog from the application-level error handling doesn’t appear.

Tracking Down Errors in Asynchronous Chains

When building apps in JavaScript, I frequently need to follow one asynchronous task with another, which I address by creating promise chains. Chained promises will continue moving along through the tasks, even if one of the promises in the chain returns an error. A best practice is to always end a chain of promises with a call to the done method. The done function throws any errors that would’ve been caught in the error handler for any previous then statements. This means I don’t need to define error functions for each promise in a chain.

Even so, tracking errors down can be difficult in very long chains once they’re trapped in the call to promise.done. Chained promises can include multiple tasks, and any one of them could fail. I could set a breakpoint in every task to see where the error pops up, but that would be terribly time-consuming.

Here’s where Visual Studio 2013 comes to the rescue. The Tasks window (introduced in Visual Studio 2010) has been upgraded to handle asynchronous JavaScript debugging as well. In the Tasks window you can view all active and completed tasks at any given point in your app code.

For this next example, I’ll add a new page to the solution to demonstrate this awesome tool. In the solution, I create a new folder called chainedAsync in the pages folder and then add a new Page Control named chainedAsync.html (which creates /pages/chainedAsync/chainedAsync.html and associated .js and .css files).

In chainedAsync.html, I insert the following markup within the <section> tags:

<!-- ERROR:Clicking this button starts a chain reaction with an error. -->
<p><button id="startChain">Start the error chain</button></p>
<p id="output"></p>

In chainedAsync.js, I add the event handler shown in Figure 7 for the click event of the startChain button to the ready method for the page.

Figure 7 The Contents of the PageControl.ready Function in chainedAsync.js

startChain.addEventListener("click", function () {
  goodPromise().
    then(function () {
        return goodPromise();
    }).
    then(function () {
        return badPromise();
    }).
    then(function () {
        return goodPromise();
    }).
    done(function () {
        // This *shouldn't* get called
    },
      function (err) {
          document.getElementById('output').innerText = err.toString();
    });
});

Last, I define the functions goodPromise and badPromise, shown in Figure 8, within chainedAsync.js so they’re available inside the PageControl’s methods.

Figure 8 The Definitions of the goodPromise and badPromise Functions in chainedAsync.js

function goodPromise() {
  return new WinJS.Promise(function (comp, err, prog) {
    try {
      comp();
    } catch (ex) {
      err(ex)
    }
  });
}
// ERROR: This returns an errored-out promise.
function badPromise() {
  return WinJS.Promise.wrapError("I broke my promise :(");
}

I run the sample again, navigate to the “Chained asynchronous” page, and then click “Start the error chain.” After a short wait, the message “I broke my promise :(” appears below the button.

Now I need to track down where that error occurred and figure out how to fix it. (Obviously, in a contrived situation like this, I know exactly where the error occurred. For learning purposes, I’ll forget that badPromise injected the error into my chained promises.)

To figure out where the chained promises go awry, I’m going to place a breakpoint on the error handler defined in the call to done in the click handler for the startChain button, as shown in Figure 9.

Figure 9 The Position of the Breakpoint in chainedAsync.html

I run the same test again, and when I return to Visual Studio, the program execution has stopped on the breakpoint. Next, I open the Tasks window (Debug | Windows | Tasks) to see what tasks are currently active. The results are shown in Figure 10.

Figure 10 The Tasks Window in Visual Studio 2013 Showing the Error

At first, nothing in this window really stands out as having caused the error. The window lists five tasks, all of which are marked as active. As I take a closer look, however, I see that one of the active tasks is the Scheduler queuing up promise errors—and that looks promising (please excuse the bad pun).

(If you’re wondering about the Scheduler, I encourage you to read the next article in this series, where I’ll discuss the new Scheduler API in WinJS.)

When I hover my mouse over that row (ID 120 in Figure 10) in the Tasks window, I get a targeted view of the call stack for that task. I see several error handlers and, lo and behold, badPromise is near the beginning of that call stack. When I double-click that row, Visual Studio takes me right to the line of code in badPromise that raised the error. In a real-world scenario, I’d now diagnose why badPromise was raising an error.

WinJS provides several levels of error handling in an app, above and beyond the reliable try-catch-finally block. A well-performing app should use an appropriate degree of error handling to provide a smooth experience for users. In this article, I demonstrated how to incorporate app-level, page-level and navigation-level error handling into an app. I also demonstrated how to use some of the new tools in Visual Studio 2013 to track down errors in chained promises.

Microsoft Site Templates Upgraded and Now Available

 

One of the main goals of the application templates is to demonstrate the application-building power of SharePoint and to provide a potential starting point for larger, more robust applications. While these templates are fully functional and usable out of the box, I’ll be happy to reply to your comments and support you as needed.

 

Note: these templates were collected from several resources, and no source code is available for them.

All templates are compatible with SharePoint Server 2010 and SharePoint Foundation 2010.

Case Management

The Case Management application template helps case managers track the status and tasks required to complete their work. When a case is created, standard tasks and documents are created which are modified based on the work each case manager has completed.

Clinical Trial Initiation and Management

For those who work in Academic Medical Centers, the Clinical Trial Initiation and Management application template helps teams manage the process of tracking clinical trial protocols, objective setting, subject selection and budget activities.

Employee Activities Site
The Employee Activities Site application template helps departments, such as HR and Marketing, manage the creation and attendance of events for employees.

Employee Training Scheduling and Materials

The Employee Training Scheduling and Materials application template helps Instructors add new courses and organize course materials. Employees use the site to schedule attendance at a course, track courses they’ve attended and to provide feedback.


Absence Request and Vacation Schedule Management

The Absence Request and Vacation Schedule Management application template helps provider departments manage requests for out-of-office days and provides dashboards showing which users are signed up for a set of responsibilities.

Event Planning

The Event Planning application template helps teams organize events efficiently through the use of online registration, schedules, communication and feedback.

Discussion Database

The Discussion Database application template provides a location where team members can create and reply to discussion topics.

Team Work Site

The Team Work Site application template provides a place where clinical and business teams can upload background documents, track scheduled calendar events and submit action items that result from team meetings.

Document Library and Review

The Document Library and Review application template helps people to manage the review cycle common to processes like publication, knowledge management and project plan development.

Knowledgebase

The Knowledgebase application template helps teams manage the information that is resident within their organization. The template enables team members to upload/create documents using Web-based tools and tag them with relevant identifying information.

Policies and Procedures Solution Accelerator

The Policies and Procedures Solution Accelerator assists healthcare organizations in creating, maintaining, publishing and easily accessing policy and procedure information. It also provides the ability to upload documents, maintain a version history and manage tasks.

Board of Directors

The Board of Directors application template provides a single location for an external group of members to store and locate common documents such as quarterly reviews, shareholder meeting notes and annual strategy documents.

Business Performance Reporting

The Business Performance Reporting application template helps health organization managers track the satisfaction of internal customers/patients through a combination of surveys and discussions.

Request for Proposal

The Request for Proposal application template helps manage the process of creating and releasing an initial RFP, collecting submissions of proposals and formally accepting the selected proposal from amongst those submitted.

Compliance Process Support Site

The Compliance Process Support Site application template helps both teams and executive sponsors to manage compliance implementation endeavors, such as HIPAA.

Expense Reimbursement and Approval

The Expense Reimbursement and Approval application template helps manage elements of the expense approval process, including creation and approval. Users can monitor the status of their reimbursement request through a filtered view listing.

Scorecards Solution Accelerator

The Scorecards solution accelerator acts as a template for configuring a management dashboard to track organizational metrics. It contains four example dashboards ranging from a primary care practice to a healthcare organization’s CEO dashboard.

Call Center
The Call Center application template helps departments manage the process of handling customer service requests. The application template helps teams manage service requests from issue identification to cause analysis and resolution.

Help Desk
The Help Desk application template helps departments manage the process of handling service requests. Team members use the application template to identify a service request, manage identification of the root cause and track solution status.

Physical Asset Tracking and Management

The Physical Asset Tracking and Management application template helps departments, such as Facilities, BioMedical and Surgery, manage requests and the tracking of physical assets.

Inventory Tracking
The Inventory Tracking application template helps organizations track elements associated with inventory, including the creation of inventory records. Users are notified when a part reaches its reorder quantity, and the template helps manage customer and supplier information.

Cafeteria Menu Management

The Cafeteria Menu Management application template helps hospital Food & Nutrition staff easily communicate daily menu choices to hospital staff and visitors. It allows staff to develop/schedule menus and provide related nutritional information.

Budgeting and Tracking Multiple Projects

The Budgeting and Tracking Multiple Projects application template helps project teams track and budget multiple, interrelated sets of activities. It includes management tools such as assignment of new tasks, Gantt charts and common status designators.

Change Request Management
The Change Request Management application template helps users track risks associated with a design change. Team members can submit a change request, notifying stakeholders of the risks involved with the change.

IT Team Workspace

The IT Team Workspace application template helps teams manage the development, deployment and support of software projects. It also includes help desk functionality, allowing team members to guide service requests from initiation to resolution.

Project Tracking Workspace

The Project Tracking Workspace application template helps small team projects manage project information in a single location. The application template provides a place where a team can list and view project issues and tasks.

 

 

 

Design Pattern Automation

Software development projects are becoming bigger and more complex every day. The more complex a project the more likely the cost of developing and maintaining the software will far outweigh the hardware cost.

There’s a super-linear relationship between the size of software and the cost of developing and maintaining it. After all, large and complex software requires good engineers to develop and maintain it and good engineers are hard to come by and expensive to keep around.

 

Despite the high total cost of ownership per line of code, a lot of boilerplate code is still written, much of which could be avoided with smarter compilers. Indeed, most boilerplate code stems from repetitive implementation of design patterns. Some of these design patterns are so well understood that they could be implemented automatically if we could teach them to compilers.

Implementing the Observer pattern

Take, for instance, the Observer pattern. This design pattern was identified as early as 1995 and became the base of the successful Model-View-Controller architecture. Elements of this pattern were implemented in the first versions of Java (1995, Observable interface) and .NET (2001, INotifyPropertyChanged interface). Although the interfaces are a part of the framework, they still need to be implemented manually by developers.

The INotifyPropertyChanged interface simply contains one event named PropertyChanged, which needs to be signaled whenever a property of the object is set to a different value.

Let’s have a look at a simple example in .NET:

public class Person : INotifyPropertyChanged
{
    string firstName, lastName;

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        if (this.PropertyChanged != null)
        {
            this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    public string FirstName
    {
        get { return this.firstName; }
        set
        {
            this.firstName = value;
            this.OnPropertyChanged("FirstName");
            this.OnPropertyChanged("FullName");
        }
    }

    public string LastName
    {
        get { return this.lastName; }
        set
        {
            this.lastName = value;
            this.OnPropertyChanged("LastName");
            this.OnPropertyChanged("FullName");
        }
    }

    public string FullName
    {
        get { return string.Format("{0} {1}", this.firstName, this.lastName); }
    }
}

Properties eventually depend on a set of fields, and we have to raise the PropertyChanged event for a property whenever we change a field that affects it.

Shouldn’t it be possible for the compiler to do this work automatically for us? The long answer is detecting dependencies between fields and properties is a daunting task if we consider all corner cases that can happen: properties can have dependencies on fields of other objects, they can call other methods, or even worse, they can call virtual methods or delegates unknown to the compiler. So, there is no general solution to this problem, at least if we expect compilation times in seconds or minutes and not hours or days. However, in real life, a large share of properties is simple enough to be fully understood by a compiler. So the short answer is, yes, a compiler could generate notification code for more than 90% of all properties in a typical application.

In practice, the same class could be implemented as follows:

[NotifyPropertyChanged]
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string FullName { get { return string.Format("{0} {1}", this.FirstName, this.LastName); } }
}

This code tells the compiler what to do (implement INotifyPropertyChanged) and not how to do it.

Boilerplate Code is an Anti-Pattern

The Observer (INotifyPropertyChanged) pattern is just one example of a pattern that usually causes a lot of boilerplate code in large applications. But a typical code base is full of patterns generating a lot of boilerplate. Even if they are not always recognized as “official” design patterns, they are patterns because they repeat massively across a code base. The most common causes of code repetition are:

  • Tracing, logging
  • Precondition and invariant checking
  • Authorization and audit
  • Locking and thread dispatching
  • Caching
  • Change tracking (for undo/redo)
  • Transaction handling
  • Exception handling

These features are difficult to encapsulate using normal OO techniques, which is why they’re often implemented using boilerplate code. Is that such a bad thing?

Yes.

Addressing cross-cutting concerns using boilerplate code leads to violations of the fundamental principles of good software engineering:

  • The Single Responsibility Principle is violated when multiple concerns are being implemented in the same method, such as Validation, Security, INotifyPropertyChanged, and Undo/Redo in a single property setter.
  • The Open/Closed Principle, which states that software entities should be open for extension, but closed for modification, is best respected when new features can be added without modifying the original source code.
  • The Don’t Repeat Yourself principle abhors code repetition coming out of manual implementation of design patterns.
  • The Loose Coupling principle is infringed when a pattern is implemented manually, because it becomes difficult to alter the implementation of that pattern. Note that coupling can occur not only between two components, but also between a component and a conceptual design. Swapping one library for another is usually easy if they share the same conceptual design, but adopting a different design requires many more modifications of the source code.

Additionally, boilerplate renders your code:

  • Harder to read and reason with when trying to understand what it’s doing to address the functional requirement. This added layer of complexity has a huge bearing on the cost of maintenance considering software maintenance consists of reading code 75% of the time!
  • Larger, which means not only lower productivity, but also higher cost of developing and maintaining the software, not counting a higher risk of introducing bugs.
  • Difficult to refactor and change. Changing a boilerplate (fixing a bug, perhaps) requires changing all the places where the boilerplate code has been applied. How do you even accurately identify where the boilerplate is used throughout your codebase, which potentially spans many solutions and/or repositories? Find-and-replace…?

If left unchecked, boilerplate code has the nasty habit of growing around your code like a vine, taking over more space each time it is applied to a new method, until eventually you end up with a large codebase almost entirely covered by boilerplate code. In one of my previous teams, a simple data access layer class had over a thousand lines of code, of which 90% was boilerplate to handle different types of SQL exceptions and retries.

I hope by now you see why using boilerplate code is a terrible way to implement patterns. It is actually an anti-pattern to be avoided because it leads to unnecessary complexity, bugs, expensive maintenance, loss of productivity and ultimately, higher software cost.

Design Pattern Automation and Compiler Extensions

In so many cases the struggle with making common boilerplate code reusable stems from the lack of native meta-programming support in mainstream statically typed languages such as C# and Java.

The compiler is in possession of an awful lot of information about our code normally outside our reach. Wouldn’t it be nice if we could benefit from this information and write compiler extensions to help with our design patterns?

A smarter compiler would allow for:

  1. Build-time program transformation: to allow us to add features whilst preserving the code semantics and keeping the complexity and number of lines of code in check, so we can automatically implement parts of a design pattern that can be automated;
  2. Static code validation: for build-time safety to ensure we have used the design pattern correctly or to check parts of a pattern that cannot be automated have been implemented according to a set of predefined rules.

Example: ‘using’ and ‘lock’ keywords in C#

If you want proof that design patterns can be supported directly by the compiler, there is no need to look further than the using and lock keywords. At first sight, they are purely redundant in the language. But the designers of the language recognized their importance and created specific keywords for them.

Let’s have a look at the using keyword. The keyword is actually a part of the larger Disposable Pattern, composed of the following participants:

  • Resource Objects are objects consuming any external resource, such as a database connection.
  • Resource Consumers are instruction blocks or objects that consume Resource Objects during a given lifetime.

The Disposable Pattern is ruled by the following principles:

  1. Resource Objects must implement IDisposable.
  2. Implementation of IDisposable.Dispose must be idempotent, i.e. may be safely called several times.
  3. Resource Objects must have a finalizer (called destructor in C++).
  4. Implementation of IDisposable.Dispose must call GC.SuppressFinalize.
  5. Generally, objects that store Resource Objects into their state (field) are also Resource Objects, and children Resource Objects should be disposed by the parent.
  6. Instruction blocks that allocate and consume a Resource Object should be enclosed with the using keyword (unless the reference to the resource is stored in the object state, see previous point).
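
To make these rules concrete, here is a minimal sketch of a Resource Object that follows them. The ManagedBuffer class and its unmanaged allocation are hypothetical, and production code would normally use the fuller Dispose(bool) idiom; only the pattern itself is standard .NET.

using System;
using System.Runtime.InteropServices;

// Hypothetical Resource Object illustrating the rules above.
public sealed class ManagedBuffer : IDisposable
{
    private IntPtr unmanagedMemory = Marshal.AllocHGlobal(1024);
    private bool disposed;

    // Rule 2: Dispose must be idempotent (safe to call several times).
    public void Dispose()
    {
        if (this.disposed) return;
        this.disposed = true;
        Marshal.FreeHGlobal(this.unmanagedMemory);
        GC.SuppressFinalize(this);   // Rule 4.
    }

    // Rule 3: a finalizer acts as a safety net if Dispose is never called.
    ~ManagedBuffer()
    {
        this.Dispose();
    }
}

public static class ResourceConsumer
{
    public static void Main()
    {
        // Rule 6: consumers enclose the resource in a 'using' block,
        // which the C# compiler expands into an equivalent try/finally.
        using (ManagedBuffer buffer = new ManagedBuffer())
        {
            // ... consume the resource ...
        }
    }
}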

As you can see, the Disposable Pattern is richer than it appears at first sight. How is this pattern being automated and enforced?

  • The core .NET library provides the IDisposable interface.
  • The C# compiler provides the using keyword, which automates generation of some source code (a try/finally block).
  • FxCop can enforce a rule that says any disposable class also implements a finalizer, and the Dispose method calls GC.SuppressFinalize.

Therefore, the Disposable Pattern is a perfect example of a design pattern directly supported by the .NET platform.

But what about patterns not intrinsically supported? They can be implemented using a combination of class libraries and compiler extensions. Our next example also comes from Microsoft.

Example: Code Contracts

Checking preconditions (and optionally postconditions and invariants) has long been recognized as a best practice to prevent defects in one component causing symptoms in another component. The idea is:

  • every component (every class, typically) should be designed as a “cell”;
  • every cell is responsible for its own health; therefore,
  • every cell should check any input it receives from other cells.

Precondition checking can be considered a design pattern because it is a repeatable solution to a recurring problem.

Microsoft Code Contracts (http://msdn.microsoft.com/en-us/devlabs/dd491992.aspx) is a perfect example of design pattern automation. Based on plain-old C# or Visual Basic, it gives you an API for expressing validation rules in the form of pre-conditions, post-conditions, and object invariants. However, this API is not just a class library. It translates into build-time transformation and validation of your program.

I won’t delve into too much detail on Code Contracts; simply put, it allows you to specify validation rules in code which can be checked at build time as well as at run time. For example:

public Book GetBookById(Guid id)
{
    Contract.Requires(id != Guid.Empty);
    return Dal.Get<Book>(id);
}

public Author GetAuthorById(Guid id)
{
    Contract.Requires(id != Guid.Empty);
    return Dal.Get<Author>(id);
}

Its binary rewriter can (based on your configurations) rewrite your built assembly and inject additional code to validate the various conditions that you have specified. If you inspect the transformed code generated by the binary rewriter you will see something along the lines of:

public Book GetBookById(Guid id)
{
    if (__ContractsRuntime.insideContractEvaluation <= 4)
    {
        try
        {
            ++__ContractsRuntime.insideContractEvaluation;
            __ContractsRuntime.Requires(id != Guid.Empty, (string)null, "id != Guid.Empty");
        }
        finally
        {
            --__ContractsRuntime.insideContractEvaluation;
        }
    }
    return Dal.Get<Program.Book>(id);
}

public Author GetAuthorById(Guid id)
{
    if (__ContractsRuntime.insideContractEvaluation <= 4)
    {
        try
        {
            ++__ContractsRuntime.insideContractEvaluation;
            __ContractsRuntime.Requires(id != Guid.Empty, (string)null, "id != Guid.Empty");
        }
        finally
        {
            --__ContractsRuntime.insideContractEvaluation;
        }
    }
    return Dal.Get<Program.Author>(id);
}

For more information on Microsoft Code Contracts, please read Jon Skeet’s excellent InfoQ article here (http://www.infoq.com/articles/code-contracts-csharp).

Whilst compiler extensions such as Code Contracts are great, officially supported extensions usually take years to develop, mature, and stabilize. There are so many different domains, each with its own set of problems, it’s impossible for official extensions to cover them all.

What we need is a generic framework to help automate and enforce design patterns in a disciplined way so we are able to tackle domain-specific problems effectively ourselves.

Generic Framework to Automate and Enforce Design Patterns

It may be tempting to see dynamic languages, open compilers (such as Roslyn), or re-compilers (such as Cecil) as solutions because they expose the very details of abstract syntax tree. However, these technologies operate at an excessive level of abstraction, making it very complex to implement any transformation but the simplest ones.

What we need is a high-level framework for compiler extension, based on the following principles:

1. Provide a set of transformation primitives, for instance:

  • intercepting method calls;
  • executing code before and after method execution;
  • intercepting access to fields, properties, or events;
  • introducing interfaces, methods, properties, or events to an existing class.

2. Provide a way to express where primitives should be applied: it’s good to tell the compiler extension you want to intercept some methods, but it’s even better if we know which methods should be intercepted!

3. Primitives must be safely composable

It’s natural to want to be able to apply multiple transformations to the same location(s) in our code, so the framework should give us the ability to compose transformations.

When you’re able to apply multiple transformations simultaneously some transformations might need to occur in a specific order in relation to others. Therefore the ordering of transformations needs to follow a well-defined convention but still allow us to override the default ordering where appropriate.

4. Semantics of enhanced code should not be affected

The transformation mechanism should be unobtrusive and leave the original code unaltered as much as possible whilst at the same time providing capabilities to validate the transformations statically. The framework should not make it too easy to “break” the intent of the source code.

5. Advanced reflection and validation abilities

By definition, a design pattern contains rules defining how it should be implemented. For instance, a locking design pattern may define that instance fields can only be accessed from instance methods of the same object. The framework must offer a mechanism to query the methods accessing a given field, and a way to emit clean build-time errors.

Aspect-Oriented Programming

Aspect-Oriented Programming (AOP) is a programming paradigm that aims to increase modularity by allowing the separation of concerns.

An aspect is a special kind of class containing code transformations (called advice), code matching rules (barbarically called pointcuts), and code validation rules. Design patterns are typically implemented by one or several aspects. There are several ways to apply aspects to code, and they greatly depend on each AOP framework. Custom attributes (annotations in Java) are a convenient way to add aspects to hand-picked elements of code. More complex pointcuts can be expressed declaratively using XML (e.g. Microsoft Policy Injection Application Block) or a Domain-Specific Language (e.g. AspectJ or Spring), or programmatically using reflection (e.g. LINQ over System.Reflection with PostSharp).

The weaving process combines the advice with the original source code at the specified locations (no less barbarically called joinpoints). It has access to metadata about the original source code so, for compiled languages such as C# or Java, the static weaver can perform static analysis to ensure the validity of the advice in relation to the pointcuts where it is applied.

Although aspect-oriented programming and design patterns have been independently conceptualized, AOP is an excellent solution to those who seek to automate design patterns or enforce design rules. Unlike low-level metaprogramming, AOP has been designed according to the principles cited above so anyone, and not only compiler specialists, can implement design patterns.

AOP is a programming paradigm and not a technology. As such, it can be implemented using different approaches. AspectJ, the leading AOP framework for Java, is now implemented directly in the Eclipse Java compiler. In .NET, where compilers are not open-source, AOP is best implemented as a re-compiler, transforming the output of the C# or Visual Basic compiler. The leading tool in .NET is PostSharp (see below). Alternatively, a limited subset of AOP can be achieved using dynamic proxies and service containers, and most dependency injection frameworks are able to offer at least method interception aspects.
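To give a feel for the dynamic-proxy approach, here is a minimal sketch of method interception using .NET’s DispatchProxy (available in recent versions of .NET). The IOrderService interface and the logging behavior are invented for illustration, and this runtime technique covers only a small subset of what a build-time weaver can do.

using System;
using System.Reflection;

public interface IOrderService
{
    void PlaceOrder(string product);
}

public class OrderService : IOrderService
{
    public void PlaceOrder(string product) => Console.WriteLine("Ordering " + product);
}

// A runtime interception "aspect": every call through the proxy is logged
// before and after it is forwarded to the real target.
public class LoggingProxy<T> : DispatchProxy
{
    private T _target;

    public static T Wrap(T target)
    {
        T proxy = Create<T, LoggingProxy<T>>();
        ((LoggingProxy<T>) (object) proxy)._target = target;
        return proxy;
    }

    protected override object Invoke(MethodInfo targetMethod, object[] args)
    {
        Console.WriteLine("Before " + targetMethod.Name);
        object result = targetMethod.Invoke(_target, args);
        Console.WriteLine("After " + targetMethod.Name);
        return result;
    }
}

// Usage:
// IOrderService service = LoggingProxy<IOrderService>.Wrap(new OrderService());
// service.PlaceOrder("book");   // logs before and after the real call

Unlike a static weaver, this approach only works for interface methods invoked through the proxy, and it offers no build-time validation.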

Example: Custom Design Patterns with PostSharp

PostSharp is a development tool for the automation and enforcement of design patterns in Microsoft .NET and features the most complete AOP framework for .NET.

To avoid turning this article into a PostSharp tutorial, let’s take a very simple pattern: dispatching of method execution back and forth between a foreground (UI) thread and a background thread. This pattern can be implemented using two simple aspects: one that dispatches a method to the background thread, and another that dispatches it to the foreground thread. Both aspects can be compiled by the free PostSharp Express. Let’s look at the first aspect: BackgroundThreadAttribute.

The generative part of the pattern is simple: we just need to create a Task that executes that method, and schedule execution of that Task.

using System;
using System.Threading.Tasks;
using PostSharp.Aspects;

[Serializable]
public sealed class BackgroundThreadAttribute : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        // Schedule the intercepted method to run on a thread-pool thread.
        Task.Run(args.Proceed);
    }
}

The MethodInterceptionArgs class contains information about the context in which the method is invoked, such as the arguments and the return value. With this information, you will be able to invoke the original method, cache its return value, log its input arguments, or just about anything that’s required for your use case.
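As a small, hedged illustration (the aspect name and the console output are mine, not part of PostSharp), the following sketch uses the same MethodInterceptionArgs to log entry, exit, and the return value around the original invocation:

using System;
using PostSharp.Aspects;

[Serializable]
public sealed class TraceAttribute : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        Console.WriteLine("Entering " + args.Method.Name);

        // Invoke the original (intercepted) method.
        args.Proceed();

        // args.ReturnValue is null for void methods.
        Console.WriteLine("Exiting " + args.Method.Name + ", returned: " + args.ReturnValue);
    }
}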

For the validation part of the pattern, we would like to avoid having the custom attribute applied to methods that have a return value or a parameter passed by reference. If this happens, we would like to emit a build-time error. Therefore, we have to implement the CompileTimeValidate method in our BackgroundThreadAttribute class:

// Check that the method returns 'void' and has no out/ref parameters.
public override bool CompileTimeValidate( MethodBase method )
{
    MethodInfo methodInfo = (MethodInfo) method;

    if ( methodInfo.ReturnType != typeof(void) ||
         methodInfo.GetParameters().Any( p => p.ParameterType.IsByRef ) )
    {
        // ThreadingMessageSource is a message source declared elsewhere in the project;
        // it emits the build-time error THR006 for the offending method.
        ThreadingMessageSource.Instance.Write( method, SeverityType.Error, "THR006",
                                               method.DeclaringType.Name, method.Name );
        return false;
    }

    return true;
}

The ForegroundThreadAttribute would look similar, using the Dispatcher object in WPF or the BeginInvoke method in WinForms.
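As a rough sketch, assuming a WPF application (so Application.Current.Dispatcher is available) and ignoring the case where we are already on the UI thread, it might look like this:

using System;
using System.Windows;
using PostSharp.Aspects;

[Serializable]
public sealed class ForegroundThreadAttribute : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        // Marshal the intercepted method onto the WPF UI thread.
        Application.Current.Dispatcher.BeginInvoke((Action) args.Proceed);
    }
}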

These aspects can be applied just like any other attribute, for example:

[BackgroundThread]
private void ReadFile(string fileName)
{
    DisplayText( File.ReadAllText(fileName) );
}

[ForegroundThread]
private void DisplayText(string content)
{
    this.textBox.Text = content;
}

The resulting source code is much cleaner than what we would get by directly using tasks and dispatchers.
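To see why, here is a sketch of roughly what the same logic collapses into when written by hand against Task and the WPF Dispatcher (assuming the same WPF context as above; error handling is omitted):

private void ReadFile(string fileName)
{
    // Explicitly schedule the work on a background thread...
    Task.Run(() =>
    {
        string content = File.ReadAllText(fileName);

        // ...then explicitly marshal the result back to the UI thread.
        Application.Current.Dispatcher.BeginInvoke(
            (Action) (() => this.textBox.Text = content));
    });
}

The threading concern is tangled with the business logic, which is exactly the boilerplate the two aspects remove.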

One may argue that C# 5.0 addresses the issue better with the async and await keywords. This is correct, and is a good example of the C# team identifying a recurring problem that they decided to address with a design pattern implemented directly in the compiler and in core class libraries. While the .NET developer community had to wait until 2012 for this solution, PostSharp offered one as early as 2006.
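For instance, the same scenario might be written with async/await roughly as follows (again a sketch; the method name is mine):

private async void ReadFileAsync(string fileName)
{
    // The file is read on a thread-pool thread; the continuation resumes on the UI thread.
    string content = await Task.Run(() => File.ReadAllText(fileName));
    this.textBox.Text = content;
}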

How long must the .NET community wait for solutions to other common design patterns, for instance INotifyPropertyChanged? And what about design patterns that are specific to your company’s application framework?

Smarter compilers would allow you to implement your own design patterns, so you would not have to rely on the compiler vendor to improve the productivity of your team.

Downsides of AOP

I hope by now you are convinced that AOP is a viable solution to automate design patterns and enforce good design, but it’s worth bearing in mind that there are several downsides too:

1. Lack of staff preparation

As a paradigm, AOP is not taught in undergraduate programs, and it is rarely covered at master’s level. This lack of education has contributed to a lack of general awareness of AOP amongst the developer community.

Despite being 20 years old, AOP is often misperceived as a ‘new’ paradigm, which frequently proves to be a stumbling block to adoption for all but the most adventurous development teams.

Design patterns are almost the same age, but the idea that design patterns can be automated and validated is more recent. We cited some meaningful precedents in this article involving the C# compiler, the .NET class library, and Visual Studio Code Analysis (FxCop), but these precedents have not yet turned into a general call for design pattern automation.

2. Surprise factor

Because staff and students alike are not well prepared, there can be an element of surprise when they encounter AOP, because the application has additional behaviors that are not directly visible in the source code. Note that what is surprising is the intended effect of AOP (the compiler doing more than usual), not some unintended side effect.

Surprises can also come from unintended effects, for example when a bug in the use of an aspect (or in a pointcut) causes the transformation to be applied to unexpected classes and methods. Debugging such errors can be difficult, especially if the developer is not aware that aspects are being applied to the project.

These surprise factors can be addressed by:

  • IDE integration, which helps to visualize (a) which additional features have been applied to the source displayed in the editor and (b) to which elements of code a given aspect has been applied. At the time of writing, only two AOP frameworks provide proper IDE integration: AspectJ (with the AJDT plug-in for Eclipse) and PostSharp (for Visual Studio).
  • Unit testing by the developer – aspects, as well as the fact that aspects have been applied properly, must be unit tested like any other source code artifact.
  • Not relying on naming conventions when applying aspects to code, but instead relying on structural properties of the code such as type inheritance or custom attributes. Note that this debate is not unique to AOP: convention-based programming has recently been gaining momentum, although it is also subject to surprises.

3. Politics

Use of design pattern automation is generally a politically sensitive issue because it also addresses separation of concerns within a team. Typically, senior developers will select design patterns and implement aspects, and junior developers will use them. Senior developers will write validation rules to ensure hand-written code respects the architecture. The fact that junior developers don’t need to understand the whole code base is actually the intended effect.

This argument is typically delicate to tackle because it takes the point of view of a senior manager, and may injure the pride of junior developers.

Ready-Made Design Pattern Implementation with PostSharp Pattern Libraries

As we’ve seen with the Disposable Pattern, even seemingly simple design patterns can actually require complex code transformation or validation. Some of these transformations and validations are complex but still possible to implement automatically. Others can be too complex for automatic processing and must be done manually.

Fortunately, there are also simple design patterns (for instance exception handling, transaction handling, and security) that anyone can automate easily with an AOP framework.
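For example, a basic exception-handling aspect takes only a few lines with PostSharp’s OnExceptionAspect; the aspect name and the choice of logging to the console are illustrative only:

using System;
using PostSharp.Aspects;

[Serializable]
public sealed class LogExceptionAttribute : OnExceptionAspect
{
    public override void OnException(MethodExecutionArgs args)
    {
        // Log the exception, then let it propagate as usual.
        Console.Error.WriteLine(args.Method.Name + " failed: " + args.Exception);
        args.FlowBehavior = FlowBehavior.RethrowException;
    }
}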

After many years of market experience, the PostSharp team realized that most customers were implementing the same aspects over and over again, and began to provide highly sophisticated and optimized ready-made implementations of the most common design patterns.

PostSharp currently provides ready-made implementations for the following design patterns:

  • Multithreading: reader-writer-synchronized threading model, actor threading model, thread-exclusive threading model, thread dispatching;
  • Diagnostics: high-performance and detailed logging to a variety of back-ends including NLog and Log4Net;
  • INotifyPropertyChanged: including support for composite properties and dependencies on other objects;
  • Contracts: validation of parameters, fields, and properties.

Now, with ready-made implementations of design patterns, teams can start enjoying the benefits of AOP without learning AOP.
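As an illustration, using the ready-made INotifyPropertyChanged implementation essentially comes down to a single attribute from the PostSharp.Patterns.Model namespace; the view-model below is invented for the example:

using PostSharp.Patterns.Model;

[NotifyPropertyChanged]
public class CustomerViewModel
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    // PropertyChanged is also raised for this composite property
    // whenever FirstName or LastName changes.
    public string FullName
    {
        get { return this.FirstName + " " + this.LastName; }
    }
}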

Summary

So-called high-level languages such as Java and C# still force developers to write code at an inappropriately low level of abstraction. Because of the limitations of mainstream compilers, developers are forced to write a lot of boilerplate code, adding to the cost of developing and maintaining applications. This boilerplate stems from the mass implementation of patterns by hand, in what may be the largest use of copy-paste inheritance in the industry.

The inability to automate design pattern implementation probably costs billions to the software industry, not even counting the opportunity cost of having qualified software engineers spending their time on infrastructure issues instead of adding business value.

However, a large amount of boilerplate could be removed if we had smarter compilers that allowed us to automate the implementation of the most common patterns. Hopefully, future language designers will understand that design patterns are first-class citizens of modern application development and deserve appropriate support in the compiler.

But actually, there is no need to wait for new compilers. They already exist, and are mature. Aspect-oriented programming was specifically designed to address the issue of boilerplate code. Both AspectJ and PostSharp are mature implementations of these concepts, and are used by the largest companies in the world. And both PostSharp and Spring Roo provide ready-made implementations of the most common patterns. As always, early adopters can get productivity gains several years before the masses follow.