Category Archives: C#

How To : Add a Promoted Links Web Part to SharePoint 2013 App Default page

This article helps you add a Promoted Links web part to your app's default page, as shown in the following figure:

 

To do this, follow these steps:
Open the shortcut menu for the project, and then choose Add, New Item

 

In the Templates pane, choose the List template, and then choose the Add button:

Enter a list name, choose the Create a non-customizable list based on an existing list type option button, then, in its list, choose Promoted Links, and then choose the Finish button.

In Solution Explorer, under the list instance node, open the Elements.xml file.
Add the promoted links items as follows:
<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <ListInstance Title="MyPromotedLinks"
                OnQuickLaunch="TRUE"
                TemplateType="170"
                FeatureId="192efa95-e50c-475e-87ab-361cede5dd7f"
                Url="Lists/MyPromotedLinks"
                Description="My List Instance">
    <Data>
      <Rows>
        <Row>
          <Field Name="Title">Twitter</Field>
          <Field Name="BackgroundImageLocation">/PromotedLinksApp/Images/twitter.png</Field>
          <Field Name="Description">Muawiyah Shannak Twitter</Field>
          <Field Name="LinkLocation">https://twitter.com/MuShannak</Field>
          <Field Name="Order"></Field>
        </Row>
        <Row>
          <Field Name="Title"></Field>
          <Field Name="BackgroundImageLocation">/PromotedLinksApp/Images/blogger.png</Field>
          <Field Name="Description">Muawiyah Shannak Blog</Field>
          <Field Name="LinkLocation">http://mushannak.blogspot.com</Field>
          <Field Name="Order"></Field>
        </Row>
        <Row>
          <Field Name="Title">Linkedin</Field>
          <Field Name="BackgroundImageLocation">/PromotedLinksApp/Images/linkedin.png</Field>
          <Field Name="Description">Muawiyah Shannak Linkedin</Field>
          <Field Name="LinkLocation">http://ae.linkedin.com/in/shannak</Field>
          <Field Name="Order"></Field>
        </Row>
      </Rows>
    </Data>
  </ListInstance>
</Elements>
In Solution Explorer, under the Pages node, open the Default.aspx file. Add the following tags inside the PlaceHolderMain placeholder:
<WebPartPages:WebPartZone ID="WebPartZone" runat="server" FrameType="None">
    <WebPartPages:XsltListViewWebPart ID="XsltListViewAppPromotedList"
        runat="server" ListUrl="Lists/MyPromotedLinks" IsIncluded="True"
        NoDefaultStyle="TRUE" Title="Images used in switcher"
        PageType="PAGE_NORMALVIEW" Default="False"
        ViewContentTypeId="0x">
    </WebPartPages:XsltListViewWebPart>
</WebPartPages:WebPartZone>

Deploy the solution and you will find a nice Promoted Links web part on the app's default page!

How To : SharePoint Cross-site Publishing and Free code for Web Part

Cross-site publishing is one of the powerful new capabilities in SharePoint 2013.  It enables the separation of data entry from display and breaks down the container barriers that have traditionally existed in SharePoint (ex: rolling up information across site collections). 


Cross-site publishing is delivered through search and a number of new features, including list/library catalogs, catalog connections, and the content search web part.  Unfortunately, SharePoint Online/Office 365 doesn’t currently support these features.  Until they are added to the service (possibly in a quarterly update), customers will be looking for alternatives to close the gap.  In this post, I will outline several alternatives for delivering cross-site and search-driven content in SharePoint Online and how to template these views for reuse.

I’m a huge proponent of SharePoint Online.  After visiting several Microsoft data centers, I feel confident that Microsoft is better positioned to run SharePoint infrastructure than almost any organization in the world.  SharePoint Online has very close feature parity to SharePoint on-premise, with the primary gaps existing in cross-site publishing and advanced business intelligence.  Although these capabilities have acceptable alternatives in the cloud (as will be outlined in this post), organizations looking to maximize the cloud might consider SharePoint running in IaaS for immediate access to these features.

 

Apps for SharePoint

The new SharePoint app model is fully supported in SharePoint Online and can be used to deliver customizations to SharePoint using any web technology.  New SharePoint APIs can be used with the app model to deliver an experience similar to cross-site publishing.  In fact, the content search web part could be re-written for delivery through the app model as an “App Part” for SharePoint Online. 
Although the app model provides great flexibility and reuse, it does come with some drawbacks.  Because an app part is delivered through a glorified IFRAME, it would be challenging to navigate to a new page from within the app part.  A link within the app would only navigate within the IFRAME (not the parent of the IFRAME).  Secondly, there isn’t a great mechanism for templating a site to automatically leverage an app part on its page(s).  Apps do not work with site templates, so a site that contains an app cannot be saved as a template.  Apps can be “stapled” to sites, but the app installed event (which would be needed to add the app part to a page) only fires when the app is installed into the app catalog.

REST APIs and Script Editor

The script editor web part is a powerful new tool that can help deliver flexible customization into SharePoint Online.  The script editor web part allows a block of client-side script to be added to any wiki or web part page in a site.  Combined with the new SharePoint REST APIs, the script editor web part can deliver mash-ups very similar to cross-site publishing and the content search web part.  Unlike apps for SharePoint, the script editor isn’t constrained by IFRAME containers, app permissions, or templating limitations.  In fact, a well-configured script editor web part could be exported and re-imported into the web part gallery for reuse.

Cross-site publishing leverages “catalogs” for precise querying of specific content.  Any List/Library can be designated as a catalog.  By making this designation, SharePoint will automatically create managed properties for columns of the List/Library and ultimately generate a search result source in sites that consume the catalog.  Although SharePoint Online doesn’t support catalogs, it supports the building blocks, such as managed properties and result sources.  These can be manually configured to provide the same precise querying in SharePoint Online and exploited in the script editor web part for display.
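One practical note: all of the script editor snippets in this post assume jQuery is available on the page, which SharePoint does not provide by default. One option (the CDN URL and version below are just an example, not a requirement) is to add a reference in the same script editor web part:

<script type="text/javascript" src="//ajax.aspnetcdn.com/ajax/jQuery/jquery-1.9.1.min.js"></script>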

Calling Search REST APIs

<div id="divContentContainer"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "search/query?Querytext='ContentType:News'",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>
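The success handler above is left as a placeholder. As a rough sketch of a completed version (assuming jQuery is loaded and that the default Title and Path managed properties are enough for the rendering; the markup is illustrative only), the verbose search response can be walked through data.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results, where each row exposes its managed properties as Key/Value cells. If you have configured a dedicated result source as described earlier, the same query can target it by appending &sourceid='<your result source GUID>' to the URL.

<div id="divContentContainer"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "search/query?Querytext='ContentType:News'",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //walk the rows returned by the search REST API
                var rows = data.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results;
                var ul = $("<ul class='cbs-List'>");
                for (var i = 0; i < rows.length; i++) {
                    //each row carries its managed properties as Key/Value cells
                    var cells = rows[i].Cells.results;
                    var title = "", path = "";
                    for (var j = 0; j < cells.length; j++) {
                        if (cells[j].Key == "Title") { title = cells[j].Value; }
                        if (cells[j].Key == "Path") { path = cells[j].Value; }
                    }
                    ul.append($("<li>").append($("<a>").attr("href", path).text(title)));
                }
                $("#divContentContainer").append(ul);
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });
</script>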

 

An easier approach might be to directly reference a list/library in the REST call of our client-side script.  This wouldn’t require manual search configuration and would provide real-time publishing (no waiting for new items to get indexed).  You could think of this approach similar to a content by query web part across site collections (possibly even farms) and the REST API makes it all possible!

List REST APIs

<div id="divContentContainer"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 0",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>

 

The content search web part uses display templates to render search results in different arrangements (ex: list with images, image carousel, etc.).  There are two types of display templates the content search web part leverages: the control template, which renders the container around the items, and the item template, which renders each individual item in the search results.  This is very similar to the way a Repeater control works in ASP.NET.  Display templates are authored using HTML, but are converted to client-side script automatically by SharePoint for rendering.  I mention this because our approach is very similar: we will leverage a container and then loop through and render items in script.  In fact, all the examples in this post were converted from display templates in a public site I’m working on.

Item display template for content search web part

<!--#_
var encodedId = $htmlEncode(ctx.ClientControl.get_nextUniqueId() + "_ImageTitle_");
var rem = index % 3;
var even = true;
if (rem == 1)
    even = false;

var pictureURL = $getItemValue(ctx, "Picture URL");
var pictureId = encodedId + "picture";
var pictureMarkup = Srch.ContentBySearch.getPictureMarkup(pictureURL, 140, 90, ctx.CurrentItem, "mtcImg140", line1, pictureId);
var pictureLinkId = encodedId + "pictureLink";
var pictureContainerId = encodedId + "pictureContainer";
var dataContainerId = encodedId + "dataContainer";
var dataContainerOverlayId = encodedId + "dataContainerOverlay";
var line1LinkId = encodedId + "line1Link";
var line1Id = encodedId + "line1";
 _#-->
<div style="width: 320px; float: left; display: table; margin-bottom: 10px; margin-top: 5px;">
   <a href="_#= linkURL =#_">
      <div style="float: left; width: 140px; padding-right: 10px;">
         <img src="_#= pictureURL =#_" class="mtcImg140" style="width: 140px;" />
      </div>
      <div style="float: left; width: 170px">
         <div class="mtcProfileHeader mtcProfileHeaderP">_#= line1 =#_</div>
      </div>
   </a>
</div>

 

Script equivalent

<div id="divUnfeaturedNews"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://richdizzcom.sharepoint.com/sites/dallasmtcauth/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 0",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //get the details for each item
                var listData = data.d.results;
                var itemCount = listData.length;
                var processedCount = 0;
                var ul = $("<ul style='list-style-type: none; padding-left: 0px;' class='cbs-List'>");
                for (i = 0; i < listData.length; i++) {
                    $.ajax({
                        url: listData[i].__metadata["uri"] + "/FieldValuesAsHtml",
                        type: "GET",
                        headers: { "Accept": "application/json;odata=verbose" },
                        success: function (data) {
                            processedCount++;
                            var htmlStr = "<li style='display: inline;'><div style='width: 320px; float: left; display: table; margin-bottom: 10px; margin-top: 5px;'>";
                            htmlStr += "<a href='#'>";
                            htmlStr += "<div style='float: left; width: 140px; padding-right: 10px;'>";
                            htmlStr += setImageWidth(data.d.PublishingRollupImage, '140');
                            htmlStr += "</div>";
                            htmlStr += "<div style='float: left; width: 170px'>";
                            htmlStr += "<div class='mtcProfileHeader mtcProfileHeaderP'>" + data.d.Title + "</div>";
                            htmlStr += "</div></a></div></li>";
                            ul.append($(htmlStr))
                            if (processedCount == itemCount) {
                                $("#divUnfeaturedNews").append(ul);
                            }
                        },
                        error: function (data) {
                            alert(data.statusText);
                        }
                    });
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });

    function setImageWidth(imgString, width) {
        var img = $(imgString);
        img.css('width', width);
        return img[0].outerHTML;
    }
</script>

 

Even one of the more complex carousel views from my site took less than 30 minutes to convert to the script editor approach.

Advanced carousel script

<div id="divFeaturedNews">
    <div class="mtc-Slideshow" id="divSlideShow" style="width: 610px;">
        <div style="width: 100%; float: left;">
            <div id="divSlideShowSection">
                <div style="width: 100%;">
                    <div class="mtc-SlideshowItems" id="divSlideShowSectionContainer" style="width: 610px; height: 275px; float: left; border-style: none; overflow: hidden; position: relative;">
                        <div id="divFeaturedNewsItemContainer">
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </div>
</div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://richdizzcom.sharepoint.com/sites/dallasmtcauth/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 1&$top=4",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                var listData = data.d.results;
                for (i = 0; i < listData.length; i++) {
                    getItemDetails(listData, i, listData.length);
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });
    var processCount = 0;
    function getItemDetails(listData, i, count) {
        $.ajax({
            url: listData[i].__metadata["uri"] + "/FieldValuesAsHtml",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                processCount++;
                var itemHtml = "<div class='mtcItems' id='divPic_" + i + "' style='width: 610px; height: 275px; float: left; position: absolute; border-bottom: 1px dotted #ababab; z-index: 1; left: 0px;'>"
                itemHtml += "<div id='container_" + i + "' style='width: 610px; height: 275px; float: left;'>";
                itemHtml += "<a href='#' title='" + data.d.Caption_x005f_x0020_x005f_Title + "' style='width: 610px; height: 275px;'>";
                itemHtml += data.d.Feature_x005f_x0020_x005f_Image;
                itemHtml += "</a></div></div>";
                itemHtml += "<div class='titleContainerClass' id='divTitle_" + i + "' data-originalidx='" + i + "' data-currentidx='" + i + "' style='height: 25px; z-index: 2; position: absolute; background-color: rgba(255, 255, 255, 0.8); cursor: pointer; padding-right: 10px; margin: 0px; padding-left: 10px; margin-top: 4px; color: #000; font-size: 18px;' onclick='changeSlide(this);'>";
                itemHtml += data.d.Caption_x005f_x0020_x005f_Title;
                itemHtml += "<span id='currentSpan_" + i + "' style='display: none; font-size: 16px;'>" + data.d.Caption_x005f_x0020_x005f_Body + "</span></div>";
                $('#divFeaturedNewsItemContainer').append(itemHtml);

                if (processCount == count) {
                    allItemsLoaded();
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    }
    window.mtc_init = function (controlDiv) {
        var slideItems = controlDiv.children;
        for (var i = 0; i < slideItems.length; i++) {
            if (i > 0) {
                slideItems[i].style.left = '610px';
            }
        };
    };

    function allItemsLoaded() {
        var slideshows = document.querySelectorAll(".mtc-SlideshowItems");
        for (var i = 0; i < slideshows.length; i++) {
            mtc_init(slideshows[i].children[0]);
        }

        var div = $('#divTitle_0');
        cssTitle(div, true);
        var top = 160;
        for (i = 1; i < 4; i++) {
            var divx = $('#divTitle_' + i);
            cssTitle(divx, false);
            divx.css('top', top);
            top += 35;
        }
    }

 


    function cssTitle(div, selected) {
        if (selected) {
            div.css('height', 'auto');
            div.css('width', '300px');
            div.css('top', '10px');
            div.css('left', '0px');
            div.css('font-size', '26px');
            div.css('padding-top', '5px');
            div.css('padding-bottom', '5px');
            div.find('span').css('display', 'block');
        }
        else {
            div.css('height', '25px');
            div.css('width', 'auto');
            div.css('left', '0px');
            div.css('font-size', '18px');
            div.css('padding-top', '0px');
            div.css('padding-bottom', '0px');
            div.find('span').css('display', 'none');
        }
    }

    window.changeSlide = function (item) {
        //get all title containers
        var listItems = document.querySelectorAll('.titleContainerClass');
        var currentIndexVals = { 0: null, 1: null, 2: null, 3: null };
        var newIndexVals = { 0: null, 1: null, 2: null, 3: null };

        for (var i = 0; i < listItems.length; i++) {
            //current Index
            currentIndexVals[i] = parseInt(listItems[i].getAttribute('data-currentidx'));
        }

        var selectedIndex = 0; //selected Index will always be 0
        var leftOffset = '';
        var originalSelectedIndex = '';

        var nextSelected = '';
        var originalNextIndex = '';

        if (item == null) {
            var item0 = document.querySelector('[data-currentidx="' + currentIndexVals[0] + '"]');
            originalSelectedIndex = parseInt(item0.getAttribute('data-originalidx'));
            originalNextIndex = originalSelectedIndex + 1;
            nextSelected = currentIndexVals[0] + 1;
        }
        else {
            nextSelected = item.getAttribute('data-currentidx');
            originalNextIndex = item.getAttribute('data-originalidx');
        }

        if (nextSelected == 0) { return; }

        for (i = 0; i < listItems.length; i++) {
            if (currentIndexVals[i] == selectedIndex) {
                //this is the selected item, so move to bottom and animate
                var div = $('[data-currentidx="0"]');
                cssTitle(div, false);
                div.css('left', '-400px');
                div.css('top', '230px');

                newIndexVals[i] = 3;
                var item0 = document.querySelector('[data-currentidx="0"]');
                originalSelectedIndex = item0.getAttribute('data-originalidx');

                //animate
                div.delay(500).animate(
                    { left: '0px' }, 500, function () {
                    });
            }
            else if (currentIndexVals[i] == nextSelected) {
                //this is the NEW selected item, so resize and slide in as selected
                var div = $('[data-currentidx="' + nextSelected + '"]');
                cssTitle(div, true);
                div.css('left', '-610px');

                newIndexVals[i] = 0;

                //animate
                div.delay(500).animate(
                    { left: '0px' }, 500, function () {
                    });
            }
            else {
                //move up in queue
                var curIdx = currentIndexVals[i];
                var div = $('[data-currentidx="' + curIdx + '"]');

                var topStr = div.css('top');
                var topInt = parseInt(topStr.substring(0, topStr.length - 1));

                if (curIdx != 1 && nextSelected == 1 || curIdx > nextSelected) {
                    topInt = topInt - 35;
                    if (curIdx - 1 == 2) { newIndexVals[i] = 2 };
                    if (curIdx - 1 == 1) { newIndexVals[i] = 1 };
                }

                //move up
                div.animate(
                    { top: topInt }, 500, function () {
                    });
            }
        };

        if (originalNextIndex < 0)
            originalNextIndex = itemCount - 1;

        //adjust pictures
        $('#divPic_' + originalNextIndex).css('left', '610px');
        leftOffset = '-610px';

        $('#divPic_' + originalSelectedIndex).animate(
            { left: leftOffset }, 500, function () {
            });

        $('#divPic_' + originalNextIndex).animate(
            { left: '0px' }, 500, function () {
            });

        var item0 = document.querySelector('[data-currentidx="' + currentIndexVals[0] + '"]');
        var item1 = document.querySelector('[data-currentidx="' + currentIndexVals[1] + '"]');
        var item2 = document.querySelector('[data-currentidx="' + currentIndexVals[2] + '"]');
        var item3 = document.querySelector('[data-currentidx="' + currentIndexVals[3] + '"]');
        if (newIndexVals[0] != null) { item0.setAttribute('data-currentidx', newIndexVals[0]) };
        if (newIndexVals[1] != null) { item1.setAttribute('data-currentidx', newIndexVals[1]) };
        if (newIndexVals[2] != null) { item2.setAttribute('data-currentidx', newIndexVals[2]) };
        if (newIndexVals[3] != null) { item3.setAttribute('data-currentidx', newIndexVals[3]) };
    };
</script>

 

End-result of script editors in SharePoint Online

Separate authoring site collection

Final Thoughts

A Look at : ‘Kaizen’ and its Philosophy in Kanban


Kaizen, or rapid improvement processes, often is considered to be the “building block” of all lean production methods. Kaizen focuses on eliminating waste, improving productivity, and achieving sustained continual improvement in targeted activities and processes of an organization.

Lean production is founded on the idea of kaizen – or continual improvement. This philosophy implies that small, incremental changes routinely applied and sustained over a long period result in significant improvements. The kaizen strategy aims to involve workers from multiple functions and levels in the organization in working together to address a problem or improve a process.

The team uses analytical techniques, such as value stream mapping and “the 5 whys”, to identify opportunities quickly to eliminate waste in a targeted process or production area. The team works to implement chosen improvements rapidly (often within 72 hours of initiating the kaizen event), typically focusing on solutions that do not involve large capital outlays.

Periodic follow-up events aim to ensure that the improvements from the kaizen “blitz” are sustained over time. Kaizen can be used as an analytical method for implementing several other lean methods, including conversions to cellular manufacturing and just-in-time production systems.


Method and Implementation Approach

Rapid continual improvement processes typically require an organization to foster a culture where employees are empowered to identify and solve problems. Most organizations implementing kaizen-type improvement processes have established methods and ground rules that are well communicated in the organization and reinforced through training. The basic steps for implementing a kaizen “event” are outlined below, although organizations typically adapt and sequence these activities to work effectively in their unique circumstances.

Phase 1: Planning and Preparation. The first challenge is to identify an appropriate target area for a rapid improvement event. Such areas might include: areas with substantial work-in-progress; an administrative process or production area where significant bottlenecks or delays occur; areas where everything is a “mess” and/or quality or performance does not meet customer expectations; and/or areas that have significant market or financial impact (i.e., the most “value added” activities).

Once a suitable production process, administrative process, or area in a factory is selected, a more specific “waste elimination” problem within that area is chosen for the focus of the kaizen event ( i.e., the specific problem that needs improvement, such as lead time reduction, quality improvement, or production yield improvement). Once the problem area is chosen, managers typically assemble a cross-functional team of employees.

It is important for teams to involve workers from the targeted administrative or production process area, although individuals with “fresh perspectives” may sometimes supplement the team. Team members should all be familiar with the organization’s rapid improvement process or receive training on it prior to the “event”. Kaizen events are generally organized to last between one day and seven days, depending on the scale of the targeted process and problem. Team members are expected to shed most of their operational responsibilities during this period, so that they can focus on the kaizen event.

Phase 2: Implementation. The team first works to develop a clear understanding of the “current state” of the targeted process so that all team members have a similar understanding of the problem they are working to solve. Two techniques are commonly used to define the current state and identify manufacturing wastes:

  • Five Whys. Toyota developed the practice of asking “why” five times and answering it each time to uncover the root cause of a problem. An example, “Repeating ‘Why’ Five Times,” is shown below.
    1. Why did the machine stop?
      There was an overload, and the fuse blew.
    2. Why was there an overload?
      The bearing was not sufficiently lubricated.
    3. Why was it not lubricated sufficiently?
      The lubrication pump was not pumping sufficiently.
    4. Why was it not pumping sufficiently?
      The shaft of the pump was worn and rattling.
    5. Why was the shaft worn out?
      There was no strainer attached, and metal scrap got in.
  • Value Stream Mapping. This technique involves flowcharting the steps, activities, material flows, communications, and other process elements that are involved with a process or transformation (e.g., transformation of raw materials into a finished product, completion of an administrative process). Value stream mapping helps an organization identify the non-value-adding elements in a targeted process. This technique is similar to process mapping, which is frequently used to support pollution prevention planning in organizations. In some cases, value stream mapping can be used in phase 1 to identify areas for which to target kaizen events.

During the kaizen event, it is typically necessary to collect information on the targeted process, such as measurements of overall product quality; scrap rate and source of scrap; a routing of products; total product distance traveled; total square feet occupied by necessary equipment; number and frequency of changeovers; source of bottlenecks; amount of work-in-progress; and amount of staffing for specific tasks. Team members are assigned specific roles for research and analysis. As more information is gathered, team members add detail to value stream maps of the process and conduct time studies of relevant operations (e.g., takt time, lead-time).
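As a simple illustration of one of these time studies, takt time is the available working time for a period divided by the customer demand for that period; the numbers below are purely illustrative:

    Takt time = net available working time / customer demand
              = 27,000 seconds per shift / 450 units per shift
              = 60 seconds per unit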

Once data is gathered, it is analyzed and assessed to find areas for improvement. Team members identify and record all observed waste, by asking what the goal of the process is and whether each step or element adds value towards meeting this goal. Once waste, or non-value added activity, is identified and measured, team members then brainstorm improvement options. Ideas are often tested on the shopfloor or in process “mock-ups”. Ideas deemed most promising are selected and implemented. To fully realize the benefits of the kaizen event, team members should observe and record new cycle times, and calculate overall savings from eliminated waste, operator motion, part conveyance, square footage utilized, and throughput time.

Phase 3: Follow-up. A key part of a kaizen event is the follow-up activity that aims to ensure that improvements are sustained, and not just temporary. Following the kaizen event, team members routinely track key performance measures (i.e., metrics) to document the improvement gains. Metrics often include lead and cycle times, process defect rates, movement required, and square footage utilized, although the metrics vary when the targeted process is an administrative process. Follow-up events are sometimes scheduled at 30 and 90 days following the initial kaizen event to assess performance and identify follow-up modifications that may be necessary to sustain the improvements. As part of this follow-up, personnel involved in the targeted process are tapped for feedback and suggestions. As discussed under the 5S method, visual feedback on process performance is often logged on scoreboards that are visible to all employees.


Implications for Environmental Performance

Potential Benefits:
At its core, kaizen represents a process of continuous improvement that creates a sustained focus on eliminating all forms of waste from a targeted process. The resulting continual improvement culture and process is typically very similar to those sought under environmental management systems (EMS), ISO 14001, and pollution prevention programs. An advantage of kaizen is that it involves workers from multiple functions who may have a role in a given process, and strongly encourages them to participate in waste reduction activities. Workers close to a particular process often have suggestions and insights that can be tapped about ways to improve the process and reduce waste. Organizations have found, however, that it is often difficult to sustain employee involvement and commitment to continual improvement activities (e.g., P2, environmental management) that are not necessarily perceived to be directly related to core operations. In some cases, kaizen may provide a vehicle for engaging broad-based organizational participation in continual improvement activities that target, in part, physical wastes and environmental impacts.
Kaizen can be a powerful tool for uncovering hidden wastes or waste-generating activities and eliminating them.
Kaizen focuses on waste elimination activities that optimize existing processes and that can be accomplished quickly without significant capital investment. This creates a higher likelihood of quick, sustained results.
Potential Shortcomings:
Failure to involve environmental personnel in a quick kaizen event can potentially result in changes that do not satisfy applicable environmental regulatory requirements (e.g., waste handling requirements, permitting requirements). Care should be taken to consult with environmental staff regarding changes made to environmentally sensitive processes.
Failure to incorporate environmental considerations into kaizen can potentially result in solutions that do not consider inherent environmental risk associated with new processes. For example, an organization might select a change in process chemistry that addresses one improvement need (e.g., product quality, process cycle time) but that might be sub-optimal if the organization considered the material hazards or toxicity and the associated chemical and hazardous waste management obligations.
By not explicitly incorporating environmental considerations into kaizen, valuable pollution prevention and sustainability opportunities may be disregarded. For example, an evident opportunity to conserve water resources may not be explored if water use is not expensive and therefore not considered a wasteful expense that needs to be addressed. Similarly, including environmental considerations in the kaizen event goals can lead to solutions that rely less on hazardous materials or that create less hazardous wastes.

Useful Resources

Productivity Press Development Team. Kaizen for the Shopfloor (Portland, Oregon: Productivity Press, 2002).

Soltero, Conrad and Gregory Waldrip. “Using Kaizen to Reduce Waste and Prevent Pollution.” Environmental Quality Management (Spring 2002), 23-37.


How To : Create, Edit and Maintain a Coded UI Test for a Silverlight Application

Using the Microsoft Visual Studio 2013 Coded UI Test plugin for Silverlight, you can create Coded UI Tests or action recordings for Silverlight 5.0 applications.

Using Microsoft Visual Studio 2010 Feature Pack 2, you can create coded UI tests or action recordings for Silverlight 4 applications. Action recordings let you fast forward through steps in a manual test. For more information about action recordings or coded UI tests, see How to: Create an Action Recording or How to: Create a Coded UI Test.

In this walkthrough, you will learn the procedures that are required to test a Silverlight control in a Silverlight-based application. The walkthrough takes you through the following procedures:

Prerequisites

 

For this walkthrough you will need:

To prepare the walkthrough

  1. Verify that you have the Silverlight 4 developer runtime available at Silverlight Developer 4 for Developers.

  2. Verify that you have completed the procedures in Walkthrough: Creating a RIA Services Solution.

    The result will be a simple Silverlight application that uses a Silverlight grid control. Later, you will use the grid control in this walkthrough and perform coded UI tests on it.

  3. For more information about supported and unsupported Silverlight controls, see How to: Set Up Your Silverlight Application for Testing.

  4. With the RIAServicesExample you created in Walkthrough: Creating a RIA Services Solution running, copy the address of the Web application to the clipboard or a notepad file. For example, the address might resemble this: http://localhost: <port number>/RIAServicesExampleTestPage.aspx.

Add the SilverlightUIAutomationHelper.dll to Your Silverlight 4 Project

 

To test your Silverlight applications, you must add Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper.dll as a reference to your Silverlight 4 application so that the Silverlight controls can be identified. This helper assembly instruments your Silverlight application so that information about a control is available to the Silverlight plugin API that is used by your coded UI test or action recording. This assembly cannot be redistributed. Therefore, you must add this reference conditionally when you want to build the application. By taking this approach, the assembly is not redistributed when you deploy your software to a customer.

To add the SilverlightUIAutomationHelper.dll

  1. For each Silverlight project in your solution that you want to test, you must add the SilverlightUIAutomationHelper.dll. In Solution Explorer, right-click the RIAServicesExample project, select Unload Project.

    The project is displayed in Solution Explorer as RIAServicesExample (unavailable).

  2. Right-click the project again and then click Edit RIAServicesExample.csproj.

    The RIAServicesExample.csproj file is opened in the Code Editor. You will see <PropertyGroup> nodes followed by <ItemGroup> nodes. You must make the following two modifications:

    1. To set the production condition, add the following entry to the first <PropertyGroup> node:

       
      <Production Condition="'$(Production)'==''">False</Production>
      
    2. To add the DLL when the build is not a production build, insert the following <Choose> node after the <PropertyGroup> nodes, but before the <ItemGroup> nodes:

       
      <Choose>
         <When Condition=" '$(Production)'=='False' ">
               <ItemGroup>
                 <Reference Include="Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper">
                 </Reference>
               </ItemGroup>
             </When>
       </Choose>
      
  3. To save the file, click Save.

  4. To reload these changes, right-click the server project and then click Reload Project

    Caution

    If you have multiple Silverlight projects that you want to test, you must follow these steps for each project.

    Important

    To remove the SilverlightUIAutomationHelper.dll so that it is not redistributed with your production code, set the production condition value to true in the first <PropertyGroup> node. In this manner, the DLL is no longer added as a reference by the Choose node that you added to the project in the previous procedure. You can also set an environment variable named Production to the value True. Then you can use msbuild to build the Silverlight project and remove the SilverlightUIAutomationHelper.dll.
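    As a minimal illustration (the property can also be supplied directly on the msbuild command line rather than through an environment variable), a production build that leaves out the helper DLL could be produced with:

      msbuild RIAServicesExample.csproj /p:Production=True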

Create a Coded UI Test for RIAServicesExample Silverlight Application

 

To Create a Coded UI Test

  1. In Solution Explorer, right-click the solution, click Add and then select New Project.

    The Add New Project dialog box appears.

  2. In the Installed Templates pane, expand either Visual C# or Visual Basic, and then select Test.

  3. In the middle pane, select the Test Project template.

  4. Click OK.

    In Solution Explorer, the new test project named TestProject1 is added to your solution. Either the UnitTest1.cs or UnitTest1.vb file appears in the Code Editor. You can close the UnitTest1 file because it is not used in this walkthrough.

  5. In Solution Explorer, right-click TestProject1, click Add and then select Coded UI test.

    The Generate Code for Coded UI Test dialog box appears.

  6. Select the Record actions, edit UI map or add assertions option and then click OK.

    The UIMap – Coded UI Test Builder appears.

    For more information about the options in the dialog box, see How to: Create a Coded UI Test.

  7. Click Start Recording on the UIMap – Coded UI Test Builder. In several seconds, the Coded UI Test Builder will be ready.

    Start recording UI

  8. Launch Internet Explorer.

  9. In Internet Explorer’s address bar, enter the address of the Web application that you copied in a previous procedure. For example:

    http://localhost: <port number>/RIAServicesExampleTestPage.aspx

  10. Click one or two of the column headers to sort the data.

  11. Close Internet Explorer.

  12. On the UIMap – Coded UI Test Builder, click Generate Code.

  13. In the Method Name type SimpleSilverlightAppTest and then click Add and Generate. In several seconds, the Coded UI test appears and is added to the Solution.

  14. Close the UIMap – Coded UI Test Builder.

    The CodedUITest1.cs file appears in the Code Editor.

    Note

    You can assign a unique automation property based on the type of Silverlight control in your application. For more information, see Set a Unique Automation Property for Silverlight Controls for Testing.

Run the Coded UI Test on the RIAServicesExample Silverlight Application

 

To run the coded UI test

  • On the Test menu, select Windows and then click Test View. In Test View, select CodedUITestMethod1 under the Test Name column and then click Run Selection in the toolbar.

    The coded UI test should successfully run using the Silverlight data grid control.

How to connect a SharePoint 2013 Document Library to Outlook 2013

 

One of the key methods of driving user adoption of SharePoint is promoting the integration it has with Microsoft Office to information workers. After all, information workers generally use Outlook as their ‘mother-ship’. Getting those users to switch immediately to SharePoint, or asking them to visit a document library in a SharePoint site they need to access, can take time, especially since it means opening a browser, navigating to the site, and covering their beloved Outlook client in the process.

 

The following describes how to connect a typical SharePoint 2013 document library to Outlook 2013 client.

  1. Access your SharePoint site; go into the relevant Documents library. In the below example, I clicked on the default Team Site Documents repository link in the Quick Launch bar, which has around 140 documents.

 

  2. With the document library displayed, go to the Library tab on the ribbon bar; the option we are looking for is among the Library options available there.

  3. When the Library ribbon is displayed, click the Connect To Outlook button in the Connect & Export section. Note: if Connect To Outlook is greyed out, ensure that Outlook 2013 is fully operational. I’ve come across examples where Outlook is installed but no email account has been enabled in Outlook – if that’s the case, this button will be greyed out.

  4. Once the Connect To Outlook button is clicked, you may receive a warning message asking you to confirm that you want to allow SharePoint to connect with Outlook – click ALLOW.

  5. Outlook 2013 will be displayed. A dialog will then ask you to confirm that you wish to connect the document library to Outlook. The dialog shows the site name and document library title, along with the URL of the document library being connected. Below, and to the right, is a button that shows more information about the connection (the ADVANCED button). The following screenshot shows the information displayed if the Advanced button is clicked. There is not much you can do on that screen; for now, click YES to confirm the connection.

Here’s an example of the associated Advanced dialog. The most interesting aspect is the Permissions line. For a document library connection to Outlook, this will be set to READ. This is by design, and for good reason: things like classified metadata, and other document library settings such as check-in/check-out, are not exposed as writeable from Outlook. However, this does not prevent you from modifying a file in the resultant list. If the document needs to be updated, simply double-click the document to open it in the local application, click the edit offline option, make your changes, click save, click close, and a prompt should appear allowing you to update the copy on the server.

Once completed, the documents will be listed in Outlook. The following screenshot shows the result of a Shared Documents library from a SharePoint 2013 site connected to Outlook 2013. Note the following features, which in my view are excellent for user adoption, particularly for those whose centre of the universe happens to be the Outlook client; without going into jargon, try explaining these features to them:

  • Users are able to switch from connected library to connected library using the navigation options; each connected library shows the number of unread items (un-previewed or un-opened documents).
  • Each document (if a previewer is available) displays a preview when clicked, meaning you can read a Word document, for example, without having to open it in the client application.
  • Information concerning the state of the document is displayed, showing the last modifier, whether the document is checked out, when it was last modified, and the document size.

 

Note: there is a problem I have noticed in the preview section when highlighting any file whilst working with SharePoint 2013 and Office 2013 on a sandbox; the message:

‘This file cannot be previewed because of an error with the following previewer: Microsoft xxxxx previewer – To open this file in its own program, double-click it;’.

There is an article that seems to describe the issue (though it does not directly mention when it is likely to occur), and it is known to Microsoft. A description of the alternatives, whilst a fix is being worked on, is provided here: http://support.microsoft.com/kb/983097. I will investigate this further and update this article.

A Look At : SharePoint 2013 Site Templates

SharePoint 2013 offers a vast variety of out-of-the-box site templates. One of the success factors of your SharePoint deployment is choosing the most suitable site template that meets your business needs.

I’ve been asked many times which site template can serve particular required needs and what differs one template from another, so I decided to write a quick overview of all the available SharePoint 2013 site templates and their common uses.

Collaboration Site Templates

  • Team Site – The most common SharePoint site template, mainly used by teams to collaborate, organize, create, and share information and documents.

  • Blog – a site on which a user or group of users write opinions and share information.

  • Developer Site – this site template is focused on Apps for Office development. Developers can build, test and publish their apps here.

  • Project Site – this site template is used for managing and collaborating on a project. Project site coordinates project status and all additional information relevant to the project.

  • Community Site – a site where the community members can explore, discover content and discuss common topics.

 

Enterprise Site Templates

  • Document Center – this site is used to centrally manage documents in your enterprise.

  • eDiscovery Center – this site is used to manage, search and export content for investigation and legal matters.

  • Records Center – this site is used to submit and find important documents that should be stored for long-term archival.

  • Business Intelligence Center – this site is used for providing access to Business Intelligence content in SharePoint.

  • Enterprise Search Center – this site delivers an enterprise search experience.  Users can access the enterprise search center to perform general searches, people searches, conversation or video searches, all in one place. You can easily customize search results pages.

  • My Site Host – this site is used for hosting public profile pages and personal sites. This site becomes available after the User Profile Service Application has been configured.

  • Community Portal – this site is used for discovering new communities across the enterprise.

  • Basic Search Center – this site delivers the basic search experience.

  • Visio Process Repository – this site allows you to share and view Visio process diagrams.

Publishing Site Templates

  • Publishing Portal – this site template is used for internet-facing sites or large intranet portals.

  • Enterprise Wiki – this site is used for publishing knowledge that you want to share across the enterprise.

  • Product Catalog – this site is used for managing product catalogs.

If none of those SharePoint site templates meets your needs, you can always create custom templates.

 

This will be the focus of a future blog post, as I am busy finishing a FREE Custom Knowledge Base Site Template.

Some of the features will include :

  • Creating an ALM web and site template, setup life cycle management and deployment
  • Advanced functionality using Managed Metadata and BCS
  • Document Conversion using Word Automation Services
  • Using the search to build out our feature functionality
  • An Office 365 and SharePoint Online version

 

How To : Develop and Deploy Azure Applications Deep Dive (Part 1)


 


There are many reasons to deploy an application or services onto Azure, the Microsoft cloud services platform. These include reducing operation and hardware costs by paying for just what you use, building applications that are able to scale almost infinitely, enormous storage capacity, geo-location … the list goes on and on.

Yet a platform is intellectually interesting only when developers can actually use it. Developers are the heart and soul of any platform release—the very definition of a successful release is the large number of developers deploying applications and services on it. Microsoft has always focused on providing the best development experience for a range of platforms—whether established or emerging—with Visual Studio, and that continues for cloud computing. Microsoft added direct support for building Azure applications to Visual Studio 2010 and Visual Web Developer 2010 Express.

This article will walk you through using Visual Studio 2010 for the entirety of the Azure application development lifecycle. Note that even if you aren’t a Visual Studio user today, you can still evaluate Azure development for free, using the Azure support in Visual Web Developer 2010 Express.

Creating a Cloud Service

Start Visual Studio 2010, click on the File menu and choose New | Project to bring up the New Project dialog. Under Installed Templates | Visual C# (or Visual Basic), select the Cloud node. This displays an Enable Azure Tools project template that, when clicked, will show you a page with a button to install the Azure Tools for Visual Studio.

Before installing the Azure Tools, be sure to install IIS on your machine. IIS is used by the local development simulation of the cloud. The easiest way to install IIS is by using the Web Platform Installer available at microsoft.com/web. Select the Platform tab and click to include the recommended products in the Web server.

Download and install the Azure Tools and restart Visual Studio. As you’ll see, the Enable Azure Tools project template has been replaced by an Azure Cloud Service project template. Select this template to bring up the New Cloud Service Project dialog shown in Figure 1. This dialog enables you to add roles to a cloud service.

image: Adding Roles to a New Cloud Service Project

Figure 1 Adding Roles to a New Cloud Service Project

An Azure role is an individually scalable component running in the cloud where each instance of a role corresponds to a virtual machine (VM) instance.

There are two types of role:

  • A Web role is a Web application running on IIS. It is accessible via an HTTP or HTTPS endpoint.
  • A Worker role is a background processing application that runs arbitrary .NET code. It also has the ability to expose Internet-facing and internal endpoints.

As a practical example, I can have a Web role in my cloud service that implements a Web site my users can reach via a URL such as http://[somename].cloudapp.net. I can also have a Worker role that processes a set of data used by that Web role.

I can set the number of instances of each role independently, such as three Web role instances and two Worker role instances, and this corresponds to having three VMs in the cloud running my Web role and two VMs in the cloud running my Worker role.
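If you want to inspect this from code, the Microsoft.WindowsAzure.ServiceRuntime assembly (listed with the Web role references below) exposes the role environment. Here is a minimal sketch; the role names and instance counts it prints depend entirely on your own service definition:

    using System.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public static class RoleInfo
    {
        // Traces the identity of the current role instance and the number of
        // instances currently running for each role in the cloud service.
        public static void TraceInstances()
        {
            Trace.WriteLine("Running in instance: " +
                RoleEnvironment.CurrentRoleInstance.Id);

            foreach (var role in RoleEnvironment.Roles.Values)
            {
                Trace.WriteLine(string.Format("Role {0} has {1} instance(s)",
                    role.Name, role.Instances.Count));
            }
        }
    }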

You can use the New Cloud Service Project dialog to create a cloud service with any number of Web and Worker roles and use a different template for each role. You can choose which template to use to create each role. For example, you can create a Web role using the ASP.NET Web Role template, WCF Service Role template, or the ASP.NET MVC Role template.

After adding roles to the cloud service and clicking OK, Visual Studio will create a solution that includes the cloud service project and a project corresponding to each role you added. Figure 2 shows an example cloud service that contains two Web roles and a Worker role.

image: Projects Created for Roles in the Cloud Service

Figure 2 Projects Created for Roles in the Cloud Service

The Web roles are ASP.NET Web application projects with only a couple of differences. WebRole1 contains references to the following assemblies that are not referenced with a standard ASP.NET Web application:

  • Microsoft.WindowsAzure.Diagnostics (diagnostics and logging APIs)
  • Microsoft.WindowsAzure.ServiceRuntime (environment and runtime APIs)
  • Microsoft.WindowsAzure.StorageClient (.NET API to access the Azure storage services for blobs, tables and queues)

The file WebRole.cs contains code to set up logging and diagnostics and a trace listener in the web.config/app.config that allows you to use the standard .NET logging API.
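As a rough sketch of the kind of startup code WebRole.cs contains (the exact generated code varies by SDK version, and the configuration setting name used here is an assumption):

    using Microsoft.WindowsAzure.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Start the diagnostic monitor so trace output and logs can be
            // collected and later transferred to Azure storage.
            DiagnosticMonitor.Start("DiagnosticsConnectionString");

            // Recycle the role instance when its configuration changes.
            RoleEnvironment.Changing += (sender, e) => { e.Cancel = true; };

            return base.OnStart();
        }
    }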

The cloud service project acts as a deployment project, listing which roles are included in the cloud service, along with the definition and configuration files. It provides Azure-specific run, debug and publish functionality.

It is easy to add or remove roles in the cloud service after project creation has completed. To add other roles to this cloud service, right-click on the Roles node in the cloud service and select Add | New Web Role Project or Add | New Worker Role Project. Selecting either of these options brings up the Add New Role dialog where you can choose which project template to use when adding the role.

You can add any ASP.NET Web Role project to the solution by right-clicking on the Roles node, selecting Add | Web Role Project in the solution, and selecting the project to associate as a Web role.

To delete, simply select the role to delete and hit the Delete key. The project can then be removed.

You can also right-click on the roles under the Roles node and select Properties to bring up a Configuration tab for that role (see Figure 3). This Configuration tab makes it easy to add or modify the values in both the ServiceConfiguration.cscfg and ServiceDefinition.csdef files.


Figure 3 Configuring a Role

When developing for Azure, the cloud service project in your solution must be the StartUp project for debugging to work correctly. A project is the StartUp project when it is shown in bold in the Solution Explorer. To set the active project, right-click on the project and select Set as StartUp project.

Data in the Cloud

Now that you have your solution set up for Azure, you can leverage your ASP.NET skills to develop your application.

As you are coding, you’ll want to consider the Azure model for making your application scalable. To handle additional traffic to your application, you increase the number of instances for each role. This means requests will be load-balanced across your roles, and that will affect how you design and implement your application.

In particular, it dictates how you access and store your data. Many familiar data storage and retrieval methods are not scalable, and therefore are not cloud-friendly. For example, storing data on the local file system shouldn’t be used in the cloud because it doesn’t scale.

To take advantage of the scaling nature of the cloud, you need to be aware of the new storage services. Azure Storage provides scalable blob, queue, and table storage services, and Microsoft SQL Azure provides a cloud-based relational database service built on SQL Server technologies. Blobs are used for storage of named files along with metadata. The queue service provides reliable storage and delivery of messages. The table service gives you structured storage, where a table is a set of entities that each contain a set of properties.

To help developers use these services, the Azure SDK ships with a Development Storage service that simulates the way these storage services run in the cloud. That is, developers can write their applications targeting the Development Storage services using the same APIs that target the cloud storage services.
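To make that concrete, here is a minimal sketch that talks to the local Development Storage account through the StorageClient library; the container and queue names are arbitrary examples:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    class StorageSketch
    {
        static void Main()
        {
            // Development Storage simulates the cloud storage services locally;
            // in the cloud you would build the account from a connection string.
            CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;

            // Blob storage: named files plus metadata.
            CloudBlobClient blobClient = account.CreateCloudBlobClient();
            CloudBlobContainer container = blobClient.GetContainerReference("photogallery");
            container.CreateIfNotExist();

            // Queue storage: reliable delivery of messages between roles.
            CloudQueueClient queueClient = account.CreateCloudQueueClient();
            CloudQueue queue = queueClient.GetQueueReference("thumbnailmaker");
            queue.CreateIfNotExist();
            queue.AddMessage(new CloudQueueMessage("photo-name"));
        }
    }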

Debugging

To demonstrate running and debugging on Azure locally, let’s use one of the samples from code.msdn.microsoft.com/windowsazuresamples. This MSDN Code Gallery page contains a number of code samples to help you get started with building scalable Web application and services that run on Azure. Download the samples for Visual Studio 2010, then extract all the files to an accessible location like your Documents folder.

The Development Fabric requires running in elevated mode, so start Visual Studio 2010 as an administrator. Then, navigate to where you extracted the samples and open the Thumbnails solution, a sample service that demonstrates the use of a Web role and a Worker role, as well as the use of the StorageClient library to interact with both the Queue and Blob services.

When you open the solution, you’ll notice three different projects. Thumbnails is the cloud service that associates two roles, Thumbnails_WebRole and Thumbnails_WorkerRole. Thumbnails_WebRole is the Web role project that provides a front-end application to the user to upload photos and adds a work item to the queue. Thumbnails_WorkerRole is the Worker role project that fetches the work item from the queue and creates thumbnails in the designated directory.

Add a breakpoint to the submitButton_Click method in the Default.aspx.cs file. This breakpoint will get hit when the user selects an image and clicks Submit on the page.

 
protected void submitButton_Click(
  object sender, EventArgs e) {
  if (upload.HasFile) {
    var name = string.Format("{0:10}_{1}", DateTime.Now.Ticks,
      Guid.NewGuid());
    GetPhotoGalleryContainer().GetBlockBlobReference(name).UploadFromStream(upload.FileContent);
    // ...the method then adds a message containing the file name to the queue...
  }
}

Now add a breakpoint in the Run method of the worker file, WorkerRole.cs, right after the code that tries to retrieve a message from the queue and checks if one actually exists. This breakpoint will get hit when the Web role puts a message in the queue that is retrieved by the worker.

 
while (true) {
  try {
    CloudQueueMessage msg = queue.GetMessage();
    if (msg != null) {
      string path = msg.AsString;
      // ...retrieve the image blob, create the thumbnail and upload it...
    }
  } catch (StorageClientException) { /* log and keep polling */ }
}

To debug the application, go to the Debug menu and select Start Debugging. Visual Studio will build your project, start the Development Fabric, initialize the Development Storage (if run for the first time), package the deployment, attach to all role instances, and then launch the browser pointing to the Web role (see Figure 4).

image: Running the Thumbnails Sample

Figure 4 Running the Thumbnails Sample

At this point, you’ll see that the browser points to your Web role and that the notifications area of the taskbar shows the Development Fabric has started. The Development Fabric is a simulation environment that runs role instances on your machine in much the way they run in the real cloud.

Right-click on the Azure notification icon in the taskbar and click on Show Development Fabric UI. This will launch the Development Fabric application itself, which allows you to perform various operations on your deployments, such as viewing logs and restarting and deleting deployments (see Figure 5). Notice that the Development Fabric contains a new deployment that hosts one Web role instance and one Worker role instance.

image: The Development Fabric

Figure 5 The Development Fabric

Look at the processes that Visual Studio attached to (Debug/Windows/Processes); you’ll notice there are three: WaWebHost.exe, WaWorkerHost.exe and iexplore.exe.

WaWebHost (Azure Web instance Host) and WaWorkerHost (Azure Worker instance Host) host your Web role and Worker role instances, respectively. In the cloud, each instance is hosted in its own VM, whereas on the local development simulation each role instance is hosted in a separate process and Visual Studio attaches to all of them.

By default, Visual Studio attaches using the managed debugger. If you want to use another one, like the native debugger, pick it from the Properties of the corresponding role project. For Web role projects, the debugger option is located under the project properties Web tab. For Worker role projects, the option is under the project properties Debug tab.

By default, Visual Studio uses the script engine to attach to Internet Explorer. To debug Silverlight applications, you need to enable the Silverlight debugger from the Web role project Properties.

Browse to an image you’d like to upload and click Submit. Visual Studio stops at the breakpoint you set inside the submitButton_Click method, giving you all of the debugging features you’d expect from Visual Studio. Hit F5 to continue; the submitButton_Click method generates a unique name for the file, uploads the image stream to Blob storage, and adds a message on the queue that contains the file name.

Now you will see Visual Studio pause at the breakpoint set in the Worker role, which means the worker received a message from the queue and it is ready to process the image. Again, you have all of the debugging features you would expect.

Hit F5 to continue, the worker will get the file name from the message queue, retrieve the image stream from the Blob service, create a thumbnail image, and upload the new thumbnail image to the Blob service’s thumbnails directory, which will be shown by the Web role.

In the next installment, we will look at the deployment process.

How To : Develop a Single-page Application in SharePoint


A single-page application in SharePoint

This app will be a single-page app and heavily javascript based, taking advantage of ajax and web services. As mentioned earlier, we’re going to base this app on the TodoMVC project. More specifically, we’re going to use the Knockout version of the TodoMVC app. So download the knockout todomvc app from github here and incorporate it into your project as follows:

  • Copy the js and bower_components folders into the Scripts folder. To do this quickly:
    1. Copy the folders in Windows Explorer
    2. In Visual Studio, enable Project -> Show All Files
    3. The folders will now appear in Solution Explorer. Right click bower_components, and select Include In Project. Do the same for the js folder.
  • Copy the contents of index.html into Views/Index.cshtml (discard whatever is there).
  • Open Views/Index.cshtml and edit the script and css references to point to the Scripts folder (eg. find any references to bower_components and change it to Scripts/bower_components, do the same for js)
  • Open the Shared/_Layout.cshtml file and replace its contents with a single call to @RenderBody():

Your solution should now look like the following:

Hit F5 and you should now have a running TodoMVC app!

Data Storage

In previous articles I have described how to use the Azure database for storage of your data. In a provider-hosted app, you can equally as easily store your data in your own database. However, performance issues aside, it’s really handy to store data in the customer’s SharePoint system itself where possible. This has a number of advantages:

  • Security – you don’t need to store customer data in your own data center
  • App interoperability – if you have multiple apps talking to SharePoint, the architecture will be greatly simplified by using SharePoint as the central data store
  • Transparency – customers can see their data transparently in lists and understand better what data is stored and how it’s used
  • Tight SharePoint integration – this is often useful since you can later easily take advantage of features such as workflows and event receivers.

In this article therefore I’ll use SharePoint lists for data storage. Let’s create a list to use for storage of our Todo items. Firstly, right-click on the TodoApp project, and select Add -> New Item. Choose List, and call it TodoList.

Click Add. On the next dialog, leave the list template as Default and click Finish.

Now you’ll be presented with a list designer form. By default it already has a Title column, so just add another column called Completed which should be a Boolean:

In my case, Completed was available as a site column. However, it was hidden by default. To remedy this, open up Schema.xml and ensure the Completed field has Hidden=FALSE:

Open up the TodoListInstance designer again and click the Views tab, and add the Completed column to the default view:

Now we’re going to add the server-side code for retrieving the list items. Firstly, add a class called TodoItemViewModel and give it the relevant properties:

    public class TodoItemViewModel
    {
        public string Title { get; set; }
        public bool Completed { get; set; }
    }
    

Next, open up the HomeController class and change the Index method to load the contents of the TodoList:

    [SharePointContextFilter]
    public ActionResult Index()
    {
        List<TodoItemViewModel> result = new List<TodoItemViewModel>();
        var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
        using (var clientContext = spContext.CreateAppOnlyClientContextForSPAppWeb())
        {
            if(clientContext != null)
            {
                //Load list items
                List list = clientContext.Web.Lists.GetByTitle("TodoList");
                ListItemCollection items = list.GetItems(CamlQuery.CreateAllItemsQuery());
                clientContext.Load(items);
                clientContext.ExecuteQuery();
                //Create the Todo item view models
                result = items.Select(li => new TodoItemViewModel()
                {
                    Title = (string)li["Title"],
                    Completed = (bool)li["Completed"]
                }).ToList();
            }
        }
        return View(result); //Pass the items into the view
    }
    

You may notice we’re using CreateAppOnlyClientContextForSPAppWeb. This means we are accessing the list under the app identity, not the user identity. This is so that we don’t need to require the users themselves to have any special permissions – this should be an important consideration throughout your app, as different resources may only be available to certain users, and this will form a crucial part of your security model.

Now, since we want to access the list under the app identity, we need to enable that by opening AppManifest.xml under TodoApp, click the Permissions tab and check the option:

Client Side Code with Knockout.js

Now we move to the client side. We’re going to do something a bit scary and rewrite the app.js file, which contains all of the Todo app JavaScript we imported. We’re going to rewrite it so that firstly, we understand it, and secondly, we can integrate it with our AJAX methods more easily. You can always go back to study the original later on as it’ll be more advanced and refined.

If you are new to the Knockout way of data-binding in JavaScript, it might be a good time to visit the knockout.js website to familiarise yourself with it. It’s very powerful yet pretty easy to pick up, and it makes writing JavaScript applications incredibly fast and easy.

So open app.js, delete what’s already there, and let’s start by creating a simple ViewModel for each Todo item:

    window.TodoApp = window.TodoApp || {};
    
    window.TodoApp.Todo = function (id, title, completed) {
        var me = this;
        this.id = ko.observable(id);
        this.title = ko.observable(title);
        this.completed = ko.observable(completed || false);
        this.editing = ko.observable(false);

        // edit an item
        this.startEdit = function () {
            me.editing(true);
        };

        // stop editing an item
        this.stopEdit = function (data, event) {
            if (event.keyCode == 13) {
                me.editing(false);
            }
            return true;
        };
    };

    

Next up we add the main ViewModel:

    window.TodoApp.ViewModel = function (spHostUrl) {
        var me = this;
        this.todos = ko.observableArray(); //List of todos
        this.current = ko.observable(); // store the new todo value being entered
        this.showMode = ko.observable('all'); //Current display mode

        //List which is currently displayed
        this.filteredTodos = ko.computed(function () {
            switch (me.showMode()) {
            case 'active':
                return me.todos().filter(function (todo) {
                    return !todo.completed();
                });
            case 'completed':
                return me.todos().filter(function (todo) {
                    return todo.completed();
                });
            default:
                return me.todos();
            }
        });        

        this.addTodo = function (todo) {
            me.todos.push(todo);
        }

        // add a new todo, when enter key is pressed
        this.add = function (data, event) {
            if (event.keyCode == 13) {
                var current = me.current().trim();
                if (current) {        
                    var todo = new window.TodoApp.Todo(0, current);
                    me.addTodo(todo);
                    me.current('');
                }
            }
            return true;
        };

        // remove a single todo
        this.remove = function (todo) {
            me.todos.remove(todo);
        };

        // remove all completed todos
        this.removeCompleted = function () {
            var todos = me.todos().slice(0);
            for (var i = 0; i < todos.length; i++) {
                if (todos[i].completed()) {
                    me.remove(todos[i]);
                }
            }
        };

        // count of all completed todos
        this.completedCount = ko.computed(function () {
            return me.todos().filter(function (todo) {
                return todo.completed();
            }).length;
        });

        // count of todos that are not complete
        this.remainingCount = ko.computed(function () {
            return me.todos().length - me.completedCount();
        });

        // writeable computed observable to handle marking all complete/incomplete
        this.allCompleted = ko.computed({
            //always return true/false based on the done flag of all todos
            read: function () {
                return !me.remainingCount();
            },
            // set all todos to the written value (true/false)
            write: function (newValue) {
                me.todos().forEach(function (todo) {
                    // set even if value is the same, as subscribers are not notified in that case
                    todo.completed(newValue);
                });
            }
        });
    };

Take a read through the above JavaScript – I have tried to simplify it from the original TodoMVC Knockout code so it should be a little more understandable if you’re new to Knockout.

Next we’re going to change the HTML to match our updated JavaScript. Open index.cshtml and replace the entire contents of the <body> tag with the following:

    <section id="todoapp">
        <header id="header">
            <h1>todos</h1>
            <input id="new-todo" data-bind="value: current, valueUpdate: 'afterkeydown', event: { keypress: add }" placeholder="What needs to be done?" autofocus>
        </header>
        <section id="main" data-bind="visible: todos().length">
            <input id="toggle-all" data-bind="checked: allCompleted" type="checkbox">
            <label for="toggle-all">Mark all as complete</label>
            <ul id="todo-list" data-bind="foreach: filteredTodos">
                <li data-bind="css: { completed: completed, editing: editing }">
                    <div class="view">
                        <input class="toggle" data-bind="checked: completed" type="checkbox">
                        <label data-bind="text: title, event: { dblclick: startEdit }"></label>
                        <button class="destroy" data-bind="click: $root.remove"></button>
                    </div>
                    <input class="edit" data-bind="value: title, valueUpdate: 'afterkeydown', event: { keypress: stopEdit }">
                </li>
            </ul>
        </section>
        <footer id="footer" data-bind="visible: completedCount() || remainingCount()">
            <span id="todo-count">
                <strong data-bind="text: remainingCount">0</strong> item(s) left
            </span>
            <ul id="filters">
                <li>
                    <a data-bind="css: { selected: showMode() == 'all' }, click: function(){showMode('all');}">All</a>
                </li>
                <li>
                    <a data-bind="css: { selected: showMode() == 'active' }, click: function(){showMode('active');}">Active</a>
                </li>
                <li>
                    <a data-bind="css: { selected: showMode() == 'completed' }, click: function(){showMode('completed');}">Completed</a>
                </li>
            </ul>
            <button id="clear-completed" data-bind="visible: completedCount, click: removeCompleted">
                Clear completed (<span data-bind="text: completedCount"></span>)
            </button>
        </footer>
    </section>
    <footer id="info">
        <p>Double-click to edit a todo</p>
        <p>Written by <a href="https://github.com/ashish01/knockoutjs-todos">Ashish Sharma</a> and <a href="http://knockmeout.net">Ryan Niemeyer</a></p>
        <p>Part of <a href="http://todomvc.com">TodoMVC</a></p>
    </footer>
    <script src="Scripts/bower_components/todomvc-common/base.js"></script>
    <script src="Scripts/bower_components/knockout.js/knockout.debug.js"></script>
    <script src="Scripts/jquery-1.10.2.js"></script>
    <script src="Scripts/js/app.js"></script>
        
    <script>
        var viewModel = new window.TodoApp.ViewModel(spHostUrl);
        ko.applyBindings(viewModel);
    </script>

Again, this is slightly simplified from the original TodoMVC code.

You should now be able to run the app as before. In the next step we’ll add the client-side code required to load the data back from the server. Update the final script block in index.cshtml to the following:

        
    <script>
        var model = @(Html.Raw(Json.Encode(Model)));
        var viewModel = new window.TodoApp.ViewModel();

        for(var i=0; i<model.length; i++) {
            viewModel.addTodo(new window.TodoApp.Todo(model[i].Id, model[i].Title, model[i].Completed));
        }

        ko.applyBindings(viewModel);
    </script>

    

Now we’re ready to give the app a quick test. Hit F5 and wait for your app to open. Since we don’t yet have a way of saving data, we’re going to add some directly into the SharePoint list. Just browse to the list using the URL /[YourSPsite]/TodoApp/Lists/TodoList/ and you should see the standard list UI:

Add a couple of test items and refresh your app:

Saving Data via AJAX

The final piece of the puzzle is to react to user input and save the data back to the server. When an item is deleted, we’ll need to know the Id of the SharePoint ListItem which should be deleted. Therefore, we’ll add an Id property to the TodoItemViewModel class:

    public class TodoItemViewModel
    {
        public int Id { get; set; } //New!
        public string Title { get; set; }
        public bool Completed { get; set; }
    }
    

Don’t forget to set the Id property when we’re loading the items in the HomeController Index method:
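As a rough sketch, the updated projection inside Index looks like this (it mirrors the GetItems helper shown later in the article):

    //Create the Todo item view models, now including the SharePoint item Id
    result = items.Select(li => new TodoItemViewModel()
    {
        Id = li.Id,
        Title = (string)li["Title"],
        Completed = (bool)li["Completed"]
    }).ToList();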

Now we’ll create web service methods for adding, deleting and updating an item. Add these methods to HomeController:

    [HttpPost]
    public JsonResult AddItem(string title)
    {
        int newItemId = 0;
        var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
        using (var clientContext = spContext.CreateAppOnlyClientContextForSPAppWeb())
        {
            if (clientContext != null)
            {
                List list = clientContext.Web.Lists.GetByTitle("TodoList");
                ListItem listItem = list.AddItem(new ListItemCreationInformation());
                listItem["Title"] = title;
                listItem.Update();
                clientContext.Load(listItem, li => li.Id);
                clientContext.ExecuteQuery();
                newItemId = listItem.Id;
            }
        }
        return Json(newItemId, JsonRequestBehavior.AllowGet);
    }

    [HttpPost]
    public JsonResult RemoveItem(int id)
    {
        var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
        using (var clientContext = spContext.CreateAppOnlyClientContextForSPAppWeb())
        {
            if (clientContext != null)
            {
                List list = clientContext.Web.Lists.GetByTitle("TodoList");
                ListItem item = list.GetItemById(id);
                item.DeleteObject();
                clientContext.ExecuteQuery();
            }
        }
        return new JsonResult();
    }

    [HttpPost]
    public JsonResult UpdateItem(int id, string title, bool completed)
    {
        var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
        using (var clientContext = spContext.CreateAppOnlyClientContextForSPAppWeb())
        {
            if (clientContext != null)
            {
                List list = clientContext.Web.Lists.GetByTitle("TodoList");
                ListItem item = list.GetItemById(id);
                item["Title"] = title;
                item["Completed"] = completed;
                item.Update();
                clientContext.ExecuteQuery();
            }
        }
        return new JsonResult();
    }
    

The above methods use fairly simple CSOM code to add, remove and update a list item. You can accomplish the same goal by using the JavaScript version of the CSOM, which would have the added benefit of calling SharePoint directly without going via the app server. However, I haven’t done this because I want to illustrate how to make AJAX calls between the app client and server. Click here to read more about the JavaScript CSOM library.

Now we’ll add the JavaScript code to call these server methods when an add, update or delete occurs. Earlier you may have noticed that we included a reference to the jQuery script. This is because we’re going to use jQuery’s AJAX helper methods.

Now all we need to do is add code to react to the add, remove and update events, and call the server methods. The first step is to open HomeController.cs and add the SPHostUrl, which is a crucial value for authentication, to the ViewBag:
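A minimal sketch of that change inside the Index action (the same assignment appears later in the GetItems helper):

    //Inside the Index action, after obtaining the SharePoint context:
    var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
    ViewBag.SPHostUrl = spContext.SPHostUrl;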

The purpose of this is so that we can access SPHostUrl on the client, and pass it back to the server during AJAX requests. The authentication cookie will also be passed, and together this forms everything required for authentication to take place.

Next, open up the Index.cshtml file and update the last script block to match the following:

    <script>
        var model = @(Html.Raw(Json.Encode(Model)));
        
        var spHostUrl = '@ViewBag.SPHostUrl'; //<-- Add this line
        var viewModel = new window.TodoApp.ViewModel(spHostUrl); //<-- Pass SPHostUrl into the ViewModel

        for(var i=0; i<model.length; i++) {        
            viewModel.addTodo(new window.TodoApp.Todo(model[i].Id, model[i].Title, model[i].Completed));
        }
        ko.applyBindings(viewModel);
    </script>
    

Next we’ll update the app.js code to call the web services at appropriate times by adding HTTP requests using jQuery’s $.ajax helper. Firstly, update the add function:

    this.add = function (data, event) {
        if (event.keyCode == 13) {
            var current = me.current().trim();
            if (current) {
                var todo = new window.TodoApp.Todo(0, current);
                me.addTodo(todo);
                me.current('');

                $.ajax({
                    type: 'POST',
                    url: "/Home/AddItem?SPHostUrl=" + encodeURIComponent(spHostUrl),
                    contentType: "application/json; charset=utf-8",
                    data: JSON.stringify({
                        title: todo.title()
                    }),
                    dataType: "json",
                    success: function (id) {
                        todo.id(id);
                    }
                });
            }
        }
        return true;
    };
    

Next up is the remove function:

    this.remove = function (todo) {
        me.todos.remove(todo);
        $.ajax({
            type: 'POST',
            url: "/Home/RemoveItem?SPHostUrl=" + encodeURIComponent(spHostUrl),
            contentType: "application/json; charset=utf-8",
            data: JSON.stringify({
                id: todo.id()
            }),
            dataType: "json"
        });
    };
    

Now that additions and deletions are being handled, it only remains to handle updates. We will do this by adding code within the addTodo function that listens for changes to the title or completed properties, and submits a change to the server. The updated addTodo function should look like this:

    this.addTodo = function (todo) {
        me.todos.push(todo);
        var adding = true;
        ko.computed(function () {
            var title = todo.title(),
                completed = todo.completed();
            if (!adding) {
                $.ajax({
                    type: 'POST',
                    url: "/Home/UpdateItem?SPHostUrl=" + encodeURIComponent(spHostUrl),
                    contentType: "application/json; charset=utf-8",
                    data: JSON.stringify({
                        id: todo.id(),
                        title: title,
                        completed: completed
                    }),
                    dataType: "json"
                });
            }
        });
        adding = false;
    }
    

In the code above, we add a Knockout computed property which listens to the title and completed observable properties. If either value changes, we make a call to the UpdateItem web service. Note that we use a flag called adding to ensure we don’t call UpdateItems during the initial item addition.

Run the project again. If all has gone to plan, you should hopefully be able to insert, edit and delete items and have them immediately saved back to SharePoint.

Tidying Up

It’s time to do some tidy up to make sure we’re following MVC conventions correctly. Firstly, we should be using the script loader to ensure the JavaScript is loaded as efficiently as possible. Open up App_Start\BundleConfig.cs and replace its contents with the following:

    public static void RegisterBundles(BundleCollection bundles)
    {
        bundles.Add(new ScriptBundle("~/bundles/scripts").Include(
                    "~/Scripts/bower_components/todomvc-common/base.js",
                    "~/Scripts/bower_components/knockout.js/knockout.debug.js",
                    "~/Scripts/jquery-{version}.js",
                    "~/Scripts/js/app.js"));

        bundles.Add(new StyleBundle("~/Content/css").Include(
                    "~/Scripts/bower_components/todomvc-common/base.css"));
    }

This creates two bundles: a CSS bundle and a JavaScript bundle. The advantage of this is that in production, your scripts will be loaded in a single request, and can be easily minified. Update your Index.cshtml file by removing existing script and style references and replace them by referencing your bundles in the <head> tag:

    <head>
        <meta charset="utf-8">
        <title>Knockout.js TodoMVC</title>
        @Styles.Render("~/Content/css")
        @Scripts.Render("~/bundles/scripts")
    </head>
    

Since your app doesn’t use the About or Contact pages which were included in the default project template, you can remove those:

Also, remove the associated methods within the HomeController class.

Adding an App Part (WebPart)

Adding a web part is really easy! You can create an app part to point to any page in your app, and it simply displays in an iframe inside your SharePoint site. In practice you’ll probably want to create a new page. We’re going to create a page that differs slightly in style from the app.

Now, this view will be identical to the main Index view except for styling. We want to avoid copy-pasting the original view into this one, so we’ll create a shared view that we can use for both.

Start by adding a new view: right-click the Views/Home folder, select Add -> View, and name it IndexCommon:

Now copy the entire contents of the body tag from Index to IndexCommon. Then you can reference your IndexCommon from Index using a call to @Html.Partial. Your Index.cshtml should look as follows:

    <!doctype html>
    <html lang="en" data-framework="knockoutjs">
    <head>
        <meta charset="utf-8">
        <title>Knockout.js TodoMVC</title>
        @Styles.Render("~/Content/css")
        @Scripts.Render("~/bundles/scripts")
    </head>
    <body>
        @Html.Partial("IndexCommon")
    </body>
    </html>
    

Add a new view called IndexTrimmed:

Again, use the same content as Index. This time, however, we’re going to make one change, which is to reference an alternative CSS file. So change the @Styles.Render call to point to the new bundle, @Styles.Render("~/Content/css-trimmed"), which we define next:

Now open App_Start/BundleConfig.cs and add a new bundle:

    
    bundles.Add(new StyleBundle("~/Content/css-trimmed").Include(
                "~/Scripts/bower_components/todomvc-common/base-trimmed.css"));
    

And add the css file by copying base.css to base-trimmed.css:

You can make modifications to this CSS file at this point. I deleted the following rules:

  • #todoapp { margin: 130px 0 40px 0; } (the large title at the top)
  • body { margin: 0 auto; } (this caused page centering)
  • background: #eaeaea url(‘bg.png’); (grey background image)

I added these rules:

  • #info { display:none; } (to hide the info footer)

The next step is to add a Controller method for our new View. Open HomeController and add a method called IndexTrimmed. It should hold the exact same content as Index, so I’ve extracted it to a common method:

    [SharePointContextFilter]
    public ActionResult IndexTrimmed()
    {
        return View(GetItems()); //Pass the items into the view
    }

    [SharePointContextFilter]
    public ActionResult Index()
    {
        return View(GetItems()); //Pass the items into the view
    }

    private List<TodoItemViewModel> GetItems()
    {
        List<TodoItemViewModel> result = new List<TodoItemViewModel>();
        var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
        ViewBag.SPHostUrl = spContext.SPHostUrl;
        using (var clientContext = spContext.CreateAppOnlyClientContextForSPAppWeb())
        {
            if (clientContext != null)
            {
                //Load list items
                List list = clientContext.Web.Lists.GetByTitle("TodoList");
                ListItemCollection items = list.GetItems(CamlQuery.CreateAllItemsQuery());
                clientContext.Load(items);
                clientContext.ExecuteQuery();
                //Create the Todo item view models
                result = items.ToArray().Select(li => new TodoItemViewModel()
                {
                    Id = (int)li.Id,
                    Title = (string)li["Title"],
                    Completed = (bool)li["Completed"]
                }).ToList();
            }
        }
        return result;
    }

Now we’re ready to add the Web Part. Right click on TodoApp in solution explorer, and select Add -> New Item. Choose Client Web Part:

Give it a name and click Next. We want to use our new page, so enter its URL:

Open the TodoApp/TodoWebPart/Elements.xml file and change the default width from 300px to 600px:

Now run your app again. Make sure the entire app reinstalls, since the app part is installed to the SharePoint server itself. Add the app part by visiting your SharePoint site and clicking Page, then Edit:

Then select Insert -> App Part, and choose the TodoWebPart from the list. Click Add.

Click Save to save your changes to the page, and your web part should be shown:

Making Your Web Part Resize Dynamically

Now, you’ll notice that the Web Part isn’t resizing correctly when you add items to the list. This is because as the app itself gets larger, the App Part doesn’t get larger – it is after all an iframe with a fixed height. This isn’t a problem for apps that don’t resize, for example forms or fixed-length lists. However, we want our app to resize as though it were a normal part of the main website flow.

Luckily, there’s a workaround for this involving posting a message to the iframe and asking it to resize.

Add the following code to your HomeController, in the IndexTrimmed method:

    public ActionResult IndexTrimmed()
    {
        ViewBag.SenderId = HttpContext.Request.Params["SenderId"];
        return View(GetItems());
    }
    

The purpose of the above code is to retrieve the SenderId parameter from the URL, and make it available (via the ViewBag) to the client. This SenderId parameter represents the ID of the iframe which is hosting the app part.

Next, we’ll use the Sender ID in the client to request a resize of the iframe whenever the app resizes – more specifically, whenever an item is added to or removed from the list. This script should be added to the end of the body in IndexTrimmed.cshtml:

    //Retrieve the sender ID from the viewbag
    var senderId = '@ViewBag.SenderId';

    //Create a function that will change the height of the iframe
    function setHeight() {
        var width = $('#todoapp').outerWidth(true),
            height = $('#todoapp').outerHeight(true);

        //Notify SP that it should resize its iframe to the appropriate height.
        window.parent.postMessage('<message senderId=' + senderId + '>resize(' + width + ', ' + (height + 50) + ')</message>', "*");
    }

    //Listen for changes to the todo list, and call setHeight when that happens
    viewModel.todos.subscribe(setHeight);

    //Set the correct initial height when the app first loads.
    setHeight();
    

For the avoidance of doubt, here’s where this script should go:

The script is intentionally placed below the call to load the IndexCommon partial view, because then it will have access to the JavaScript viewModel object.

Packaging your app

With autohosted apps, the process is simpler: you just right-click on your app and select Publish. From there, you click Package and you are given a .app file. This app file contains everything involved: both the SharePoint app and the remote app (your Web project). It’s super simple because the autohosting process takes care of creating a Client ID and Client Secret (for OAuth authentication), and also takes care of physical hosting.

With Provider-hosted apps, things are more complex, and the steps you follow depend on how you want to host your web project. Your first step will be to visit this page and create a Client ID and Client Secret by registering your app. Once you’ve done that, you can right-click your app project and click Package. Then follow the publish wizard which will involve entering your Client ID and Client Secret.

Note that with your provider hosted app, you will deploy your Web project separately from your App project. This is obvious really: you’re going to be hosting your Web project yourself (you’re the provider, after all), and the small app project will be packaged up and listed on the Office Store or sent manually to a customer. Once they install it, it’ll just point to your web server where your web app is installed. To configure this properly, open AppManifest.xml in the App project, and enter the URL to where your app is installed:

How To : Using the Proxy Pattern to implement Code Access Security

There are many ways to secure different parts of your application. The security of running code in .NET revolves around the concept of Code Access Security (CAS).

CAS

CAS determines the trustworthiness of an assembly based upon its origin and the characteristics of the assembly itself, such as its hash value. For example, code installed locally on the machine is more trusted than code downloaded from the Internet. The runtime will also validate an assembly’s metadata and type safety before that code is allowed to run.

 

There are many ways to write secure code and protect data using the .NET Framework. In this article, we explore such things as controlling access to types, encryption and decryption, random numbers, securely storing data, and using programmatic and declarative security.
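As a quick illustration of the programmatic and declarative styles mentioned above (the file path is purely an example, not part of the recipe that follows):

    using System.Security.Permissions;

    public class SecureFileReader
    {
        // Declarative: the demand is expressed as an attribute and is enforced
        // by the runtime before the method body executes.
        [FileIOPermission(SecurityAction.Demand, Read = @"C:\Data\")]
        public void ReadDeclaratively()
        {
            // ...read the file...
        }

        // Programmatic: the same demand expressed in code, which lets you
        // build the permission dynamically at run time.
        public void ReadProgrammatically(string path)
        {
            new FileIOPermission(FileIOPermissionAccess.Read, path).Demand();
            // ...read the file...
        }
    }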

 

Controlling Access to Types in a Local Assembly

 

Problem

You have an existing class that contains sensitive data, and you do not want clients to have direct access to any objects of this class. Instead, you want an intermediary object to talk to the clients and to allow access to sensitive data based on the client’s credentials. What’s more, you would also like to have specific queries and modifications to the sensitive data tracked, so that if an attacker manages to access the object, you will have a log of what the attacker was attempting to do.

Solution

Use the proxy design pattern to allow clients to talk directly to a proxy object. This proxy object will act as gatekeeper to the class that contains the sensitive data. To keep malicious users from accessing the class itself, make it private, which will at least keep code without ReflectionPermissionFlag.MemberAccess access (which is currently granted only in fully trusted code scenarios, such as executing code interactively on a local machine) from getting at it.

The namespaces we will be using are:

 
 
using System;
using System.IO;
using System.Security;
using System.Security.Permissions;
using System.Security.Principal;

Let’s start this design by creating an interface, as shown in Example 17-1, “ICompanyData interface”, that will be common to both the proxy objects and the object that contains sensitive data.

Example 17-1. ICompanyData interface

 
 
internal interface ICompanyData
{
    string AdminUserName
    {
        get;
        set;
    }

    string AdminPwd
    {
        get;
        set;
    }
    string CEOPhoneNumExt
    {
        get;
        set;
    }
    void RefreshData();
    void SaveNewData();
}

The CompanyData class shown in Example 17-2, “CompanyData class” is the underlying object that is “expensive” to create.

Example 17-2. CompanyData class

 
 
internal class CompanyData : ICompanyData
{
    public CompanyData()
    {
        Console.WriteLine("[CONCRETE] CompanyData Created");
        // Perform expensive initialization here.
        this.AdminUserName = "admin";
        this.AdminPwd = "password";
        this.CEOPhoneNumExt = "0000";
    }

    public string AdminUserName
    {
        get;
        set;
    }

    public string AdminPwd
    {
        get;
        set;
    }

    public string CEOPhoneNumExt
    {
        get;
        set;
    }

    public void RefreshData()
    {
        Console.WriteLine("[CONCRETE] Data Refreshed");
    }

    public void SaveNewData()
    {
        Console.WriteLine("[CONCRETE] Data Saved");
    }
}

The code shown in Example 17-3, “CompanyDataSecProxy security proxy class” for the security proxy class checks the caller’s permissions to determine whether the CompanyData object should be created and its methods or properties called.

Example 17-3. CompanyDataSecProxy security proxy class

 
 
public class CompanyDataSecProxy : ICompanyData
{
    public CompanyDataSecProxy()
    {
        Console.WriteLine("[SECPROXY] Created");

        // Must set principal policy first.
        AppDomain.CurrentDomain.SetPrincipalPolicy(PrincipalPolicy.WindowsPrincipal);
    }

    private ICompanyData coData = null;
    private PrincipalPermission admPerm =
        new PrincipalPermission(null, @"BUILTIN\Administrators", true);
    private PrincipalPermission guestPerm =
        new PrincipalPermission(null, @"BUILTIN\Guest", true);
    private PrincipalPermission powerPerm =
        new PrincipalPermission(null, @"BUILTIN\PowerUser", true);
    private PrincipalPermission userPerm =
        new PrincipalPermission(null, @"BUILTIN\User", true);

    public string AdminUserName
    {
        get
        {
            string userName = ";
            try
            {
                admPerm.Demand();
                Startup();
                userName = coData.AdminUserName;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("AdminUserName_get failed! {0}",e.ToString());
            }
            return (userName);
        }
        set
        {
            try
            {
                admPerm.Demand();
                Startup();
                coData.AdminUserName = value;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("AdminUserName_set failed! {0}",e.ToString());
            }
        }
    }

    public string AdminPwd
    {
        get
        {
            string pwd = ";
            try
            {
                admPerm.Demand();
                Startup();
                pwd = coData.AdminPwd;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("AdminPwd_get Failed! {0}",e.ToString());
            }

            return (pwd);
        }
        set
        {
            try
            {
                admPerm.Demand();
                Startup();
                coData.AdminPwd = value;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("AdminPwd_set Failed! {0}",e.ToString());
            }
        }
    }

    public string CEOPhoneNumExt
    {
        get
        {
            string ceoPhoneNum = "";
            try
            {
                admPerm.Union(powerPerm).Demand();
                Startup();
                ceoPhoneNum = coData.CEOPhoneNumExt;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("CEOPhoneNum_set Failed! {0}",e.ToString());
            }
            return (ceoPhoneNum);
        }
        set
        {
            try
            {
                admPerm.Demand();
                Startup();
                coData.CEOPhoneNumExt = value;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("CEOPhoneNum_set Failed! {0}",e.ToString());
            }
        }
    }
    public void RefreshData()
    {
        try
        {
            admPerm.Union(powerPerm.Union(userPerm)).Demand();
            Startup();
            Console.WriteLine("[SECPROXY] Data Refreshed");
            coData.RefreshData();
        }
        catch(SecurityException e)
        {
            Console.WriteLine("RefreshData Failed! {0}",e.ToString());
        }
    }

    public void SaveNewData()
    {
        try
        {
            admPerm.Union(powerPerm).Demand();
            Startup();
            Console.WriteLine("[SECPROXY] Data Saved");
            coData.SaveNewData();
        }
        catch(SecurityException e)
        {
            Console.WriteLine("SaveNewData Failed! {0}",e.ToString());
        }
    }

    // DO NOT forget to use [#define DOTRACE] to control the tracing proxy.
    private void Startup()
    {
        if (coData == null)
        {
#if (DOTRACE)
            coData = new CompanyDataTraceProxy();
#else
            coData = new CompanyData();
#endif
            Console.WriteLine("[SECPROXY] Refresh Data");
            coData.RefreshData();
        }
    }
}

When creating the PrincipalPermission objects as part of the proxy's construction, you are using string representations of the built-in groups (such as "BUILTIN\Administrators") to set up the principal role. However, the names of these groups may differ depending on the locale the code runs under. Checking roles through the WindowsBuiltInRole enumeration (for example, WindowsBuiltInRole.Administrator) sidesteps that localization issue, because the enumeration values represent the roles independently of their localized names. Plain text is used here to make it clear exactly which groups are being demanded.
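If you would rather avoid locale-specific group names altogether, a minimal sketch of that alternative (not part of the recipe above) is to check membership through WindowsPrincipal and WindowsBuiltInRole and throw a SecurityException yourself:

using System;
using System.Security;
using System.Security.Principal;

internal static class RoleCheck
{
    // Demands the Administrators role without relying on a localized group name.
    public static void DemandAdministrator()
    {
        WindowsPrincipal principal = new WindowsPrincipal(WindowsIdentity.GetCurrent());

        if (!principal.IsInRole(WindowsBuiltInRole.Administrator))
        {
            throw new SecurityException("The caller is not a member of the Administrators group.");
        }
    }
}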

If the call to the CompanyData object passes through the CompanyDataSecProxy, then the user has permissions to access the underlying data. Any access to this data may be logged, so the administrator can check for any attempt to hack the CompanyData object. The code shown in Example 17-4, “CompanyDataTraceProxy tracing proxy class” is the tracing proxy used to log access to the various method and property access points in the CompanyData object (note that the CompanyDataSecProxy contains the code to turn this proxy object on or off).

Example 17-4. CompanyDataTraceProxy tracing proxy class

 
 
public class CompanyDataTraceProxy : ICompanyData
{    
    public CompanyDataTraceProxy() 
    {
        Console.WriteLine("[TRACEPROXY] Created");
        string path = Path.GetTempPath() + @"\CompanyAccessTraceFile.txt";
        fileStream = new FileStream(path, FileMode.Append,
            FileAccess.Write, FileShare.None);
        traceWriter = new StreamWriter(fileStream);
        coData = new CompanyData();
    }

    private ICompanyData coData = null;
    private FileStream fileStream = null;
    private StreamWriter traceWriter = null;

    public string AdminPwd
    {
        get
        {
            traceWriter.WriteLine("AdminPwd read by user.");
            traceWriter.Flush();
            return (coData.AdminPwd);
        }
        set
        {
            traceWriter.WriteLine("AdminPwd written by user.");
            traceWriter.Flush();
            coData.AdminPwd = value;
        }
    }

    public string AdminUserName
    {
        get
        {
            traceWriter.WriteLine("AdminUserName read by user.");
            traceWriter.Flush();
            return (coData.AdminUserName);
        }
        set
        {
            traceWriter.WriteLine("AdminUserName written by user.");
            traceWriter.Flush(); 
            coData.AdminUserName = value;
        }
    }

    public string CEOPhoneNumExt
    {
        get
        {
            traceWriter.WriteLine("CEOPhoneNumExt read by user.");
            traceWriter.Flush();
            return (coData.CEOPhoneNumExt);
        }
        set
        {
            traceWriter.WriteLine("CEOPhoneNumExt written by user.");
            traceWriter.Flush();
            coData.CEOPhoneNumExt = value;
        }
    }

    public void RefreshData()
    {
        Console.WriteLine("[TRACEPROXY] Refresh Data");
        coData.RefreshData();
    }

    public void SaveNewData()
    {
        Console.WriteLine("[TRACEPROXY] Save Data");
        coData.SaveNewData();
    }
}

The proxy is used in the following manner:

 
 
   // Create the security proxy here.
    CompanyDataSecProxy companyDataSecProxy = new CompanyDataSecProxy( );

    // Read some data.
    Console.WriteLine("CEOPhoneNumExt: " + companyDataSecProxy.CEOPhoneNumExt);

    // Write some data.
    companyDataSecProxy.AdminPwd = "asdf";
    companyDataSecProxy.AdminUserName = "asdf";

    // Save and refresh this data.
    companyDataSecProxy.SaveNewData( );
    companyDataSecProxy.RefreshData( );

Note that as long as the CompanyData object was accessible, you could have also written this to access the object directly:

 
 
    // Instantiate the CompanyData object directly without a proxy.
    CompanyData companyData = new CompanyData( );

    // Read some data.
    Console.WriteLine("CEOPhoneNumExt: " + companyData.CEOPhoneNumExt);

    // Write some data.
    companyData.AdminPwd = "asdf";
    companyData.AdminUserName = "asdf";

    // Save and refresh this data.
    companyData.SaveNewData();
    companyData.RefreshData();

If these two blocks of code are run, the same fundamental actions occur: data is read, data is written, and data is updated/refreshed. This shows you that your proxy objects are set up correctly and function as they should.

Discussion

The proxy design pattern is useful for several tasks. The most notable (in COM, COM+, and .NET Remoting) is marshaling data across boundaries such as AppDomains or even across a network. To the client, a proxy looks and acts exactly the same as its underlying object; fundamentally, the proxy object is just a wrapper around the object.

 

A proxy can test the security and/or identity permissions of the caller before the underlying object is created or accessed. Proxy objects can also be chained together to form several layers around an underlying object. Each proxy can be added or removed depending on the circumstances.

 

For the proxy object to look and act the same as its underlying object, both should implement the same interface. The implementation in this recipe uses an ICompanyData interface on both the proxies (CompanyDataSecProxy and CompanyDataTraceProxy) and the underlying object (CompanyData). If more proxies are created, they, too, need to implement this interface.

 

The CompanyData class represents an expensive object to create. In addition, this class contains a mixture of sensitive and nonsensitive data that requires permission checks to be made before the data is accessed. For this recipe, the CompanyData class simply contains a group of properties to access company data and two methods for updating and refreshing this data. You can replace this class with one of your own and create a corresponding interface that both the class and its proxies implement.

 

The CompanyDataSecProxy object is the object that a client must interact with. This object is responsible for determining whether the client has the correct privileges to access the method or property that it is calling. The get accessor of the AdminUserName property shows the structure of the code throughout most of this class:

 
 
  public string AdminUserName
    {
        get
        {
            string userName = ";
            try
            {
                admPerm.Demand( );
                Startup( );
                userName = coData.AdminUserName;
            }
            catch(SecurityException e)
            {
               Console.WriteLine("AdminUserName_get ailed!: {0}",e.ToString( ));
            }
            return (userName);
        }
        set
        {
            try
            {
                admPerm.Demand( );
                Startup( );
                coData.AdminUserName = value;
            }
            catch(SecurityException e)
            {
               Console.WriteLine("AdminUserName_set Failed! {0}",e.ToString( ));
            }
        }
    }

Initially, a single permission (admPerm) is demanded. If this demand fails, a SecurityException is thrown and handled by the catch clause. (Other exceptions will be handed back to the caller.) If the Demand succeeds, the Startup method is called. It is in charge of instantiating either the next proxy object in the chain (CompanyDataTraceProxy) or the underlying CompanyData object. The choice depends on whether the DOTRACE preprocessor symbol has been defined. You may use a different technique, such as a registry key, to turn tracing on or off if you wish.

 

This proxy class uses the private field coData to hold a reference to an ICompanyData type, which can be either a CompanyDataTraceProxy or the CompanyData object. This reference allows you to chain several proxies together.


The CompanyDataTraceProxy simply logs any access to the CompanyData object’s information to a text file. Since this proxy will not attempt to prevent a client from accessing the CompanyData object, the CompanyData object is created and explicitly called in each property and method of this object.

How To : Use Azure BizTalk Services to Integrate with an On-Premises SAP Server


Microsoft Azure BizTalk Services provides a rich set of integration capabilities enabling organizations to create hybrid solutions such that their customer or partner facing applications are hosted on Azure, while the data related to customers or partners is stored on-premises using LOB applications.

To demonstrate how to integrate applications with an on-premises LOB application using BizTalk Services, let us consider a scenario involving two business partners, Fabrikam and Contoso.

Business Scenario

Contoso sends a purchase order (PO) message to Fabrikam in an X12 Electronic Data Interchange (EDI) format using the PO (X12 850) schema. Fabrikam (which uses an SAP Server to manage partner data) accepts POs from its partners using the ORDERS05 IDOC.

To enable Contoso to send a PO directly to Fabrikam’s on-premises SAP Server, Fabrikam decides to use Azure’s integration offering, BizTalk Services, to set up a hybrid integration scenario where the integration layer is hosted on Azure and the SAP Server is within the organization’s firewall. Fabrikam uses BizTalk Services in the following ways to enable this hybrid integration scenario:

  1. Fabrikam uses the Microsoft Azure BizTalk Services SDK to create a BizTalk Service project. The project includes an XML One-Way Bridge to send messages to a relay endpoint, which in turn sends the message to the on-premises SAP system.
  2. Fabrikam uses the BizTalk Adapter Service component available with BizTalk Services to expose the Send operation on ORDERS05 IDOC as an operation using Service Bus relay endpoint. The XML One-Way Bridge sends messages to this relay endpoint. Fabrikam also creates the schema for Send operation using BizTalk Adapter Service and includes the schema as part of the BizTalk Service project.
    Note:
    A Send operation on an IDOC is an operation that is exposed by the BizTalk Adapter Pack on any IDOC to send the IDOC to the SAP Server. BizTalk Adapter Service uses BizTalk Adapter Pack to connect to an SAP Server.
  3. Fabrikam uses the Transform component available with BizTalk Services to create a map to transform the PO message in X12 format into the schema required by the SAP Server to invoke the Send operation on the ORDERS05 IDOC.
  4. Fabrikam uses the Microsoft Azure BizTalk Services Portal available with BizTalk Services to create and deploy an EDI agreement under the BizTalk Services subscription that processes the X12 850 PO message. As part of the message processing, the agreement also does the following:
    1. Receives an X12 850 PO message over FTP.
    2. Transforms the X12 PO message into the schema required by the SAP Server using the transform created earlier.
    3. Routes the transformed message to the XML One-Way Bridge that eventually routes the message to a relay endpoint created for sending a PO message to an SAP Server. Fabrikam earlier exposed (as explained in bullet 2 above) the Send operation on the ORDERS05 IDOC as a relay endpoint, to enable partners to send PO messages using BizTalk Adapter Service.

Once this is set up, Contoso drops an X12 850 PO message to the FTP location. This message is consumed by the EDI receive pipeline, which processes the message, transforms it to an ORDERS05 IDOC, and routes it to the intermediary XML bridge. The bridge then routes the message to the relay endpoint on Service Bus, which is then sent to the on-premises SAP Server. The following illustration represents the same scenario.

SAP integration scenario

How to Use This Article

 

This tutorial is written around a sample, SAPIntegration, which is available as part of the download (SAPIntegration.zip) from the MSDN Code Gallery. You could either use the SAPIntegration sample and go through this tutorial to understand how the sample was built or just use this tutorial to create your own application.

This tutorial is targeted towards the second approach so that you understand how the application was built. Also, to be consistent with the sample, the names of the artifacts (e.g. schemas, transforms, etc.) used in this tutorial are the same as those in the sample.

The sample available from the MSDN code gallery contains only half the solution, which can be developed at design-time on your computer. The sample cannot include the configuration that you must do on the BizTalk Services Portal on Azure.

For that, you must follow the steps in this tutorial to set up your EDI bridge. Even though Microsoft recommends that you follow the tutorial to best understand the concepts and procedures, if you really wish to use the sample, this is what you should do:

  • Download the SAPIntegration.zip package, extract the SAPIntegration sample and make relevant changes like providing your service namespace, issuer name, issuer key, SAP Server details, etc. After changing the sample, deploy the application to get the endpoint URL at which the XML One-Way Bridge is deployed.
  • Use the BizTalk Services Portal to configure the Receive settings as described at Step 5: Create and Deploy the EDI Receive Pipeline and follow the procedures to route messages from the EDI Receive bridge to the XML One-Way Bridge you already deployed.
  • Drop a test message at the FTP location configured as part of the agreement and verify that the application works as expected.
    • If the message is successfully processed, it will be routed to the SAP Server and you can verify the ORDERS IDOC using the SAP GUI.
    • If the EDI agreement fails to process the message, the failure/error messages are routed to a relay endpoint on Service Bus. To receive such messages, you must set up a relay receiver service that receives any message that comes to that specific relay endpoint. More details on why you need this service and how to use it are available at Step 6: Test the Solution.

Steps in the Solution :

 

  • Step 1: Set up Your Computer
  • Step 2: Expose a Relay Endpoint to Invoke Operations on ORDERS05 IDOC
  • Step 3: Transform the X12 850 PO Message to the ORDERS05 Message
  • Step 4: Create and Deploy the XML Bridge
  • Step 5: Create and Deploy the EDI Receive Pipeline
  • Step 6: Test the Solution

Step 1: Set up Your Computer


This topic provides you with instruction and pointers to set up your computer on which you will perform the steps to set up the hybrid integration scenario described in Tutorial: Using Azure BizTalk Services to Integrate with an On-Premises SAP Server. You must do the following to set up your computer:

  • Install Microsoft Azure BizTalk Services SDK. You can download the installer from http://go.microsoft.com/fwlink/?LinkId=235057. You use this SDK to configure and deploy the XML One-Way Bridge that sits between the EDI agreement and the relay endpoint.
  • Install BizTalk Adapter Service. You use this to expose the Send operation on an IDOC as a relay endpoint on Service Bus. You can download the installer from http://go.microsoft.com/fwlink/?LinkId=235057. Refer to the BizTalk Services installation guide at http://go.microsoft.com/fwlink/?LinkId=237092 to install the software prerequisites for BizTalk Services SDK and BizTalk Adapter Service.
  • Install the WCF LOB Adapter SDK. This is required for installing the Adapter Pack on the computer.
  • Install the Adapter Pack. This contains the SAP adapter that is required to establish connectivity to an SAP Server and for exposing SAP artifacts as operations.
  • Install the SAP client libraries. The SAP adapter needs these libraries to connect to an SAP Server. For information on where to install the SAP client libraries from, refer to the Adapter Pack installation guide (BizTalkAdapterPack_InstallationGuide.htm) at http://go.microsoft.com/fwlink/?LinkId=240161.
  • Download and extract the EDI message schemas (MicrosoftEdiXSDTemplates.zip) available at http://go.microsoft.com/fwlink/?LinkId=235057. This contains the X12 850 Purchase Order message schema that is required for the business scenario we use for this article.

After you have finished installing and downloading these components, you are ready to start setting up the business scenario.

Step 2: Expose a Relay Endpoint to Invoke Operations on ORDERS05 IDOC


There are two main steps required to expose an SAP artifact as an operation that can be invoked by sending a message over Service Bus – create an LOB Target and an LOB Relay.

  • An LOB Target defines how an Azure application communicates to the Line-of-Business (LOB) system. The LOB Target controls the LOB system connection URI, the operation to perform, and the connection credentials.
  • An LOB Relay is a WCF service running within an organization’s firewall that listens to a relay endpoint on the Service Bus. As the name suggests, the LOB Relay acts as a relay between the Service Bus relay endpoint and the LOB system. It receives the message at the Service Bus relay endpoint and passes it on to the relevant LOB system using the LOB Target configuration.

For more information, see BizTalk Adapter Service Architecture. In this topic, we will create an LOB Target and an LOB Relay to expose the Send operation on the ORDERS05 IDOC.

To create an LOB Target and LOB Relay

  1. Open Visual Studio (as an administrator), create a new BizTalk Service project, and name it SAPIntegration.
  2. You first start with adding a BizTalk Adapter Service server. This is the server where you installed the Runtime component of BizTalk Adapter Service. To add a BizTalk Adapter Service server, from the Server Explorer in Visual Studio, right-click BizTalk Adapter Services, and select Add BizTalk Adapter Service. In the Add BizTalk Adapter Service dialog box, enter the URL of the WCF service that monitors that Service Bus relay service, and then click OK.

    Add Service Bus Connect Server

    Because you have all the components of BizTalk Adapter Service installed on the same computer, the URL for that service will be http://localhost:8080/BAService/ManagementService.svc/.

    Note:
    If you had installed BizTalk Adapter Service Runtime component on a separate computer, you would have replaced ‘localhost’ in the above URL with the name of that computer.
  3. In this tutorial we are creating an application to integrate with SAP, so we must add an SAP target. Expand the newly added server, expand LOB Types, right-click SAP, and select Add SAP Target.

    Add an SAP Target

    The Add a Target wizard starts. Perform the following steps to create an LOB Target.

    1. Read the information on the Before You Begin page, and then click Next.
    2. On the Connection Parameters page, specify the details for the SAP Server to connect to and the credentials to use for the connection. Click Next.
    3. On the Operations page, expand the ORDERS05 IDOC category (under \IDOC\ORDERS\). There are several versions of the IDOC available. For this tutorial, we’ll select ORDERS05.V3(700). Expand this IDOC, select Send, and then click the right arrow to add it to the Selected Operations box.

      Add Send operation for IDOC

      Click Next.

    4. In the Runtime Security page, specify the security mechanism to be used by the LOB Server to authenticate the target resource when a message arrives from a client. For this tutorial, select Fixed Username and specify the credentials to connect to the SAP server.
    5. On the Deployment page, you create an LOB Relay and an LOB Target to provide connectivity to your on-premise LOB applications from the cloud.

      Select the Create new option to create a new relay and provide the following values:

      Name Description
      Namespace Specify the Service Bus namespace on which the LOB relay endpoint is created.
      Issuer name Specify the issuer name for the Service Bus namespace
      Issuer secret Specify the issuer secret for the Service Bus namespace
      Relay path Specify a name for the relay. For this tutorial, enter sapintegration01.
      Target sub-path Enter a sub-path to make this target unique. For this tutorial, enter orders.

      The Target runtime URL read-only property displays the URL where the relay is deployed on Service Bus. This is the path where you could send a message to be inserted into the on-premises SAP Server. In our scenario, this is where the bridge sends the message.

      Click Next.

    6. On the Summary page, review the values you specified in the previous steps, and then click Create.
    7. When the wizard completes, click Finish.

      In Visual Studio Server Explorer, you now have an entry under the SAP node. This represents the relay endpoint created in Service Bus to relay PO messages coming from the cloud to the on-premises SAP system.

To add schemas

  1. After adding the relay endpoint to an SAP system, you must add the schemas used to send ORDERS05 PO messages to the SAP server. To add the schemas, right-click the relay endpoint and select Add schemas to SAPIntegration. In the dialog box, do the following:
    • Enter a filename prefix that will be included in the name of each schema file that is generated. For this tutorial, specify this as SAPIntegration_.
    • Enter a folder name that will be added to your solution under which all the schemas will be added. For this tutorial, specify the folder name as LOB Schemas.
    • Enter the credentials to connect to an SAP system.

    Add schemas to the project

    Click OK. The schemas are added to the project under an LOB Schemas folder.

To use the LOB Target

  1. Right-click anywhere on the BizTalk Service project design surface, select Properties and update the BizTalk Service URL property to include your BizTalk Services name. This is the name that you provided in Azure Management Portal while provisioning the BizTalk Services.
  2. Set the security property for the relay endpoint.
    1. Right-click the LOB Target in Server Explorer and select Properties.
    2. In the Properties grid, click the ellipsis (…) against the Runtime Security property.
    3. In the Edit Security dialog box, select Fixed Username and specify username and password to connect to the SAP Server.
    4. Click OK.
  3. Drag and drop the LOB Target onto the design surface. Note the Entity Name property of the LOB Target. The default value is Relay-Path_target-sub-path. If using the examples above, it will be sapintegration01_orders.
  4. Open the .config file for the LOB Target, which typically has the naming convention as YourRelayPath_target-sub-path.config. Specify the Service Bus issuer name and issuer secret, as shown below:
      <sharedSecret issuerName="owner" issuerSecret="issuer_secret" />
    
    

    Save changes to the config file.

 

Step 3: Transform the X12 850 PO Message to the ORDERS05 Message


Both the X12 850 schemas and ORDERS05 schemas are pretty complex and require functional expertise in the respective domains to understand and create maps between the two schemas.

While you already generated the schema for ORDERS05 IDOC, you can get the schema for X12 PO message (X12_00401_850.xsd) from the MicrosoftEdiXSDTemplates.zip that you must have downloaded and extracted before. You must add the X12_00401_850.xsd schema as well to the SAPIntegration project.

Creating a transform between the X12 850 PO and ORDERS05 PO requires functional domain knowledge of both the X12 schema and the ORDERS05 schema.

Only then can one identify which field in the X12 schema maps to which field in the ORDERS05 schema. In this tutorial, we do not get into such details and instead use an existing transform (AzureTransformations.trfm) between these two schemas. This transform is available as part of the SAPIntegration project that you can download from the MSDN Code Gallery.

To include the transform in the BizTalk Service project, right-click the project name, click Add, click Existing Items, and then navigate to the location where you downloaded the SAPIntegration sample from the MSDN Code Gallery. Select the AzureTransformations.trfm and then click Add.

Step 4: Create and Deploy the XML Bridge


In this topic, you create an XML One-Way Bridge that will act as a connector between the EDI Receive bridge and the relay endpoint for the ORDERS05 IDOC in SAP. After configuring the bridge, you connect it to the SAP relay endpoint, and then deploy the solution.

To configure the XML Bridge

  1. In the SAPIntegration project, from the Solution Explorer, double-click the MessageFlowItinerary.bcs file to open the bridge configuration surface.
  2. Right-click anywhere on the BizTalk Service project design surface, select Properties, and update the BizTalk Service URL property to include your BizTalk Services name. This is the name that you provided in Azure Management Portal while provisioning the BizTalk Services.
  3. From the Toolbox, drag and drop the XML One-Way Bridge component to the bridge design surface.
  4. Right-click the XML One-Way Bridge, select Properties, and change the value for Entity Name and Relative Address properties to B2BConnector. As a result, the complete endpoint URL where the bridge is deployed, which is shown in the Runtime Address property, will resemble https://<mybiztalkservicename>.biztalk.windows.net/default/B2BConnector. This is where the EDI Receive bridge sends the ORDERS05 PO message.
  5. Double-click the XML One-Way Bridge to open the Bridge Configuration design surface. Because this bridge only routes the message from the EDI Receive bridge to the relay endpoint, there’s not much configuration required for each stage in the bridge other than specifying the message types of the messages that this bridge routes. To specify the message type, on the XML One-Way Bridge design surface, within the Message Types box, click the add icon to open the Message Type Picker dialog box.
  6. In the Message Type Picker dialog box, from the Available message types box, select the schema for the request message, click the right arrow icon, and then click OK. For this tutorial, select the Send schema (http://Microsoft.LobServices.Sap/2007/03/Idoc/3/ORDERS05//700/Send). The selected schema should now be listed under the Request Message Type box.
  7. Save the bridge configuration.

To connect the bridge to the relay endpoint

  1. In the SAPIntegration project, from the Toolbox, select the Connection component, and connect the XML One-Way Bridge component with the SAP relay endpoint you already added in Step 2: Expose a Relay Endpoint to Invoke Operations on ORDERS05 IDOC.
  2. Set the filter condition on the connection. The routing condition for this scenario is to route all messages to the LOB Target. To do so, select the connecting line, and from the Properties grid, click the ellipsis (…) against the Filter Condition property, and then select Match All. This ensures that all messages that come to the bridge are routed to the relay endpoint.
  3. Set the Route Action property on the connection. Before you set the route action, we must understand why it is required. The message sent from the EDI receive bridge to the relay endpoint must have the Action SOAP header set on it. This header defines what operation must be performed on the SAP system. The message that comes from the EDI receive pipeline does not have this header set. Hence, in this intermediary XML bridge, you set the route action on the message before it is sent to the relay endpoint. As part of the route action, you add the required header to the message. Perform the following steps to set the route action.
    1. Find out the value that will be set for the Action SOAP header message. To do so, right-click the SAP relay endpoint from the Server explorer, and from the Properties grid, expand Operations, and copy the value. For this tutorial, the value is http://Microsoft.LobServices.Sap/2007/03/Idoc/3/ORDERS05//700/Send.

      Value for SOAP action

    2. Go back to the bridge configuration surface, select the connection between the bridge and the SAP relay, and from the Properties grid, click the ellipsis (…) against the Route Action property. In the Route Actions dialog box, click Add to open the Add Route Action dialog box. In the Add Route Action dialog box, do the following:
      • Under Property (Read From) section, select Expression and specify the value that you copied earlier.
        Important:
        Make sure you specify the value for Expression within single quotes.
      • Under Destination (Write-To) section, set the Type to SOAP and the Identifier to Action.

        Set Route Action

      • Click OK in the Add Route Action dialog box to add the route action. Click OK in the Route Actions dialog box, and then click Save to save the changes to the project.
  4. Save the project. The final bridge configuration resembles the following:

    Completed bridge configuration

To deploy the solution

  1. In Visual Studio, right click the SAPIntegration solution, and then click Build Solution.
  2. Once the build succeeds, right click the SAPIntegration solution, and then click Deploy Solution.
  3. In the deployment window, the Deployment Endpoint is a read-only property and the value is derived from the BizTalk Service URL/Namespace set in the message flow surface. However, you must provide the ACS Namespace for BizTalk Services, Issuer Name, and Shared Secret.
  4. Click Deploy. The Visual Studio Output pane displays the deployment progress and result. The URL where the bridge is deployed is also displayed in the Output pane. For this tutorial, the bridge is deployed at http://<mybiztalkservicename>.biztalk.windows.net/default/B2BConnector.

 

 

HTML5 SharePoint Pic Web Part Released and Available !!

This is a Sandbox web part control to display a matrix of image thumbnails.

Use it to build a Metro-style tile UI or a picture gallery to show products, news, or a social team page that integrates pictures, and more. All of this from any SharePoint picture library.

Supports : SharePoint 2010 & 2013 On-Premise Web Part,  SharePoint Online Web Part

Features of the Web Part (ver. 1.0)

Preview example of the control

A Look At : Visual Studio 2013 Update 3 CTP2


New technology improvements in Visual Studio 2013 Update 3 CTP 2

 

Technology improvements

The following technology improvements were made in this release.

CodeLens

  • CodeLens jobs that are running on the Team Foundation Server job agent have been optimized for performance specifically while processing branching and merging changesets.

Debugger

  • If you have more than one monitor, Visual Studio will remember which monitor a Windows Store application was last run on.
  • You can debug x86 applications that are built by .NET Native.
  • When you analyze managed memory dump files, you can go to Definition and Find All References of the selected type.
  • You can debug the dump files from .NET Native applications by using Visual Studio debugger.

General

  • The Application Insights Tools for Visual Studio are now included in Visual Studio 2013 Update 3 CTP2. This initial integration as part of CTP2 includes some software updates and performance improvements.

IntelliTrace

  • You can skip straight to the details of performance events that are exported from Application Insights to IntelliTrace.

Profiler

  • The Performance and Diagnostics hub can open profiling sessions (.diagsession files) that were exported from the F12 tools in the latest developer preview of Internet Explorer 11.
  • Windows Presentation Foundation (WPF) and Win32 applications are supported by the new Memory Usage Tool in the Performance and Diagnostics Hub. For more information about how to use the tool to troubleshoot issues in native and managed memory, go to the following blog post:
    Diagnosing memory issues with the new Memory Usage Tool in Visual Studio

Release Management

  • You can use Windows PowerShell or the Windows PowerShell Desired State Configuration (DSC) feature to deploy and manage configuration data. Additionally, you can deploy to the following environments without having to set up Microsoft Deployment Agent:
    • Microsoft Azure environments
    • On-premise environments (Standard environments)

Testing Tools

  • You can add custom fields and custom work flows for test plans and test suites.
  • You can use Manage Test Suites permission for granting access to test suites.
  • You can track changes to test plans and test suites by using work item history.

For more information about these features, see the following Visual Studio Developer Tools blog article:

Test Plan and Test Suite Customization with TFS2013 Update3

Visual Studio IDE

  • CodeLens authors and changes indicators are now available for Git repositories.
  • In Code Map, links are styled by using colors, and they display in the improved Legend.
  • Debugger Map automatically zooms to the call stack entry of interest and preserves the user’s zoom preferences.
  • You can drag binaries from the Windows file explorer to a code map, and then start exploring binaries by using Code Map.

Known issues

Testing Tools

  • When you try to upgrade an existing TFS server that has Test management data to Visual Studio 2013 Team Foundation Server Update 3 CTP2 in JPN or CHS, the upgrade of Test Case Management service does not work.

Visual Studio IDE

  • In Visual Studio 2013 Ultimate Update 3 CTP2 localized (non en-us) drops, when trying to request a Code Map, or a Dependency Graph for the solution, the directed graph is not produced.

 

For more information on Visual Studio 2013 and other upgrades, visit http://support.microsoft.com/kb/2933779/en-us

FREE Web Part – Random “Quote of the day” SP 2010 Web Part

The “Random Quote of the Day” Web Part randomly selects a quote from the specified Sharepoint list or from the selected RSS feed.

A timer can then be set and the web part will read a new, random post and place it within the web part.

It is great for team/company motivation, or to display code snippets on a Team or KB site – your imagination is the limit.

A “Starter” Excel list containing quotes for a quick start is supplied with the download package.


The Web Part can be used with Sharepoint 2010.

You can configure the following web part properties:

  • the SharePoint list
  • the SharePoint list column or the RSS feed URL for external tips
  • enable or suppress the daily calendar display
  • show an optional picture or calendar
  • show a tip every day or on every page refresh
  • configure CSS settings for individual formatting

 

Contact me at tomas.floyd@outlook.com for this cool free web part – Totally free of charge

How To : Instrument Your Applications in Azure

Overview

Instrumenting your application code should be one of the key Non-Functional Requirements for consideration while building applications in Azure, specifically if they are multi-tenant SaaS-based applications.


Depending on the size and nature of your application, the following areas of instrumentation should be considered:

  1. Diagnostic Logging
    1. Exception Log
    2. Verbose Log
  2. Custom Performance Metrics
  3. Application Performance Metrics
  4. Service Metering

Diagnostics

Diagnostic logging, in very simple terms, means logging all exceptions that occur in your application. You can create a very simple LogService implementation using log4net. The service can expose methods like Debug, Error, Fatal, Warn, and Info to address various instrumentation needs. The log4net ILog interface exposes capabilities for each of these, as shown in the code below:

using System;
using log4net;
using log4net.Config;

public class LogService
{
    readonly ILog _logger;

    static LogService()
    {
        // Reads the log4net configuration from the application's config file.
        XmlConfigurator.Configure();
    }

    public LogService(Type logClass)
    {
        _logger = LogManager.GetLogger(logClass);
    }

    /// <summary>
    /// Log a message object with the Fatal level.
    /// </summary>
    /// <param name="errorMessage">The error message.</param>
    public void Fatal(string errorMessage)
    {
        if (_logger.IsFatalEnabled)
            _logger.Fatal(errorMessage);
    }
}

InstrumentationFig2[1]

In addition to logging the exceptions caught in the catch block, it is a good idea to provide support for verbose logging and enable it for capturing detailed code flow when needed. To capture a verbose log, you must instrument every method entry and exit, including the parameters passed and the values returned. Important business rules within methods should also be instrumented as deemed necessary.

You can use the Info method to create specific logging entries such as “Method Entry”, “Method Exit” with each of these entries augmented by the parameters passed and returned respectively.
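As a rough sketch of what that looks like (the Order type, the _repository field, and the GetOrder method are hypothetical; it assumes the LogService above also exposes an Info method analogous to Fatal):

public Order GetOrder(int tenantId, int orderId)
{
    _logger.Info(string.Format("Method Entry: GetOrder(tenantId={0}, orderId={1})",
        tenantId, orderId));

    // Hypothetical repository call representing the real work of the method.
    Order order = _repository.Get(tenantId, orderId);

    _logger.Info(string.Format("Method Exit: GetOrder returned {0}",
        order == null ? "null" : "order " + order.Id));

    return order;
}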

Note that Windows events are captured and logged in the WADLogsTable and WADWindowsEventsLogsTable table stores. They provide additional diagnostic information into issues occurring in the configured roles.

Performance Metrics

Performance metrics are usually twofold. You can use the Telemetry SDK to create custom KPIs to be monitored using App Insights. If you do not have App Insights, you can create a simple performance monitor for your modules using a Stopwatch, capturing the ElapsedMilliseconds for the operations you intend to include in a performance session. The key here, however, is to capture performance data asynchronously. The following class shows a simple PerfLog entity.

using Microsoft.WindowsAzure.Storage.Table;   // provides TableEntity (Azure Storage client library)

public class PerfLog : TableEntity
    {
        private string operationName;
 
        public string OperationName 
        { 
            get 
            {
                return operationName;
            } 
            set 
            {
                operationName = value;
                this.RowKey = string.Format("{0}_{1}", value, this.RowKey);
            } 
        }
    
        public int TenantId { get; set; }
        public string ParamValuesPassed { get; set; }
        public int OperationExecutionDuration { get; set; }
        public string ResultInfo { get; set; }
        public int RecordsReturned { get; set; }
    } 
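As one possible approach (a sketch only, assuming the Azure Storage client library, Microsoft.WindowsAzure.Storage, and a perfTable field holding an initialized CloudTable for the performance log), a Stopwatch can wrap the operation and the PerfLog entity can be written asynchronously so the measurement does not slow the caller down:

// Sketch: wraps an operation, measures it, and persists the PerfLog entry asynchronously.
// Requires System, System.Threading.Tasks and Microsoft.WindowsAzure.Storage.Table usings.
public async Task<T> MeasureAsync<T>(string operationName, int tenantId, Func<T> operation)
{
    var stopwatch = System.Diagnostics.Stopwatch.StartNew();
    T result = operation();
    stopwatch.Stop();

    var entry = new PerfLog
    {
        PartitionKey = tenantId.ToString(),
        RowKey = Guid.NewGuid().ToString(),   // the OperationName setter prefixes this with the operation name
        TenantId = tenantId,
        OperationName = operationName,
        OperationExecutionDuration = (int)stopwatch.ElapsedMilliseconds
    };

    await perfTable.ExecuteAsync(TableOperation.Insert(entry));
    return result;
}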

Second, you can use the Windows Azure Performance Diagnostics part of Windows Azure Diagnostics (WAD) to create and use performance counters in your application.

The WAD framework is built using the Event Tracing for Windows (ETW) framework and the performance counter logs are stored in the WADPerformanceCountersTable storage table.
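For reference, a performance counter is typically registered in the role's OnStart method. The following is a minimal sketch using the classic Microsoft.WindowsAzure.Diagnostics API (the counter name and intervals are illustrative; newer SDK versions configure the same thing through the diagnostics configuration file or the Visual Studio designer mentioned below):

// Sketch: lives in the WebRole/WorkerRole (RoleEntryPoint) class of the role being instrumented.
public override bool OnStart()
{
    var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

    // Illustrative counter: overall CPU usage, sampled every 30 seconds.
    config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
    {
        CounterSpecifier = @"\Processor(_Total)\% Processor Time",
        SampleRate = TimeSpan.FromSeconds(30)
    });

    // Transfer the collected samples to the WADPerformanceCountersTable once a minute.
    config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

    DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
    return base.OnStart();
}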

It is possible to create a custom diagnostic plan for each role using Visual Studio. The following figure illustrates the performance configuration.

Service Metering

If you are building a multi-tenant SaaS-based application, Service Metering is important to gain an understanding of usage and bill customers accordingly. The recently released Cloud Design Patterns guidance by the Patterns and Practices team elaborates on the Service Metering Guidance.

Please note that there is no default mechanism in Azure to perform metering by tenant. The architecture needs to take care of tenant-based metering. An early example of this was released in the Cloud Ninja Metering Block and you can explore the source to understand a tenant metering implementation and apply it in your application. The source is available at http://cnmb.codeplex.com/SourceControl/latest.
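As a hedged illustration only (the entity and its fields are hypothetical, not part of any Azure SDK or of the Cloud Ninja block), a per-tenant metering record stored in table storage might look like this, partitioned by tenant so usage can be aggregated and billed per tenant:

public class MeteringRecord : TableEntity
{
    public MeteringRecord() { }

    public MeteringRecord(int tenantId, string resourceName)
    {
        PartitionKey = tenantId.ToString();
        RowKey = string.Format("{0}_{1:yyyyMMddHHmmssfff}", resourceName, DateTime.UtcNow);
        TenantId = tenantId;
        ResourceName = resourceName;
        RecordedOnUtc = DateTime.UtcNow;
    }

    public int TenantId { get; set; }
    public string ResourceName { get; set; }     // e.g. "ApiCall", "StorageTransaction"
    public double UnitsConsumed { get; set; }
    public DateTime RecordedOnUtc { get; set; }
}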

 

The Azure Management Portal provides you usage details on each resource and is a great way to find out the resource allocation on roles. The following figure illustrates:

How To : A library to create .mht files (available at request)

There are a number of ways to do this, including hosting Word or Excel on the Web Server and dealing with COM Interop issues, or purchasing third-party MIME encoding libraries, some of which sell for $250.00 or more. But there is no native .NET solution. So, being the curious soul that I am, I decided to investigate a bit and see what I could come up with. Internet Explorer offers a File / Save As option to save a web page as "Web Archive, single file (*.mht)".


What this does is create an RFC-compliant multipart MIME message. Resources such as images are serialized to their Base64 inline encoding representations, and each resource is demarcated with the standard multipart MIME header breaks. Internet Explorer, Word, Excel and most newsreader programs all understand this format. The format, if saved with the file extension ".eml", will come up as a web page inside Outlook Express; if saved with ".mht", it will come up in Internet Explorer when the file is double-clicked out of Windows Explorer, and — what many do not know — if saved with a ".doc" extension, it will load in MS Word, each with all the images intact, and in the case of the EML and MHT formats, with all of the hyperlinks fully functioning. The primary advantage of the format is, of course, that all the resources can be consolidated into a single file, making distribution and archiving much easier — including database storage in an NVarchar or NText type field.

 

System.Web.Mail, which .NET provides as a convenient wrapper around the CDO for Windows COM library, offers only a subset of the functionality exposed by the CDO library, and multipart MIME encoding is not a part of that functionality. However, through the wonders of COM Interop, we can create our own COM reference to CDO in the Visual Studio IDE, allowing it to generate a Runtime Callable Wrapper, and help ourselves to the entire rich set of functionality of CDO as we see fit.

 

One method in the CDO library that immediately came to my notice was the CreateMHTMLBody method. That’s MHTMLBody, meaning “Multipurpose Internet Mail Extension HTML (MHTML) Body”. Well!– when I saw that, my eyes lit up like the LED’s on a 32 – way Unisys box! This is a method on the CDO Message class; the method accepts a URI to the requested resource, along with some enumerations, and creates a MultiPart MIME – encoded email message out of the requested URI responses — including images, css and script — in one fell swoop.

 

“Ah”, you say, “How convenient”! Yes, and not only that, but we also get a free “multipart COM Interop Baggage” reference to the ADODB.Stream object – and by simply calling the GetStream method on the Message Class, and then using the Stream’s SaveToFile method, we can grab any resource including images, javascript, css and everything else (except video) and save it to a single MHT Web Archive file just as if we chose the “Save As” option out of Internet Explorer.

 

If we choose not to save the file, but instead want to get back the stream contents, no problem. We just call Stream.ReadText(Stream.Size) and it returns a string containing the entire MHT encoded content. At that point we can do whatever we want with it – set a content – header and Response .Write the content to the browser, for instance — or whatever.

 

For example, when we get back our “MHT” string, we can write the following code:

Response.ContentType = "application/msword";
Response.AddHeader("Content-Disposition", "attachment;filename=NAME.doc");
Response.Write(myDataString);

 

— and the browser will dutifully offer to save the file as a Word Document. It will still be Multipart MIME encoded, but the .doc extension on the filename allows Word to load it, and Word is smart enough to be able to parse and render the file very nicely. “Ah”, you are saying, “this is nice, and so is the price!”. Yup!

And, if you are serving this MIME-encoded file from out of your database, for example, and you would like it to be able to be displayed in the browser, just change the “NAME.doc” to “NAME.MHT”, and don’t set a content-type header. Internet Explorer will prompt the user to either save or open the file. If they choose “open”, it will be saved to the IE Temporary files and open up in the browser just as if they had loaded it from their local file system.

 

So, to answer a couple of questions that came up recently, yes — you can use this method to MHTML-encode any web page – even one that is dynamically generated as with a report – provided it has a URL, and save the MIME-encoded content as a string in either an NVarchar or NText column in your database. You can then bring this string back out and send it to the browser, images, css, javascript and all.
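For example, assuming a hypothetical WebArchives table with an NVARCHAR(MAX) column named MhtContent (the table, column and connection string are illustrative, not part of the library below), plain ADO.NET is all you need to store the encoded string:

using System.Data.SqlClient;

public static void SaveArchive(string connectionString, string url, string mhtContent)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = new SqlCommand(
        "INSERT INTO WebArchives (Url, MhtContent) VALUES (@Url, @MhtContent)", connection))
    {
        command.Parameters.AddWithValue("@Url", url);
        command.Parameters.AddWithValue("@MhtContent", mhtContent);
        connection.Open();
        command.ExecuteNonQuery();
    }
}

Reading it back is simply the reverse: select the column, set the content headers shown earlier, and Response.Write the string.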

Now here is the code for a small, very basic “Converter” class I’ve written to take advantage of the two scenarios specified above. Bear in mind, there is much more available in CDO, but I leave this wondrous trail of ecstatic discovery to your whims of fancy:

using System;
using System.Web;
using CDO;
using ADODB;
using System.Text;
namespace PAB.Web.Utils
{
 public class MIMEConverter
 {
  //private ctor as our methods are all static here
  private MIMEConverter()
  {
   
  }   
  public static bool SaveWebPageToMHTFile(string url, string filePath)
  {
   bool result = false;
   CDO.Message msg = new CDO.MessageClass();
   ADODB.Stream stm = null;
   try
   {
    msg.MimeFormatted = true;
    msg.CreateMHTMLBody(url, CDO.CdoMHTMLFlags.cdoSuppressNone, "", "");
    stm = msg.GetStream();
    stm.SaveToFile(filePath, ADODB.SaveOptionsEnum.adSaveCreateOverWrite);
    msg = null;
    stm.Close();
    result = true;
   }
   catch { throw; }
   finally { /* cleanup here */ }
   return result;
  }

  public static string ConvertWebPageToMHTString(string url)
  {
   string data = String.Empty;
   CDO.Message msg = new CDO.MessageClass();
   ADODB.Stream stm = null;
   try
   {
    msg.MimeFormatted = true;
    msg.CreateMHTMLBody(url, CDO.CdoMHTMLFlags.cdoSuppressNone, "", "");
    stm = msg.GetStream();
    data = stm.ReadText(stm.Size);
   }
   catch { throw; }
   finally { /* cleanup here */ }
   return data;
  }
 }
}

 

NOTE: When using this type of COM Interop from an ASP.NET web page, it is important to remember that you must set the AspCompat="true" directive in the Page declaration or you will be very disappointed at the results! This forces the ASP.NET page to run in the STA threading model, which permits "classic ASP" style COM calls. There is, of course, a significant performance penalty incurred, but realistically, this type of operation would only be performed upon user request and not on every page request.

The downloadable zip file below contains the entire class library and a web solution that will exercise both methods when you fill in a valid URI with protocol, and a valid file path and filename for saving on the server. Unzip this to a folder that you have named "ConvertToMHT" and then mark the folder as an IIS Application so that a request such as "http://localhost/ConvertToMHT/WebForm1.aspx" will function correctly. You can then load the Solution file and it should work "out of the box". And, don't forget – if you have an ASP.NET web application that wants to write a file to the file system on the server, it must be running under an identity that has been granted this permission.

How To : Make use of Application Insights (with a great VS extension available freely)

You can now add Application Insights telemetry right from Visual Studio to new or existing projects in 2 clicks or less.

This release of Application Insights Tools for Visual Studio is in PREVIEW; see Known Issues below.

Get Started

To get started with a new project, simply create a Web, Windows Store 8.1, or Windows Phone 8.0 project. In the New Project dialog, make sure that Add Application Insights to Project is checked.

To get started with an existing project, right-click on a Web, Windows Store 8.1, or Windows Phone 8.0 project in Solution Explorer and choose Add Application Insights Telemetry to Project.

That’s it! Then run your Web application locally (or deploy your application), and after 10-15 minutes, telemetry data will automatically start appearing in the Application Insights Portal in the Usage tab.

Additional project types are supported with partial automation (see Known Issues below)

Q & A

Q: I don’t see the Add Application Insights Telemetry to Project command.

A: The type of app you’re creating is not supported yet. Instead, add your application at the Application Insights portal.

Q: Oops. I closed the Application Insights browser. How do I get it back?

A: In Solution Explorer, in the context menu of the project, choose Open Application Insights Portal…

Q: What other telemetry can I log from my app?

A: You can log events and metrics. Take a look at Improve your application from live usage data
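As a rough sketch (the exact API surface of the preview SDK referenced here may differ; this assumes the Microsoft.ApplicationInsights NuGet package and its TelemetryClient class, and the CheckoutController/CompleteOrder names are hypothetical):

using Microsoft.ApplicationInsights;

public class CheckoutController
{
    private readonly TelemetryClient _telemetry = new TelemetryClient();

    public void CompleteOrder(double orderTotal)
    {
        // Custom event: appears under custom events in the portal.
        _telemetry.TrackEvent("OrderCompleted");

        // Custom metric: aggregated so you can chart counts and averages.
        _telemetry.TrackMetric("OrderTotal", orderTotal);
    }
}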

Known Issues

This release of Application Insights Tools for Visual Studio is in PREVIEW. There are a number of known issues, including:

  • If you are upgrading from version 0.6.56.3 of the Application Insights Telemetry SDK for Services using the Application Insights Tools for Visual Studio, you will need to manually copy any custom settings from Monitoring.CollectionPlan.config to ApplicationInsights.config file.
  • For Windows Store 8.1 C++ projects instrumented with Application Insights, updates for the “Application Insights Telemetry SDK for Windows Store Apps” from nuget.org do not show up in the “Manage NuGet Packages” dialog. You can install an updated nuget package from the “Online” section of the “Manage NuGet Packages” dialog. Or, you can execute this command in the Package Manager Console: update-package Microsoft.ApplicationInsights.Telemetry.WindowsStore

DRY Architecture, Layered Architecture, Domain Driven Design and a Framework to build great Single Web Pages – BiolerPlate Part 1

DRY – Don’t Repeat Yourself – is one of the core principles of good software development. We try to apply it everywhere, from simple methods to classes and modules. What about developing a new web-based application? We, software developers, have similar needs when developing enterprise web applications.

Enterprise web applications need login pages, user/role management infrastructure, user/application setting management, localization and so on. Also, high-quality, large-scale software implements best practices such as Layered Architecture, Domain Driven Design (DDD), and Dependency Injection (DI). We also use tools for Object-Relational Mapping (ORM), Database Migrations, Logging… etc. When it comes to the User Interface (UI), it’s not much different.

Starting a new enterprise web application is hard work. Since all applications need some common tasks, we’re repeating ourselves. Many companies develop their own application frameworks or libraries for such common tasks so that they do not re-develop the same things. Others copy parts of existing applications and prepare a starting point for the new application. The first approach is good if your company is big enough and has time to develop such a framework.

As a software architect, I also developed such a framework in my company. But there is one point that bothers me: many companies repeat the same tasks. What if we could share more and repeat less? What if the DRY principle were applied universally instead of per project or per company? It sounds utopian, but I think there may be a starting point for that!

What is ASP.NET Boilerplate?

http://www.aspnetboilerplate.com/

ASP.NET Boilerplate [1] is a starting point for new modern web applications using best practices and the most popular tools. It’s aimed to be a solid model, a general-purpose application framework and a project template. What does it do?

  • Server side
    • Based on latest ASP.NET MVC and Web API.
    • Implements Domain Driven Design (Entities, Repositories, Domain Services, Application Services, DTOs, Unit of Work… and so on).
    • Implements Layered Architecture (Domain, Application, Presentation and Infrastructure Layers).
    • Provides an infrastructure to develop reusable and composable modules for large projects.
    • Uses the most popular frameworks/libraries that you’re probably already using.
    • Provides an infrastructure that makes it easy to use Dependency Injection (uses Castle Windsor as the DI container).
    • Provides a strict model and base classes to use Object-Relational Mapping easily (uses NHibernate and can work with many DBMSs).
    • Implements database migrations (uses FluentMigrator).
    • Includes a simple and flexible localization system.
    • Includes an EventBus for server-side global domain events.
    • Manages exception handling and validation.
    • Creates dynamic Web API layer for application services.
    • Provides base and helper classes to implement some common tasks.
    • Uses convention over configuration principle.
  • Client side
    • Provides two project templates: one for Single-Page Applications using Durandal.js, the other for Multi-Page Applications. Both templates use Twitter Bootstrap.
    • The most-used libraries are included by default: Knockout.js, Require.js, jQuery and some useful plug-ins.
    • Creates dynamic JavaScript proxies to call application services (using the dynamic Web API layer) easily.
    • Includes unique APIs for some common tasks: showing alerts & notifications, blocking the UI, making AJAX requests.

Besides this common infrastructure, a “Core Module” is being developed. It will provide a role- and permission-based authorization system (implementing the ASP.NET Identity Framework), a settings system and so on.

What ASP.NET Boilerplate is not?

ASP.NET Boilerplate provides an application development model with best practices. It has base classes, interfaces and tools that make it easy to build maintainable, large-scale applications. But…

  • It’s not one of those RAD (Rapid Application Development) tools that provide infrastructure for building applications without coding. Instead, it provides an infrastructure for writing code according to best practices.
  • It’s not a code generation tool. While it has several features that build dynamic code at run time, it does not generate code.
  • It’s not an all-in-one framework. Instead, it uses well-known tools/libraries for specific tasks (such as NHibernate for ORM, Log4Net for logging and Castle Windsor as the DI container).

Getting started

In this article, I’ll show how to develop a single-page, responsive web application using ASP.NET Boilerplate (which I’ll call ABP from now on). The sample application is named “Simple Task System” and consists of two pages: one listing tasks, the other for adding new tasks. A task can be assigned to a person and marked as completed. The application is localized in two languages. A screenshot of the Task List page is shown below:

A screenshot of 'Simple Task System'

Creating empty web application from template

ABP provides two templates for starting a new project (although you can create the project manually and get the ABP packages from NuGet, the template approach is much easier). Go to www.aspnetboilerplate.com/Templates to create your application from one of the two templates (one for SPA (Single-Page Application) projects, one for MPA (classic, Multi-Page Application) projects):

Creating template from ABP web site

I named my project SimpleTaskSystem and created an SPA project. The site downloads the project as a zip file. When I open the zip file, I find a ready-made solution containing one assembly (project) for each layer of Domain Driven Design:

Project files

The created project targets .NET Framework 4.5.1, so I advise opening it with Visual Studio 2013. The only prerequisite for running the project is creating a database. The SPA template assumes that you’re using SQL Server 2008 or later, but you can easily switch to another DBMS.

See the connection string in web.config file of the web project:

<add name="MainDb" connectionString="Server=localhost; Database=SimpleTaskSystemDb; Trusted_Connection=True;" />

You can change the connection string here. I keep the default database name, so I create an empty database named SimpleTaskSystemDb in SQL Server:

Empty database

That’s it, your project is ready to run! Open it in VS2013 and press F5:

First run

The template consists of two pages: a Home page and an About page. It’s localized in English and Turkish, and it’s a Single-Page Application: try navigating between the pages and you’ll see that only the content changes, the navigation menu stays fixed, and all scripts and styles are loaded only once. It’s also responsive; try resizing the browser.

In part 2, I’ll show how to turn this template into the Simple Task System application, layer by layer.
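To give a feel for where part 2 is heading, a Task entity in the domain layer might look roughly like the sketch below. The Entity base class comes from ABP; the class and property names here are illustrative assumptions only, not the actual implementation.

using Abp.Domain.Entities;

// A minimal sketch of the domain entities in the Core project.
// Names are assumptions for illustration; part 2 shows the real code.
public class Person : Entity
{
    public virtual string Name { get; set; }
}

public class Task : Entity
{
    // The person this task is assigned to (optional).
    public virtual Person AssignedPerson { get; set; }

    // What needs to be done.
    public virtual string Description { get; set; }

    // Whether the task has been completed.
    public virtual bool IsCompleted { get; set; }
}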

Great Agile Development Tool – Agile Planner

Great Agile Development project – http://agileplanner.codeplex.com/

Project Description

This project develops an iteration planning tool for agile project management.

What’s new?

Release 1.0.0 runs in Visual Studio integrated mode. See the “How to use” section below for details.

What is Agile Planner?

This tool is for agile project teams who currently use sticky notes on the wall. With it, stories, the backlog and iterations are managed in a graphical designer, saved as files within Visual Studio projects, and can be exported to images, reports and so on.

Main features are

  • Stories can be dragged and dropped between the backlog and iterations
  • An iteration’s capacity is calculated automatically based on the stories within it
  • Stories are rendered based on a customizable status or priority color scheme
  • The diagram can be exported to a PNG image for printing, documentation and sharing

Here are examples.

Stories rendered based on status
agile-stories-on-status-s.png

Stories rendered based on priority
agile-stories-on-priority-s.png

 

How to use Agile Planner

1. Install
To install Agile Planner,

  • Download the runtime binary zip file from the latest releases
  • Extract all files from the runtime binary zip file
  • Run the windows installer AgilePlanner.msi (requires elevated command prompt under Vista and administrator on XP/2003).

2. Get Started
To start using Agile Planner in a Visual Studio 2008 project:

  • Start Visual Studio 2008, create new project or load existing project
  • Right click the project name and select menu “Add | New Item …”

add-new-item.PNG

  • Select AgilePlanner
  • Dismiss the security warning if it shows up

You will be presented with a design environment like the one below.

designer.PNG

  1. Graphical designer for iterations and stories
  2. Toolbox for iterations and stories
  3. Treeview Explorer for iterations and stories
  4. Property window for iterations and stories

3. Plan your project’s iterations

  • Create iterations by dragging iteration tool from toolbox to the graphical designer
  • Create Stories by dragging story tool from toolbox to the backlog and iterations
  • Edit stories’ properties, such as name, capacity, priority and status, in the property window

add-stories.PNG

Notice: the capacity of an iteration is updated automatically after stories are dragged between iterations and after the stories’ capacity property is changed, so that you can balance the workload between iterations.

4. Render the diagram
The stories can be colored based on either their status or priority. To switch between these two options, right-click the diagram and select “Color on Status” or “Color on Priority”. The color schemes are customizable as properties of the project.

render-in-priority.PNG

5. Export
The rendered diagram can be exported to a PNG file by right-clicking the diagram and selecting “Export to image”.

exported.png

How to use it?

See how to use

New version of Prism released – Get it now free!!

Prism helps developers who want to create a Windows Store business app using C#, XAML, the Windows Runtime, and development patterns such as Model-View-ViewModel (MVVM) and event aggregation.

Prism includes two libraries, a reference implementation called AdventureWorks Shopper, Quickstarts and associated documentation.


 

This is an update from the version released in May for Windows 8.

The guidance demonstrates:

  • How to implement pages, touch, navigation, settings, suspend/resume, search, tiles, and tile notifications.
  • How to implement the Model-View-ViewModel (MVVM) pattern.
  • How to validate user input for correctness.
  • How to manage application data.
  • How to test your app and tune its performance.

What’s new in the Windows 8.1 version

Documentation

  • Created a developer task topic to help developers learn how to complete key Windows Store dev tasks for validation, creating pages, navigation, touch, tiles, search, performance, testing, deployment, extended splash screen, incremental loading, Model-View-ViewModel (MVVM),  loosely coupled communication, and using the Prism libraries.
  • Added the AdventureWorks Shopper logical architecture to help you understand what code you need to write for a Windows Store app vs the code the Prism library provides.
  • Updated PDF for Windows 8.1.
  • Provided release notes on CodePlex including a change log and late breaking news.

AdventureWorks Shopper Reference Implementation

  • Created the AutoRotatingGridView grid control to create a fluid page layout that responds to user requests to change the page’s size and orientation
  • Demonstrated using the IncrementalUpdateBehavior Blend Behavior for large data to improve user perceived performance
  • Cleaned up styles
  • Used Flyout/MenuFlyout instead of popup
  • Changed FlyoutViews to use SettingsFlyout
  • Used out of the box control for Watermark
  • Used Blend Behaviors
  • Used SearchBox & new search APIs
  • Updated top/bottom app bars to use CommandBars and Action Buttons
  • Used Windows.Web.Http.HttpClient instead of System.Net.Http.HttpClient

Prism for Windows Runtime

  • Updated VisualStateAwarePage to detect page size and orientation
  • Removed FlyoutService and FlyoutView
  • Removed SearchPaneService and SearchQueryArguments. Used new SearchBox control instead.
  • Added support for an extended splashscreen

Quickstarts

  • Created Incremental Loading Quickstart to demonstrate how to improve end user perceived performance for a large grid by handling the ContainerContentChanging event, or by using the IncrementalUpdateBehavior Blend Behavior vs traditional data binding.
  • Created Extended Splash Screen Quickstart to demonstrate how to use the Prism library to create an extended splash screen.

Where to get it?

  • Documentation on the Windows Development Center.
  • PDF version of the documentation will be available later this month.
  • Source code for AdventureWorks Shopper reference implementation and the Prism libraries.
  • Source code for the associated quickstarts.
  • Via NuGet – use NuGet package Manager in Visual Studio and search online for Prism.StoreApps and Prism.PubSubEvents

If you need the source code for the AdventureWorks Shopper reference implementation and the Prism library that runs on Windows 8 we moved it to our CodePlex site.

Where to start?

  • Review the AdventureWorks reference implementation. After you download the code, see Getting started with Prism library for instructions on how to compile and run the reference implementation, as well as understand the Visual Studio solution structure.
  • Review Quickstarts. The Quickstart samples focus on specific tasks such as validation, event aggregation, bootstrapping an MVVM app, and adding an extended splash screen to your app.
  • Create an app. If you want to create your own app using Prism see Using Prism for the Windows Runtime.
  • Explore developer tasks. Learn how the Prism team implemented many of the tasks required to create a Windows Store app.
  • Review the documentation. The associated documentation outlines the key decisions and lessons learned to create a Windows Store business app.
  • Review the release notes. The release notes provide late breaking updates and a more detailed log of the changes in this release.

 

What code do I write and what does Prism library provide?

We included the AdventureWorks logical architecture in the documentation to help you understand what code is provided by the Prism library and what code you will need to create for your Windows Store business app.

 

Logical architecture of a Windows Store business app that uses Prism

Community

Prism for the Windows Runtime has a community site where you can post questions, provide feedback, connect with other users to share ideas, and find additional content such as extensions and training material. Community members can also help Microsoft plan and test future releases of Prism for the Windows Runtime. For more info see patterns & practices: Prism for the Windows Runtime.

So go download the code and get started creating your Windows Store app with Prism. We want to hear about your successes and challenges on our CodePlex site. What else do we need to add to the library and associated documentation? Many of the additions to this release came from user feedback from the CodePlex site.

How to : Use JQuery and JSON in MVC 5 for Autocomplete


Imagine that you want to create an edit view for a Company entity which has two properties: Name (type string) and Boss (type Person). You want both properties to be editable. For Company.Name a simple text input is enough, but for Company.Boss you want to use the jQuery UI Autocomplete widget. This widget has to meet the following requirements:

  • suggestions should appear when user starts typing person’s last name or presses down arrow key;
  • identifier of person selected as boss should be sent to the server;
  • items in the list should provide additional information (first name and date of birth);
  • user has to select one of the suggested items (arbitrary text is not acceptable);
  • the boss property should be validated (with validation message and style set for appropriate input field).

The above requirements appear quite often in web applications. I’ve seen many over-complicated ways of implementing them, so I want to show you how to do it quickly and cleanly… The assumption is that you have basic knowledge of jQuery UI Autocomplete and ASP.NET MVC. In this post I will show only the code related to the autocomplete functionality, but you can download the full demo project here. It’s an ASP.NET MVC 5 / Entity Framework 6 / jQuery UI 1.10.4 project created in Visual Studio 2013 Express for Web and tested in Chrome 34, FF 28 and IE 11 (in 11 and 8 document modes).

So here are our domain classes:

public class Company
{
    public int Id { get; set; } 

    [Required]
    public string Name { get; set; }

    [Required]
    public Person Boss { get; set; }
}
public class Person
{
    public int Id { get; set; }

    [Required]
    [DisplayName("First Name")]
    public string FirstName { get; set; }
    
    [Required]
    [DisplayName("Last Name")]
    public string LastName { get; set; }

    [Required]
    [DisplayName("Date of Birth")]
    public DateTime DateOfBirth { get; set; }

    public override string ToString()
    {
        return string.Format("{0}, {1} ({2})", LastName, FirstName, DateOfBirth.ToShortDateString());
    }
}

Nothing fancy there: a few properties with standard attributes for validation and nice display. The Person class has a ToString override – the text from this method will be used in the autocomplete suggestion list.

Edit view for Company is based on this view model:

public class CompanyEditViewModel
{    
    public int Id { get; set; }

    [Required]
    public string Name { get; set; }

    [Required]
    public int BossId { get; set; }

    [Required(ErrorMessage="Please select the boss")]
    [DisplayName("Boss")]
    public string BossLastName { get; set; }
}

Notice that there are two properties for Boss related data.

Below is the part of edit view that is responsible for displaying input field with jQuery UI Autocomplete widget for Boss property:

<div class="form-group">
    @Html.LabelFor(model => model.BossLastName, new { @class = "control-label col-md-2" })
    <div class="col-md-10">
        @Html.TextBoxFor(Model => Model.BossLastName, new { @class = "autocomplete-with-hidden", data_url = Url.Action("GetListForAutocomplete", "Person") })
        @Html.HiddenFor(Model => Model.BossId)
        @Html.ValidationMessageFor(model => model.BossLastName)
    </div>
</div>

The form-group and col-md-10 classes belong to the Bootstrap framework used in the MVC 5 web project template – don’t worry about them. The BossLastName property is used for the label, the visible input field and the validation message. There’s also a hidden input field which stores the identifier of the selected boss (Person entity). The @Html.TextBoxFor helper, which renders the visible input field, defines a class and a data attribute. The autocomplete-with-hidden class marks inputs that should receive the widget, and the data-url attribute value holds the address of the action method that provides data for the autocomplete. Using Url.Action is better than hardcoding the address in a JavaScript file because the helper takes routing rules, which might change, into account.

This is HTML markup that is produced by above Razor code:

<div class="form-group">
    <label class="control-label col-md-2" for="BossLastName">Boss</label>
    <div class="col-md-10">
        <span class="ui-helper-hidden-accessible" role="status" aria-live="polite"></span>
        <input name="BossLastName" class="autocomplete-with-hidden ui-autocomplete-input" id="BossLastName" type="text" value="Kowalski" 
         data-val-required="Please select the boss" data-val="true" data-url="/Person/GetListForAutocomplete" autocomplete="off">
        <input name="BossId" id="BossId" type="hidden" value="4" data-val-required="The BossId field is required." data-val-number="The field BossId must be a number." data-val="true">
        <span class="field-validation-valid" data-valmsg-replace="true" data-valmsg-for="BossLastName"></span>
    </div>
</div>

This is JavaScript code responsible for installing jQuery UI Autocomplete widget:

$(function () {
    $('.autocomplete-with-hidden').autocomplete({
        minLength: 0,
        source: function (request, response) {
            var url = $(this.element).data('url');
   
            $.getJSON(url, { term: request.term }, function (data) {
                response(data);
            })
        },
        select: function (event, ui) {
            $(event.target).next('input[type=hidden]').val(ui.item.id);
        },
        change: function(event, ui) {
            if (!ui.item) {
                $(event.target).val('').next('input[type=hidden]').val('');
            }
        }
    });
})

The widget’s source option is set to a function. This function pulls data from the server via a $.getJSON call. The URL is extracted from the data-url attribute. If you want to control caching or provide error handling, you may want to switch to the $.ajax function. The purpose of the change event handler is to ensure that values for BossId and BossLastName are set only if the user selected an item from the suggestion list.

This is the action method that provides data for autocomplete:

public JsonResult GetListForAutocomplete(string term)
{               
    Person[] matching = string.IsNullOrWhiteSpace(term) ?
        db.Persons.ToArray() :
        db.Persons.Where(p => p.LastName.ToUpper().StartsWith(term.ToUpper())).ToArray();

    return Json(matching.Select(m => new { id = m.Id, value = m.LastName, label = m.ToString() }), JsonRequestBehavior.AllowGet);
}

value and label are standard properties expected by the widget. label determines the text shown in the suggestion list, while value designates what is placed in the input field on which the widget is installed. id is a custom property indicating which Person entity was selected. It is used in the select event handler (notice the reference to ui.item.id): the selected ui.item.id is set as the value of the hidden input field, so it will be sent in the HTTP request when the user saves the Company data.

Finally this is the controller method responsible for saving Company data:

public ActionResult Edit([Bind(Include="Id,Name,BossId,BossLastName")] CompanyEditViewModel companyEdit)
{
    if (ModelState.IsValid)
    {
        Company company = db.Companies.Find(companyEdit.Id);
        if (company == null)
        {
            return HttpNotFound();
        }

        company.Name = companyEdit.Name;

        Person boss = db.Persons.Find(companyEdit.BossId);
        company.Boss = boss;
        
        db.Entry(company).State = EntityState.Modified;
        db.SaveChanges();
        return RedirectToAction("Index");
    }
    return View(companyEdit);
}

Pretty standard stuff. If you’ve ever used Entity Framework, the above method should be clear to you; if not, don’t worry. For the purpose of this post, the important thing to notice is that we can use companyEdit.BossId because it was properly filled by the model binder thanks to our hidden input field.

That’s it, all requirements are met! Easy, huh?

You may be wondering why I want to use jQuery UI widget in Visual Studio 2013 project which by default uses Twitter Bootstrap. It’s true that Bootstrap has some widgets and plugins but after a bit of experimentation I’ve found that for some more complicated scenarios jQ UI does a better job. The set of controls is simply more mature…

FREE Microsoft Dynamics CRM 2011 List Component for Microsoft SharePoint Server 2010

 

 

CRM2011 – SharePoint 2010 Integration? Glue CRM 2011 & Share Point 2010 together? Make CRM 2011 and Share Point 2010 converse? I wasn’t sure what to call this exactly. “Hooking together” works for me!

Now that we have a CRM 2011 instance and a Share Point site working, let’s get them connected up! Go to this website and download Microsoft Dynamics CRM 2011 List Component for Microsoft SharePoint Server 2010:

Accept the License Terms.

Extract the files to a folder (I chose C:\CRM List).

You will get a prompt “The Installation is complete.” Click OK.

Let’s go over to the Share Point Central Administration Server to install the list component we just extracted. Connect to http://localhost:48835/ (your port might be different, be aware of this). Click Manage web applications.

Click the new Share Point site, and then “General Settings” (the blue cogs).

Scroll down to Browser File Handling and choose Permissive, Click OK.

Let’s head back over to our new Share Point Site. Click Site Actions up top left, and then “Site Settings”.

Under Galleries click “Solutions”.


Click the Word “Solutions” up top (you have to click the word “Solutions”, even though it looks selected), and then click “Upload Solution”.

Select the .wsp component that we extracted way back at the top of this post. I used C:\CRM List as my extract folder. Click OK.

You’ll get prompted at this point; I couldn’t activate the control on this screen (but it still needs to be done). We need to make sure some services are running before we can activate the solution. Click Close.

Head back to the Share Point Central Administration, found at http://localhost:48835.

Click System Settings –> Manage Services on this server

Click Start beside “Share Point Foundation Sandboxed Code Service”. (I also accidentally started “Microsoft SharePoint Foundation Subscription Settings Service”, which is why that one is running too.)

Now to head back to our Share Point site http://localhost:39083/

Under Galleries click “Solutions”.

Click Solutions again, select crmlistcomponent, and then click “Activate” up top. Activate is no longer greyed out! Click Activate!

The solution has now been activated! Hurray!

There seems to be some confusion whether or not you need to run a power shell script to enable Activation of Share Point 2010 solutions (AllowHtcExtn). According to what I’ve read, you would need to run this if Share Point 2010 is running on a domain controller. I didn’t have to do this (and we’re on a domain controller), and I’ve yet to run into a problem with .htc stuff. Even in the Microsoft Dynamics CRM 2011 Readme it says:
“If you are using Microsoft SharePoint Server 2010 (On-Premises), you must add .htc extensions to the list of allowed file types:
a. Copy the AllowHtcExtn.ps1 script file to the server that is running Microsoft SharePoint Server 2010.
b. In the Windows PowerShell window or in the SharePoint Management Console, run the command: AllowHtcExtn.ps1 <site collection URL>.
Example: AllowHtcExtn.ps1 http://servername”

Some people say the script works for them, and some say that just using the method described in this post works.
The Share Point configuration is complete at this point. You probably want to take a snapshot and name it “After Share Point Configuration”. Let’s head over to our CRM server (localhost:85).

In CRM Click Settings –> Document Management –> Document Management Settings

Select the entities that you want to have documents enabled on. This will create a “Documents” area when you open an instance of the entity. I’ll just leave the defaults for now. At the bottom, punch in the Share Point site that you’ve created and click Next. This is the Share Point server we installed the list component on. You’re not allowed to use localhost:port; use the computer name:port instead, like below.

Don’t select the box, otherwise it will relate the files to those entities. Without checking the box you will end up with something like Site/EntityName/Record Name (which is what I want, especially if you’re using custom entities). Click Next.

If “Libraries are being created in the path”, click Next.

Everything should “Succeed”, Click Finish.

Let’s test this bad boy out now.

Create a new account called “Test”.

Click Save! Click “Documents” on the left side. You’ll get a prompt saying that the folder (Test) is being created under “Account”. Click OK.

Click Add.

Now you’ll probably get these errors! /crmgrid/scripts/DialogContainer.js and 403 FORBIDDEN! Depressing. The only real information I could find on this error wasn’t very clear, but I stumbled through it. It seems that CRM 2011 doesn’t enjoy being called localhost. Let’s fix these up.

The fix for this was to run inetmgr –> Click Microsoft Dynamics CRM –> click Stop

Click “Bindings…” on the right side. Click “Edit” on the items that show “localhost” and change them to my machine name: “win-b80icqrvluf”. This is so it has a “real” name to connect to.

Before:

After:

Now click “Start” on the right side.

Head back over to the CRM (http://win-b80icqrvluf:85/CRMTest/main.aspx) make sure to use the host name, as it might give you the error if you use localhost. Open your Test Account again.

Click Documents –> Add, you should now see this popup (it can take a while to load for the first time on the VM). If you continue to get the error, stop both CRM 2011 and Share Point 2010 servers and restart them. If that doesn’t work, try restarting the whole server.

Pick a file, and click OK.

The file should be uploaded to Share Point now.

Head over to Share Point at http://win-b80icqrvluf:39083 and click “All Site Content” or “Libraries”.

Click Account.

You can see that CRM has created a folder “Test” (for our record). It creates 1 folder per record. Click it to see the files associated to that record!!

The files associated to the record “Test” in Accounts.

Share Point and CRM have combined into a super awesome force of doom. But we’re still missing 1 core piece of functionality (due to not picking a port when we installed CRM).

 

 

10 Must-Have Visual Studio Productivity Add-Ins I use everyday and recommend to every . Net Developer

Visual Studio provides a rich extensibility model that developers at Microsoft and in the community have taken advantage of to provide a host of quality add-ins. Some add-ins contribute significant how-did-I-live-without-this functionality, while others just help you automate that small redundant task you constantly find yourself performing.
10 Must-Have Add-Ins

TestDriven.NET
GhostDoc
Smart Paster
CodeKeep
PInvoke.NET
VSWindowManager PowerToy
WSContractFirst
VSMouseBindings
CopySourceAsHTML
Cache Visualizer

In this article, I introduce you to some of the best Visual Studio add-ins available today that can be downloaded for free. I walk through using each of the add-ins, but because I am covering so many I only have room to introduce you to the basic functionality.
Each of these add-ins works with Visual Studio .NET 2003 and most of them already have versions available for Visual Studio 2005. If a Visual Studio 2005 version is not available as of the time of this writing, it should be shortly.

 

TestDriven.NET
Test-driven development is the practice of writing unit tests before you write code, and then writing the code to make those tests pass. By writing tests before you write code, you identify the exact behavior your code should exhibit and, as a bonus, at the end you have 100 percent test coverage, which makes extensive refactoring possible.
NUnit gives you the ability to write unit tests using a simple syntax and then execute those tests one by one or all together against your app. If you are using Visual Studio Team System, you have unit testing functionality built into the Visual Studio IDE. Before Visual Studio Team System, there was TestDriven.NET, an add-in that integrates NUnit directly into the Visual Studio IDE. If you are using a non-Team System version of Visual Studio 2005 or Visual Studio .NET 2003, it is, in my opinion, still the best solution available.
TestDriven.NET adds unit testing functionality directly to the Visual Studio IDE. Instead of writing a unit test, switching over to the NUnit GUI tool, running the test, then switching back to the IDE to code, and so on, you can do it all right in the IDE.

 

Figure 1 New Testing Options from TestDriven.NET 
After installing TestDriven.NET you will find a number of new menu items on the right-click context menu as shown in Figure 1. You can right-click directly on a unit test and run it. The results will be displayed in the output window as shown in Figure 2.

 

Figure 2 Output of a Unit Test 
While executing unit tests in the IDE is invaluable by itself, perhaps the best feature is that you can also quickly launch into the debugger by right-clicking on a test and selecting Test With | Debugger. This will launch the debugger and then execute your unit tests, hitting any breakpoints you have set in those tests.
In fact, it doesn’t even have to be a unit test for TestDriven.NET to execute it. You could just as easily test any public method that returns void. This means that if you are testing an old app and need to walk through some code, you can write a quick test and execute it right away.
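For example, a quick NUnit test like the sketch below (the Calculator class is a made-up example) can be run straight from the editor’s context menu:

using NUnit.Framework;

// A made-up class under test.
public class Calculator
{
    public int Add(int x, int y)
    {
        return x + y;
    }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_ReturnsSumOfOperands()
    {
        Calculator calculator = new Calculator();

        Assert.AreEqual(5, calculator.Add(2, 3));
    }
}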
TestDriven.NET is an essential add-in if you work with unit tests or practice test-driven development. (If you don’t already, you should seriously consider it.) TestDriven.NET was written by Jamie Cansdale and can be downloaded from http://www.testdriven.net.

 

GhostDoc
XML comments are invaluable tools when documenting your application. Using XML comments, you can mark up your code and then, using a tool like nDoc, you can generate help files or MSDN-like Web documentation based on those comments. The only problem with XML documentation is the time it takes to write it: you often end up writing similar statements over and over again. The goal of GhostDoc is to automate the tedious parts of writing XML comments by looking at the name of your class or method, as well as any parameters, and making an educated guess as to how the documentation should appear based on recommended naming conventions. This is not a replacement for writing thorough documentation of your business rules and providing examples, but it will automate the mindless part of your documentation generation.
For instance consider the method shown here:

private void SavePerson(Person person) { }

After installing GhostDoc, you can right-click on the method declaration and choose Document this. The following comments will then be added to your document:

/// <summary>
/// Saves the person.
/// </summary>
/// <param name="person">Person.</param>
private void SavePerson(Person person) { }

As you can see, GhostDoc has automatically generated a summary based on how the method was named and has also populated the parameter comments. Don’t stop here; you should add additional comments stating where the person is being saved to or perhaps give an example of creating and saving a person. Here is my comment after adding some additional information by hand:

/// <summary>
/// Saves a person using the configured persistence provider.
/// </summary>
/// <param name="person">The Person to be saved</param>
private void SavePerson(Person person) { }
Adding these extra comments is much easier since the basic, redundant portion is automatically generated by GhostDoc. GhostDoc also includes options that allow you to modify existing rules and add additional rules that determine what kind of comments should be generated.
GhostDoc was written by Roland Weigelt and can be downloaded from http://www.roland-weigelt.de/ghostdoc.

 

Smart Paster
Strings play a large role in most applications, whether they are comments being used to describe the behavior of the system, messages being sent to the user, or SQL statements that will be executed. One of the frustrating parts of working with strings is that they never seem to paste correctly into the IDE. When you are pasting comments, the strings might be too long or not aligned correctly, leaving you to spend time inserting line breaks, comment characters, and tabbing. When working with strings that will actually be concatenated, you have to do even more work, usually separating the parts of the string and inserting concatenation symbols or using a string builder.
The Smart Paster add-in helps to limit some of this by providing a number of commands on the right-click menu that let you paste a string from the clipboard into Visual Studio using a certain format. After installing Smart Paster, you will see the new paste options available on the right-click menu (see Figure 3).

 

Figure 3 String Pasting Options from Smart Paster 
For instance, you might have the following string detailing some of your business logic:

To update a person record, a user must be a member of the customer service group or the manager group. After the person has been updated, a letter needs to be generated to notify the customer of the information change.

You can copy and paste this into Visual Studio using the Paste As | Comment option, and you would get the following:

//To update a person record a user must be a member of the customer
//service group or the manager group. After the person has been updated
//a letter needs to be generated to notify the customer of the
//information change.
The correct comment characters and carriage returns are automatically inserted (you can configure at what length to insert a carriage return). If you were inserting this text without the help of Smart Paster, it would paste as one long line, forcing you to manually add all the line breaks and comment characters. As another example, let’s say you have the following error message that you need to insert values into at run time:

You do not have the correct permissions to perform <insert action>. You must be a member of the <insert group> to perform this action.

Using the Paste As | StringBuilder command, you can insert this string as a StringBuilder into Visual Studio. The results would look like this:

StringBuilder stringBuilder = new StringBuilder(134);
stringBuilder.AppendFormat(@"You do not have the correct permissions to ");
stringBuilder.AppendFormat(@"perform <insert action>. You must be a member of ");
stringBuilder.AppendFormat(@"the <insert group> to perform this action.");

It would then simply be a matter of modifying the code to replace the variable sections of the string:

StringBuilder stringBuilder = new StringBuilder(134);
stringBuilder.AppendFormat(@"You do not have the correct permissions to ");
stringBuilder.AppendFormat(@"perform {0}. You must be a member of ", action);
stringBuilder.AppendFormat(@"the {0} to perform this action.", group);

Smart Paster is a time-saving add-in that eliminates a lot of the busy work associated with working with strings in Visual Studio. It was written by Alex Papadimoulis.

 

CodeKeep
Throughout the process of software development, it is common to reuse small snippets of code. Perhaps you reuse an example of how to get an enum value from a string or a starting point on how to implement a certain pattern in your language of choice.
Visual Studio offers some built-in functionality for working with code snippets, but it assumes a couple of things. First, it assumes that you are going to store all of your snippets on your local machine, so if you switch machines or move jobs you have to remember to pack up your snippets and take them with you. Second, these snippets can only be viewed by you. There is no built-in mechanism for sharing snippets between users, groups, or the general public.
This is where CodeKeep comes to the rescue. CodeKeep is a Web application that provides a place for people to create and share snippets of code in any language. The true usefulness of CodeKeep is its Visual Studio add-in, which allows you to search quickly through the CodeKeep database, as well as submit your own snippets.
After installing CodeKeep, you can search the existing code snippets by selecting Tools | CodeKeep | Search, and then using the search screen shown in Figure 4.

 

Figure 4 Searching Code Snippets with CodeKeep 
From this screen you can view your own snippets or search all of the snippets that have been submitted to CodeKeep. When searching for snippets, you see all of the snippets that other users have submitted and marked as public (you can also mark code as private if you want to hide your bad practices). If you find the snippet you are looking for, you can view its details and then quickly copy it to the clipboard to insert into your code.
You can also quickly and easily add your own code snippets to CodeKeep by selecting the code you want to save, right-clicking, and then selecting Send to CodeKeep. This will open a new screen that allows you to wrap some metadata around your snippet, including comments, what language it is written in, and whether it should be private or public for all to see.
Whenever you write a piece of code and you can imagine needing to use it in the future, simply take a moment to submit it; this way, you won’t have to worry about managing your snippets or rewriting them in the future. Since CodeKeep stores all of your snippets on the server, they are centralized so you don’t have to worry about moving your code from system to system or job to job.
CodeKeep was written by Arcware’s Dave Donaldson and is available from http://www.codekeep.net.

 

PInvoke.NET
P/Invoke (Platform Invocation Services) is the mechanism for making unmanaged Win32 API calls within the .NET Framework. One of the hard parts of using P/Invoke is determining the method signature you need to use; this can often be an exercise in trial and error. Sending incorrect data types or values to an unmanaged API can often result in memory leaks or other unexpected results.
PInvoke.NET is a wiki that can be used to document the correct P/Invoke signatures to be used when calling unmanaged Win32 APIs. A wiki is a collaborative Web site that anyone can edit, which means there are thousands of signatures, examples, and notes about using P/Invoke. Since the wiki can be edited by anyone, you can contribute as well as make use of the information there.
While the wiki and the information stored there are extremely valuable, what makes them most valuable is the PInvoke.NET Visual Studio add-in. Once you have downloaded and installed the PInvoke.NET add-in, you will be able to search for signatures as well as submit new content from inside Visual Studio. Simply right-click on your code file and you will see two new context items: Insert PInvoke Signatures and Contribute PInvoke Signatures and Types.

Figure 5 Using PInvoke.NET 
When you choose Insert PInvoke Signatures, you’ll see the dialog box shown in Figure 5. Using this simple dialog box, you can search for the function you want to call. Optionally, you can include the module that this function is a part of. Now, a crucial part of all major applications is the ability to make the computer Beep. So I will search for the Beep function and see what shows up. The results can be seen in Figure 6.

Figure 6 Finding the Beep Function in PInvoke.NET 
The wiki suggests alternative managed APIs, letting you know that there is a new method, System.Console.Beep, in the .NET Framework 2.0.
There is also a link at the bottom of the dialog box that will take you to the corresponding page on the wiki for the Beep method. In this case, that page includes documentation on the various parameters that can be used with this method as well as some code examples on how to use it.
After selecting the signature you want to insert, click the Insert button and it will be placed into your code document. In this example, the following code would be automatically created for you:

[DllImport("kernel32.dll", SetLastError=true)] [return: MarshalAs(UnmanagedType.Bool)] static extern bool Beep( uint dwFreq, uint dwDuration);

You then simply need to write a call to this method and your computer will be beeping in no time.
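For example, a call like the following (frequency in hertz, then duration in milliseconds) plays a short tone using the signature inserted above; the values are arbitrary:

// 750 Hz tone for 300 milliseconds.
Beep(750, 300);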

The PInvoke.NET wiki and Visual Studio add-in take away a lot of the pain and research time sometimes involved when working with the Win32 API from managed code. The wiki can be accessed at http://www.pinvoke.net, and the add-in can be downloaded from the Helpful Tools link found in the bottom-left corner of the site.

 

VSWindowManager PowerToy
The Visual Studio IDE includes a huge number of different Windows, all of which are useful at different times. If you are like me, you have different window layouts that you like to use at various points in your dev work. When I am writing HTML, I like to hide the toolbox and the task list window. When I am designing forms, I want to display the toolbox and the task list. When I am writing code, I like to hide all the windows except for the task list. Having to constantly open, close, and move windows based on what I am doing can be both frustrating and time consuming.
Visual Studio includes the concept of window layouts. You may have noticed that when you start debugging, the windows will automatically go back to the layout they were in the last time you were debugging. This is because Visual Studio includes a normal and a debugging window layout.
Wouldn’t it be nice if there were additional layouts you could use for when you are coding versus designing? Well, that is exactly what VSWindowManager PowerToy does.
After installing VSWindowManager PowerToy, you will see some new options in the Window menu as shown in Figure 7.

 

Figure 7 VSWindowManager Layout Commands 
The Save Window Layout As menu provides commands that let you save the current layout of your windows. To start using this power toy, set up your windows the way you like them for design and then navigate to the Windows | Save Window Layout As | My Design Layout command. This will save the current layout. Do the same for your favorite coding layout (selecting My Coding Layout), and then for up to three different custom layouts.
VSWindowManager will automatically switch between the design and coding layouts depending on whether you are currently viewing a designer or a code file. You can also use the commands on the Apply Window Layout menu to choose from your currently saved layouts. When you select one of the layouts you have saved, VSWindowManager will automatically hide, show, and rearrange windows so they are in the exact same layout as before.
VSWindowManager PowerToy is very simple, but can save you a lot of time and frustration. VSWindowManager is available from vswindowmanager.codeplex.com/.

 

WSContractFirst
Visual Studio makes creating Web services deceptively easy. You simply create an .asmx file, add some code, and you are ready to go. ASP.NET can then create a Web Services Description Language (WSDL) file used to describe behavior and message patterns for your Web service.
There are a couple problems with letting ASP.NET generate this file for you. The main issue is that you are no longer in control of the contract you are creating for your Web service. This is where contract-first development comes to the rescue. Contract-first development, also called contract-driven development, is the practice of writing the contract (the WSDL file) for your Web service before you actually write the Web service itself. By writing your own WSDL file, you have complete control over how your Web service will be seen and used, including the interface and data structures that are exposed.
Writing a WSDL document is not a lot of fun. It’s kind of like writing a legal contract, but using lots of XML. This is where the WSContractFirst add-in comes into play. WSContractFirst makes it easier to write your WSDL file, and will generate client-side and server-side code for you, based on that WSDL file. You get the best of both worlds: control over your contract and the rapid development you are used to from Visual Studio style services development.
The first step to using WSContractFirst is to create your XML schema files. These files will define the message or messages that will be used with your Web services. Visual Studio provides an easy-to-use GUI interface to define your schemas; this is particularly helpful since this is one of the key steps of the Web service development process. Once you have defined your schemas you simply need to right-click on one of them and choose Create WSDL Interface Description. This will launch the Generate WSDL Wizard, the first step of which is shown in Figure 8.

Figure 8 Building a WSDL File with WSContractFirst  
Step 1 collects the basics about your service including its name, namespace, and documentation. Step 2 allows you to specify the .xsd files you want to include in your service. The schema you selected to launch this wizard is included by default. Step 3 allows you to specify the operations of your service. You can name the operation as well as specify whether it is a one-way or request/response operation. Step 4 gives you the opportunity to enter the details for the operations and messages. Step 5 allows you to specify whether a <service> element should be created and whether or not to launch the code generation dialog automatically when the wizard is done. Step 6 lets you specify alternative .xsd paths. Once the wizard is complete, your new WSDL file is added to your project.
Now that you have your WSDL file there are a couple more things WSContractFirst, can do for you. To launch the code generation portion of WSContractFirst, you simply need to right-click on your WSDL file and select Generate Web Service Code. This will launch the code generation dialog box shown in Figure 9.

Figure 9 WSContractFirst Code Generation Options 
You can choose to generate a client-side proxy or a service-side stub, as well as configure some other options about the code and what features it should include. Using these code generation features helps speed up development tremendously.
If you are developing Web services using Visual Studio you should definitely look into WSContractFirst and contract-first development. WSContractFirst was written by Thinktecture’s Christian Weyer.

 

VSMouseBindings
Your mouse probably has five buttons, so why are you only using three of them? The VSMouseBindings power toy provides an easy to use interface that lets you assign each of your mouse buttons to a Visual Studio command.
VSMouseBindings makes extensive use of the command pattern. You can bind mouse buttons to various commands, such as open a new file, copy the selected text to the clipboard, or just about anything else you can do in Visual Studio. After installing VSMouseBindings you will see a new section in the Options dialog box called VsMouseBindings. The interface can be seen in Figure 10.

Figure 10 VSMouseBindings Options for Visual Studio 
As you can see, you can select a command for each of the main buttons. You probably shouldn’t mess around with the left and right mouse buttons, though, as their normal functionality is pretty useful. The back and forward buttons, however, are begging to be assigned to different commands. If you enjoy having functionality similar to a browser’s back and forward buttons, then you can set the buttons to the Navigate.Backward and Navigate.Forward commands in Visual Studio.
The Use this mouse shortcut in menu lets you set the scope of your settings. This means you can configure different settings when you are in the HTML designer as opposed to when you are working in the source editor.
VSMouseBindings is available from archive.msdn.microsoft.com/VSMouseBindings.

 

CopySourceAsHTML
Code is exponentially more readable when certain parts of that code are differentiated from the rest by using a different color text. Reading code in Visual Studio is generally much easier than trying to read code in an editor like Notepad.
Chances are you may have your own blog by now, or at least have spent some time reading them. Normally, when you try to post a cool code snippet to your blog it ends up being plain old text, which isn’t the easiest thing to read. This is where the CopySourceAsHTML add-in comes in to play. This add-in allows you to copy code as HTML, meaning you can easily post it to your blog or Web site and retain the coloring applied through Visual Studio.
After installing the CopySourceAsHTML add-in, simply select the code you want to copy and then select the Copy Source as HTML command from the right-click menu. After selecting this option you will see the dialog box shown in Figure 11.

Figure 11 Options for CopySourceAsHTML 

From here you can choose what kind of HTML view you want to create. This can include line numbers, specific tab sizes, and many other settings. After clicking OK, the HTML is saved to the clipboard. For instance, suppose you were starting with the following code snippet inside Visual Studio:

private long Add(int d, int d2) { return (long) d + d2; }
Figure 12 HTML Formatted Code  
After you select Copy As HTML and configure the HTML to include line numbers, this code will look like Figure 12 in the browser. Anything that makes it easier to share and understand code benefits all of us as it means more people will go to the trouble of sharing knowledge and learning from each other.
CopySourceAsHTML was written by Colin Coller and can be downloaded from copysourceashtml.codeplex.com/.

 

Cache Visualizer
Visual Studio 2005 includes a new debugging feature called visualizers, which can be used to create a human-readable view of data for use during the debugging process. Visual Studio 2005 includes a number of debugger visualizers by default, most notably the DataSet visualizer, which provides a tabular interface to view and edit the data inside a DataSet. While the default visualizers are very valuable, perhaps the best part of this new interface is that it is completely extensible. With just a little bit of work you can write your own visualizers to make debugging that much easier.
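To give an idea of what that little bit of work involves, the sketch below shows a bare-bones visualizer for strings. The type name and messages are made up, but the base class, attribute and interfaces are the standard ones from the Microsoft.VisualStudio.DebuggerVisualizers assembly:

using System.Diagnostics;
using System.Windows.Forms;
using Microsoft.VisualStudio.DebuggerVisualizers;

// Registers the visualizer so a magnifying glass appears for string values.
[assembly: DebuggerVisualizer(typeof(Samples.StringVisualizer),
    typeof(VisualizerObjectSource),
    Target = typeof(string),
    Description = "Sample String Visualizer")]

namespace Samples
{
    public class StringVisualizer : DialogDebuggerVisualizer
    {
        protected override void Show(IDialogVisualizerService windowService,
                                     IVisualizerObjectProvider objectProvider)
        {
            // Pull the value being debugged out of the debuggee and display it.
            string value = (string)objectProvider.GetObject();
            MessageBox.Show(value, "Sample String Visualizer");
        }
    }
}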
While a lot of users will write visualizers for their own custom complex types, some developers are already posting visualizers for parts of the Framework. I am going to look at one of the community-built visualizers that is already available and how it can be used to make debugging much easier.
The ASP.NET Cache represents a collection of objects that are being stored for later use. Each object has some settings wrapped around it, such as how long it will be cached for or any cache dependencies. There is no easy way while debugging to get an idea of what is in the cache, how long it will be there, or what it is watching. Brett Johnson saw that gap and wrote Cache Visualizer to examine the ASP.NET cache.
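For reference, the kind of entries the visualizer displays typically get into the cache through code along these lines; the key, file path and timeout below are made-up values:

using System;
using System.Web;
using System.Web.Caching;

public static class CatalogCache
{
    // Caches a value for 10 minutes and invalidates it early if the XML file changes.
    public static void StoreCatalog(object catalogData)
    {
        HttpContext context = HttpContext.Current;

        context.Cache.Insert(
            "ProductCatalog",                                                   // made-up cache key
            catalogData,
            new CacheDependency(context.Server.MapPath("~/App_Data/catalog.xml")),
            DateTime.Now.AddMinutes(10),                                        // absolute expiration
            Cache.NoSlidingExpiration);
    }
}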
Once you have downloaded and installed the visualizer you will see a new icon appear next to the cache object in your debug windows, as shown in Figure 13.

Figure 13 Selecting Cache Visualizer While Debugging 
When you click on the magnifying glass to use the Cache Visualizer, a dialog box appears that includes information about all of the objects currently stored in the ASP.NET cache, as you can see in Figure 14.

Figure 14 Cache Visualizer Shows Objects in the ASP.NET Cache 
Under Public Cache Entries, you can see the entries that I have added to the cache. The entries under Private Cache Entries are ones added by ASP.NET. Note that you can see the expiration information as well as the file dependency for the cache entry.
The Cache Visualizer is a great tool when you are working with ASP.NET. It is also representative of some of the great community-developed visualizers we will see in the future.

 

Wrapping It Up
While this article has been dedicated to freely available add-ins, there are also a host of add-ins that can be purchased for a reasonable price. I encourage you to check out these other options, as in some cases they can add some tremendous functionality to the IDE. This article has been a quick tour of some of the best freely available add-ins for Visual Studio. Each of these add-ins may only do a small thing, but together they help to increase your productivity and enable you to write better code.

Design Pattern Automation

Software development projects are becoming bigger and more complex every day. The more complex a project, the more likely it is that the cost of developing and maintaining the software will far outweigh the hardware cost.

There’s a super-linear relationship between the size of software and the cost of developing and maintaining it. After all, large and complex software requires good engineers to develop and maintain it and good engineers are hard to come by and expensive to keep around.

 

Despite the high total cost of ownership per line of code, a lot of boilerplate code is still written, much of which could be avoided with smarter compilers. Indeed, most boilerplate code stems from the repetitive implementation of design patterns. Some of these design patterns are so well understood that they could be implemented automatically if we could teach them to compilers.

Implementing the Observer pattern

Take, for instance, the Observer pattern. This design pattern was identified as early as 1995 and became the basis of the successful Model-View-Controller architecture. Elements of this pattern were implemented in the first versions of Java (1995, the Observable interface) and .NET (2001, the INotifyPropertyChanged interface). Although the interfaces are a part of the framework, they still need to be implemented manually by developers.

The INotifyPropertyChanged interface simply contains one event named PropertyChanged, which needs to be signaled whenever a property of the object is set to a different value.

Let’s have a look at a simple example in .NET:

public class Person : INotifyPropertyChanged
{
    string firstName, lastName;

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        if (this.PropertyChanged != null)
        {
            this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    public string FirstName
    {
        get { return this.firstName; }
        set
        {
            this.firstName = value;
            this.OnPropertyChanged("FirstName");
            this.OnPropertyChanged("FullName");
        }
    }

    public string LastName
    {
        get { return this.lastName; }
        set
        {
            this.lastName = value;
            this.OnPropertyChanged("LastName");
            this.OnPropertyChanged("FullName");
        }
    }

    public string FullName
    {
        get { return string.Format("{0} {1}", this.firstName, this.lastName); }
    }
}

Properties ultimately depend on a set of fields, and we have to raise PropertyChanged for a property whenever we change a field that affects it.

Shouldn’t it be possible for the compiler to do this work automatically for us? The long answer is that detecting dependencies between fields and properties is a daunting task if we consider all the corner cases that can happen: properties can depend on fields of other objects, they can call other methods, or, even worse, they can call virtual methods or delegates unknown to the compiler. So there is no general solution to this problem, at least if we expect compilation times in seconds or minutes rather than hours or days. However, in real life a large share of properties are simple enough to be fully understood by a compiler. So the short answer is yes, a compiler could generate notification code for more than 90% of all properties in a typical application.

In practice, the same class could be implemented as follows:

[NotifyPropertyChanged]
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string FullName { get { return string.Format("{0} {1}", this.FirstName, this.LastName); } }
}

This code tells the compiler what to do (implement INotifyPropertyChanged) and not how to do it.

Boilerplate Code is an Anti-Pattern

The Observer (INotifyPropertyChanged) pattern is just one example of a pattern that usually causes a lot of boilerplate code in large applications. A typical code base is full of patterns that generate boilerplate. Even if they are not always recognized as “official” design patterns, they are patterns because they repeat massively throughout a code base. The most common causes of code repetition are:

  • Tracing, logging
  • Precondition and invariant checking
  • Authorization and audit
  • Locking and thread dispatching
  • Caching
  • Change tracking (for undo/redo)
  • Transaction handling
  • Exception handling

These features are difficult to encapsulate using normal OO techniques, which is why they are often implemented with boilerplate code. Is that such a bad thing?

Yes.

Addressing cross-cutting concerns with boilerplate code leads to violations of the fundamental principles of good software engineering:

  • The Single Responsibility Principle is violated when multiple concerns are being implemented in the same method, such as Validation, Security, INotifyPropertyChanged, and Undo/Redo in a single property setter.
  • The Open/Closed Principle, which states that software entities should be open for extension, but closed for modification, is best respected when new features can be added without modifying the original source code.
  • The Don’t Repeat Yourself principle abhors code repetition coming out of manual implementation of design patterns.
  • The Loose Coupling principle is infringed when a pattern is implemented manually because it is difficult to alter the implementation of this pattern. Note that coupling can occur not only between two components, but also between a component and a conceptual design. Trading a library for another is usually easy if they share the same conceptual design, but adopting a different design requires many more modifications of source code.

Additionally, boilerplate renders your code:

  • Harder to read and reason about when trying to understand what it is doing to address the functional requirement. This added layer of complexity has a huge bearing on the cost of maintenance, considering that software maintenance consists of reading code 75% of the time!
  • Larger, which means not only lower productivity, but also higher cost of developing and maintaining the software, not counting a higher risk of introducing bugs.
  • Difficult to refactor and change. Changing a piece of boilerplate (fixing a bug, perhaps) requires changing all the places where the boilerplate code has been applied. How do you even accurately identify where the boilerplate is used throughout a codebase which potentially spans many solutions and/or repositories? Find-and-replace…?

If left unchecked, boilerplate code has the nasty habit of growing around your code like a vine, taking over more space each time it is applied to a new method, until eventually you end up with a large codebase almost entirely covered by boilerplate. In one of my previous teams, a simple data access layer class had over a thousand lines of code, of which 90% was boilerplate to handle different types of SQL exceptions and retries.
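To make that anecdote concrete, here is a hypothetical sketch (not the actual class described above; the connection string, table, and retry policy are invented for illustration) of what such boilerplate-heavy data access code tends to look like, with one line of real work buried in retry and exception-handling noise that gets copied into every method:

using System;
using System.Data.SqlClient;
using System.Threading;

public class CustomerRepository
{
    private readonly string connectionString;

    public CustomerRepository(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public string GetCustomerNameById(int id)
    {
        const int maxRetries = 3;
        for (int attempt = 1; attempt <= maxRetries; attempt++)
        {
            try
            {
                using (var connection = new SqlConnection(this.connectionString))
                using (var command = new SqlCommand(
                    "SELECT Name FROM Customers WHERE Id = @id", connection))
                {
                    command.Parameters.AddWithValue("@id", id);
                    connection.Open();

                    // The single line of actual business logic.
                    return (string)command.ExecuteScalar();
                }
            }
            catch (SqlException ex)
            {
                // Boilerplate: log, back off and retry; rethrow on the last attempt.
                if (attempt == maxRetries)
                {
                    throw;
                }
                Console.Error.WriteLine("SQL error on attempt {0}: {1}", attempt, ex.Message);
                Thread.Sleep(100 * attempt);
            }
        }

        // Unreachable, but required to satisfy the compiler.
        throw new InvalidOperationException();
    }
}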

I hope by now you see why using boilerplate code is a terrible way to implement patterns. It is actually an anti-pattern to be avoided because it leads to unnecessary complexity, bugs, expensive maintenance, loss of productivity and ultimately, higher software cost.

Design Pattern Automation and Compiler Extensions

In many cases, the struggle to make common boilerplate code reusable stems from the lack of native meta-programming support in mainstream statically typed languages such as C# and Java.

The compiler is in possession of an awful lot of information about our code that is normally outside our reach. Wouldn’t it be nice if we could benefit from this information and write compiler extensions to help with our design patterns?

A smarter compiler would allow for:

  1. Build-time program transformation: to allow us to add features whilst preserving the code semantics and keeping the complexity and number of lines of code in check, so we can automatically implement parts of a design pattern that can be automated;
  2. Static code validation: for build-time safety to ensure we have used the design pattern correctly or to check parts of a pattern that cannot be automated have been implemented according to a set of predefined rules.

Example: ‘using’ and ‘lock’ keywords in C#

If you want proof that design patterns can be supported directly by the compiler, there is no need to look further than the using and lock keywords. At first sight, they are purely redundant in the language. But the designers of the language recognized their importance and created specific keywords for them.

Let’s have a look at the using keyword. The keyword is actually a part of the larger Disposable Pattern, composed of the following participants:

  • Resource Objects are objects that consume an external resource, such as a database connection.
  • Resource Consumers are instruction blocks or objects that consume Resource Objects during a given lifetime.

The Disposable Pattern is ruled by the following principles:

  1. Resource Objects must implement IDisposable.
  2. Implementation of IDisposable.Dispose must be idempotent, i.e. may be safely called several times.
  3. Resource Objects must have a finalizer (called destructor in C++).
  4. Implementation of IDisposable.Dispose must call GC.SuppressFinalize.
  5. Generally, objects that store Resource Objects into their state (field) are also Resource Objects, and children Resource Objects should be disposed by the parent.
  6. Instruction blocks that allocate and consume a Resource Object should be enclosed with the using keyword (unless the reference to the resource is stored in the object state, see previous point).

As you can see, the Disposable Pattern is richer than it appears at first sight. How is this pattern being automated and enforced?

  • The core .NET library provides the IDisposable interface.
  • The C# compiler provides the using keyword, which automates generation of some source code (a try/finally block).
  • FxCop can enforce a rule that says any disposable class also implements a finalizer, and the Dispose method calls GC.SuppressFinalize.

Therefore, the Disposable Pattern is a perfect example of a design pattern directly supported by the .NET platform.
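To make the second point above concrete, here is a minimal sketch of roughly what the C# compiler generates for a using block (simplified; the FileStream, the file name, and the Process method are placeholders for illustration):

using System;
using System.IO;

static class UsingExpansionSketch
{
    static void Process(Stream stream)
    {
        // Placeholder for real work against the resource.
    }

    // What the developer writes.
    static void WhatYouWrite()
    {
        using (var stream = new FileStream("data.bin", FileMode.Open))
        {
            Process(stream);
        }
    }

    // Roughly what the compiler generates for the block above (simplified).
    static void WhatTheCompilerGenerates()
    {
        FileStream stream = new FileStream("data.bin", FileMode.Open);
        try
        {
            Process(stream);
        }
        finally
        {
            if (stream != null)
            {
                ((IDisposable)stream).Dispose();
            }
        }
    }
}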

But what about patterns not intrinsically supported? They can be implemented using a combination of class libraries and compiler extensions. Our next example also comes from Microsoft.

Example: Code Contracts

Checking preconditions (and optionally postconditions and invariants) has long been recognized as a best practice to prevent defects in one component causing symptoms in another component. The idea is:

  • every component (typically every class) should be designed as a “cell”;
  • every cell is responsible for its own health;
  • therefore, every cell should check any input it receives from other cells.

Precondition checking can be considered a design pattern because it is a repeatable solution to a recurring problem.

Microsoft Code Contracts (http://msdn.microsoft.com/en-us/devlabs/dd491992.aspx) is a perfect example of design pattern automation. Based on plain-old C# or Visual Basic, it gives you an API for expressing validation rules in the form of pre-conditions, post-conditions, and object invariants. However, this API is not just a class library. It translates into build-time transformation and validation of your program.

I won’t delve into too much detail on Code Contracts; simply put, it allows you to specify validation rules in code which can be checked at build time as well as at run time. For example:

public Book GetBookById(Guid id)
{
Contract.Requires(id != Guid.Empty);
return Dal.Get<Book>(id);
}

public Author GetAuthorById(Guid id)
{
Contract.Requires(id != Guid.Empty);

return Dal.Get<Author>(id);
}
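Preconditions are only one part of the API. For completeness, here is a small sketch (not taken from the sample above; BookRepository and the Book type with an Id property are invented for illustration) showing a postcondition and an object invariant as well:

using System;
using System.Collections.Generic;
using System.Diagnostics.Contracts;
using System.Linq;

public class Book
{
    public Guid Id { get; set; }
}

public class BookRepository
{
    private readonly List<Book> books = new List<Book>();

    public Book GetRequiredBook(Guid id)
    {
        Contract.Requires(id != Guid.Empty);                 // precondition
        Contract.Ensures(Contract.Result<Book>() != null);   // postcondition
        return this.books.Single(b => b.Id == id);
    }

    [ContractInvariantMethod]
    private void ObjectInvariant()
    {
        Contract.Invariant(this.books != null);              // object invariant
    }
}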

Its binary rewriter can (based on your configuration) rewrite your built assembly and inject additional code to validate the various conditions that you have specified. If you inspect the transformed code generated by the binary rewriter, you will see something along the lines of the following:

public Book GetBookById(Guid id)
{
    if (__ContractsRuntime.insideContractEvaluation <= 4)
    {
        try
        {
            ++__ContractsRuntime.insideContractEvaluation;
            __ContractsRuntime.Requires(id != Guid.Empty, (string)null, "id != Guid.Empty");
        }
        finally
        {
            --__ContractsRuntime.insideContractEvaluation;
        }
    }
    return Dal.Get<Program.Book>(id);
}

public Author GetAuthorById(Guid id)
{
    if (__ContractsRuntime.insideContractEvaluation <= 4)
    {
        try
        {
            ++__ContractsRuntime.insideContractEvaluation;
            __ContractsRuntime.Requires(id != Guid.Empty, (string)null, "id != Guid.Empty");
        }
        finally
        {
            --__ContractsRuntime.insideContractEvaluation;
        }
    }
    return Dal.Get<Program.Author>(id);
}

For more information on Microsoft Code Contracts, please read Jon Skeet’s excellent InfoQ article here (http://www.infoq.com/articles/code-contracts-csharp).

Whilst compiler extensions such as Code Contracts are great, officially supported extensions usually take years to develop, mature, and stabilize. There are so many different domains, each with its own set of problems, that it is impossible for official extensions to cover them all.

What we need is a generic framework to help automate and enforce design patterns in a disciplined way so we are able to tackle domain-specific problems effectively ourselves.

Generic Framework to Automate and Enforce Design Patterns

It may be tempting to see dynamic languages, open compilers (such as Roslyn), or re-compilers (such as Cecil) as solutions because they expose the very details of the abstract syntax tree. However, these technologies operate at too low a level of abstraction, making it very complex to implement anything but the simplest transformations.

What we need is a high-level framework for compiler extension, based on the following principles:

1. Provide a set of transformation primitives, for instance:

  • intercepting method calls;
  • executing code before and after method execution;
  • intercepting access to fields, properties, or events;
  • introducing interfaces, methods, properties, or events to an existing class.

2. Provide a way to express where primitives should be applied: it’s good to tell the compiler extension you want to intercept some methods, but it’s even better if we can specify which methods should be intercepted!

3. Primitives must be safely composable

It’s natural to want to be able to apply multiple transformations to the same location(s) in our code, so the framework should give us the ability to compose transformations.

When you’re able to apply multiple transformations simultaneously, some transformations might need to occur in a specific order relative to others. Therefore the ordering of transformations needs to follow a well-defined convention, while still allowing us to override the default ordering where appropriate.

4. Semantics of enhanced code should not be affected

The transformation mechanism should be unobtrusive and leave the original code unaltered as much as possible whilst at the same time providing capabilities to validate the transformations statically. The framework should not make it too easy to “break” the intent of the source code.

5. Advanced reflection and validation abilities

By definition, a design pattern contains rules defining how it should be implemented. For instance, a locking design pattern may define that instance fields can only be accessed from instance methods of the same object. The framework must offer a mechanism to query the methods accessing a given field, and a way to emit clean build-time errors.

Aspect-Oriented Programming

Aspect-Oriented Programming (AOP) is a programming paradigm that aims to increase modularity by allowing the separation of concerns.

An aspect is a special kind of class containing code transformations (called advice), code matching rules (barbarically called pointcuts), and code validation rules. Design patterns are typically implemented by one or several aspects. There are several ways to apply aspects to code, and they depend greatly on each AOP framework. Custom attributes (annotations in Java) are a convenient way to add aspects to hand-picked elements of code. More complex pointcuts can be expressed declaratively using XML (e.g. Microsoft Policy Injection Application Block) or a Domain-Specific Language (e.g. AspectJ or Spring), or programmatically using reflection (e.g. LINQ over System.Reflection with PostSharp).

The weaving process combines advice with the original source code at the specified locations (no less barbarically called join points). It has access to metadata about the original source code so that, for compiled languages such as C# or Java, the static weaver can perform static analysis to ensure the validity of the advice in relation to the pointcuts where it is applied.

Although aspect-oriented programming and design patterns have been independently conceptualized, AOP is an excellent solution to those who seek to automate design patterns or enforce design rules. Unlike low-level metaprogramming, AOP has been designed according to the principles cited above so anyone, and not only compiler specialists, can implement design patterns.

AOP is a programming paradigm and not a technology. As such, it can be implemented using different approaches. AspectJ, the leading AOP framework for Java, is now implemented directly in the Eclipse Java compiler. In .NET, where compilers are not open-source, AOP is best implemented as a re-compiler, transforming the output of the C# or Visual Basic compiler. The leading tool in .NET is PostSharp (see below). Alternatively, a limited subset of AOP can be achieved using dynamic proxies and service containers, and most dependency injection frameworks are able to offer at least method interception aspects.

Example: Custom Design Patterns with PostSharp

PostSharp is a development tool for the automation and enforcement of design patterns in Microsoft .NET and features the most complete AOP framework for .NET.

To avoid turning this article into a PostSharp tutorial, let’s take a very simple pattern: dispatching of method execution back and forth between a foreground (UI) thread and a background thread. This pattern can be implemented using two simple aspects: one that dispatches a method to the background thread, and another that dispatches it to the foreground thread. Both aspects can be compiled by the free PostSharp Express. Let’s look at the first aspect: BackgroundThreadAttribute.

The generative part of the pattern is simple: we just need to create a Task that executes that method, and schedule execution of that Task.

[Serializable]
public sealed class BackgroundThreadAttribute : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        Task.Run(args.Proceed);
    }
}

The MethodInterceptionArgs class contains information about the context in which the method is invoked, such as the arguments and the return value. With this information, you will be able to invoke the original method, cache its return value, log its input arguments, or just about anything that’s required for your use case.
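For illustration only, here is a sketch of such an aspect built on the same MethodInterceptionAspect base class used above; writing to the console is just a stand-in for a real logging back-end, and the aspect name is invented:

using System;
using PostSharp.Aspects;

[Serializable]
public sealed class LogMethodAttribute : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        Console.WriteLine("Entering {0} with {1} argument(s).",
            args.Method.Name, args.Arguments.Count);

        // Invoke the original (intercepted) method.
        args.Proceed();

        Console.WriteLine("{0} returned {1}.", args.Method.Name, args.ReturnValue);
    }
}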

For the validation part of the pattern, we would like to avoid having the custom attribute applied to methods that have a return value or a parameter passed by reference. If this happens, we would like to emit a build-time error. Therefore, we have to implement the CompileTimeValidate method in our BackgroundThreadAttribute class:

// Check that the method returns 'void', has no out/ref argument.
public override bool CompileTimeValidate( MethodBase method ) {

MethodInfo methodInfo = (MethodInfo) method;

if ( methodInfo.ReturnType != typeof(void) ||
methodInfo.GetParameters().Any( p => p.ParameterType.IsByRef ) )
{
ThreadingMessageSource.Instance.Write( method, SeverityType.Error, "THR006",
method.DeclaringType.Name, method.Name );
return false;
}

return true;
}

The ForegroundThreadAttribute would look similar, using the Dispatcher object in WPF or the BeginInvoke method in WinForms.
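As a sketch of what the WPF flavour might look like (this assumes a running WPF Application whose Dispatcher marshals calls to the UI thread; it is an illustration, not PostSharp’s shipped implementation):

using System;
using System.Windows;
using PostSharp.Aspects;

[Serializable]
public sealed class ForegroundThreadAttribute : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        // Dispatch the intercepted method to the WPF UI thread.
        Application.Current.Dispatcher.BeginInvoke(new Action(args.Proceed));
    }
}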

The above aspects can be applied just like any other attribute, for example:

[BackgroundThread]
private void ReadFile(string fileName)
{
    DisplayText(File.ReadAllText(fileName));
}

[ForegroundThread]
private void DisplayText(string content)
{
    this.textBox.Text = content;
}

The resulting source code is much cleaner than what we would get by directly using tasks and dispatchers.

One may argue that C# 5.0 addresses the issue better with the async and await keywords. This is correct, and it is a good example of the C# team identifying a recurring problem and deciding to address it with a design pattern implemented directly in the compiler and in the core class libraries. While the .NET developer community had to wait until 2012 for this solution, PostSharp offered one as early as 2006.
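For comparison, here is a sketch of the same read-then-display flow written with async/await (System.IO and System.Threading.Tasks are assumed to be imported; wrapping File.ReadAllText in Task.Run stands in for a truly asynchronous file API, and textBox is the same UI control as in the example above):

private async void ReadFileAsync(string fileName)
{
    // Read the file on a thread-pool thread...
    string content = await Task.Run(() => File.ReadAllText(fileName));

    // ...and resume here on the UI thread after the await (captured context).
    this.textBox.Text = content;
}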

How long must the .NET community wait for solutions to other common design patterns, for instance INotifyPropertyChanged? And what about design patterns that are specific to your company’s application framework?

Smarter compilers would allow you to implement your own design patterns, so you would not have to rely on the compiler vendor to improve the productivity of your team.

Downsides of AOP

I hope by now you are convinced that AOP is a viable solution to automate design patterns and enforce good design, but it’s worth bearing in mind that there are several downsides too:

1. Lack of staff preparation

As a paradigm, AOP is not taught in undergraduate programs, and it is rarely touched on at the master’s level. This lack of education has contributed to a lack of general awareness of AOP in the developer community.

Despite being 20 years old, AOP is misperceived as a ‘new’ paradigm, which often proves to be the stumbling block for adoption for all but the most adventurous development teams.

Design patterns are almost the same age, but the idea that design patterns can be automated and validated is recent. We cited some meaningful precedents in this article involving the C# compiler, the .NET class library, and Visual Studio Code Analysis (FxCop), but these precedents have not yet grown into a general call for design pattern automation.

2. Surprise factor

Because staff and students alike are not well prepared, there can be an element of surprise when they encounter AOP, because the application has additional behaviors that are not directly visible in the source code. Note: what is surprising is the intended effect of AOP, namely that the compiler is doing more than usual, not any side effect.

There can also be some surprise of an unintended effect, when a bug in the use of an aspect (or in a pointcut) causes the transformation to be applied to unexpected classes and methods. Debugging such errors can be subtle, especially if the developer is not aware that aspects are being applied to the project.

These surprise factors can be addressed by:

  • IDE integration, which helps to visualize (a) which additional features have been applied to the source displayed in the editor and (b) to which elements of code a given aspect has been applied. At time of writing only two AOP frameworks provide correct IDE integration: AspectJ (with the AJDT plug-in for Eclipse) and PostSharp (for Visual Studio).
  • Unit testing by the developer – aspects, as well as the fact that aspects have been applied properly, must be unit tested as any other source code artifact.
  • Not relying on naming conventions when applying aspects to code, but instead relying on structural properties of the code such as type inheritance or custom attributes. Note that this debate is not unique to AOP: convention-based programming has been recently gaining momentum, although it is also subject to surprises.

3. Politics

Use of design pattern automation is generally a politically sensitive issue because it also addresses separation of concerns within a team. Typically, senior developers will select design patterns and implement aspects, and junior developers will use them. Senior developers will write validation rules to ensure hand-written code respects the architecture. The fact that junior developers don’t need to understand the whole code base is actually the intended effect.

This argument is typically delicate to tackle because it takes the point of view of a senior manager and may hurt the pride of junior developers.

Ready-Made Design Pattern Implementation with PostSharp Pattern Libraries

As we’ve seen with the Disposable Pattern, even seemingly simple design patterns can actually require complex code transformation or validation. Some of these transformations and validations are complex but still possible to implement automatically. Others can be too complex for automatic processing and must be done manually.

Fortunately, there are also simple design patterns (exception handling, transaction handling, and security, for example) that can be automated easily by anyone with an AOP framework.

After many years of market experience, the PostSharp team realized that most customers were implementing the same aspects over and over again, and began to provide highly sophisticated and optimized ready-made implementations of the most common design patterns.

PostSharp currently provides ready-made implementations for the following design patterns:

  • Multithreading: reader-writer-synchronized threading model, actor threading model, thread-exclusive threading model, thread dispatching;
  • Diagnostics: high-performance and detailed logging to a variety of back-ends including NLog and Log4Net;
  • INotifyPropertyChanged: including support for composite properties and dependencies on other objects;
  • Contracts: validation of parameters, fields, and properties.

Now, with ready-made implementations of design patterns, teams can start enjoying the benefits of AOP without learning AOP.

Summary

So-called high-level languages such as Java and C# still force developers to write code at too low a level of abstraction. Because of the limitations of mainstream compilers, developers are forced to write a lot of boilerplate code, adding to the cost of developing and maintaining applications. Boilerplate stems from the massive implementation of patterns by hand, in what may be the largest use of copy-paste inheritance in the industry.

The inability to automate design pattern implementation probably costs billions to the software industry, not even counting the opportunity cost of having qualified software engineers spending their time on infrastructure issues instead of adding business value.

However, a large amount of boilerplate could be removed if we had smarter compilers to allow us to automate implementation of the most common patterns. Hopefully, future language designers will understand design patterns are first-class citizens of modern application development, and should have appropriate support in the compiler.

But actually, there is no need to wait for new compilers. They already exist, and are mature. Aspect-oriented programming was specifically designed to address the issue of boilerplate code. Both AspectJ and PostSharp are mature implementations of these concepts, and are used by the largest companies in the world. And both PostSharp and Spring Roo provide ready-made implementations of the most common patterns. As always, early adopters can get productivity gains several years before the masses follow.

NEW “Filter My Lists” Web Part now available + FREE Metro UI Master Page when ordering

“Filter My Lists” Web Part

Saves you time with optimal performance

Find what you are looking for with a few clicks, even in cluttered sites and lists with masses of items and documents.

Find exactly what you need and stop wasting your time browsing SharePoint.
Filter the content of multiple lists and libraries in a single step.

Combine search and metadata filters

In a single panel combine item, document and attachment searches with metadata keyword searches and managed metadata filters.

Select multiple filter values from drop-down lists or alternatively use the keyword search of metadata fields with the help of wildcard characters and logical operators.

Export filtered views to Excel

Export filtered views and data to Excel. A print view enables you to print your results in a clear printable format with a single click.

Keep views clear and concise

Provides a complete set of filters without cluttering list views and keeps your list views clear, concise and speedy. Enables you to filter SharePoint using columns which aren’t visible in list views.

Refine filters and save them for future use, whether private, to share with others or to use as default filters.

FREE Metro Style UI Master Page

 


Modern UI Master Page and Styles for SharePoint 2010.

This will give the Metro/Modern UI styling of SharePoint 2013 to your SharePoint 2010 team sites.

Features include:
– Quick launch styling
– Global navigation and drop-down styling
– Search box styling and layout change
– Web part header styling
– Segoe UI font

Architecture and Practical Application – BizTalk Adapter for mySAP Business Suite

Architecture for BizTalk Adapter for mySAP Business Suite


The Microsoft BizTalk Adapter for mySAP Business Suite implements a Windows Communication Foundation (WCF) custom binding, which contains a single custom transport binding element that enables communication with an SAP system.


The SAP adapter is wrapped by the Microsoft Windows Communication Foundation (WCF) Line of Business (LOB) Adapter SDK runtime and is exposed to applications through the WCF channel architecture. The SAP adapter communicates with the SAP system through either the 64-bit or 32-bit version of the SAP Unicode RFC SDK (librfc32u.dll).

The following figure shows the end-to-end architecture for solutions that are developed by using the SAP adapter.
SAP End-to-End Architecture
Consuming the Adapter

The SAP adapter exposes the SAP system as a WCF service to client applications. Client applications exchange SOAP messages with the SAP adapter through WCF channels to perform operations and to access data on the SAP system. The preceding figure shows four ways in which the SAP adapter can be consumed.

• Through a WCF channel application that performs operations on the SAP system by using the WCF channel model to exchange SOAP messages directly with the SAP adapter. For more information about developing solutions for the SAP adapter by using WCF channel model programming, see Developing Applications by Using the WCF Channel Model.

• Through a WCF service model application that calls methods on a WCF client to perform operations on the SAP system. A WCF client models the operations exposed by the SAP adapter as .NET methods. You can use the Microsoft Windows Communication Foundation (WCF) Line of Business (LOB) Adapter SDK or the svcutil.exe tool to create a WCF client class from metadata exposed by the SAP adapter. For more information about WCF service model programming and the SAP adapter, see Developing Applications by Using the WCF Service Model.

• Through a BizTalk port that is configured to use the BizTalk WCF-Custom adapter with the SAP Binding configured as the binding for the WCF-Custom transport type in a BizTalk Server application. The BizTalk WCF-Custom adapter enables communication between a BizTalk Server application and a WCF service.

The BizTalk WCF-Custom adapter supports custom WCF bindings through its WCF-Custom transport type, which enables you to configure any WCF binding exposed to the configuration system as the binding used by the BizTalk WCF-Custom adapter. For more information about how to use the SAP adapter in BizTalk Server solutions, see Developing BizTalk Applications. BizTalk transactions are supported by the BizTalk Layered Channel binding element which can be loaded by setting a binding property on the SAP Binding.

• Through an IIS-hosted Web service. In this scenario, the SAP adapter is exposed through a WCF Service proxy, which is hosted in IIS by using one of the standard WCF HTTP bindings.

• Through the .NET Framework Data Provider for mySAP Business Suite. The Data Provider for SAP runs on top of the SAP adapter and provides an ADO.NET interface to an SAP system.

The SAP adapter and the SAP RFC library are always hosted in-process with the application or service that consumes the adapter.

The SAP Adapter and WCF

WCF presents a programming model based on the exchange of SOAP messages over channels between clients and services. These messages are sent between endpoints exposed by a communicating client and service.

An endpoint consists of an endpoint address which specifies the location at which messages are received, a binding which specifies the communication protocols used to exchange messages, and a contract which specifies the operations and data types exposed by the endpoint.

A binding consists of one or more binding elements that stack on top of each other to define how messages are exchanged with the endpoint.

 

At a minimum, a binding must specify the transport and encoding used to exchange messages with the endpoint. Message exchange between endpoints occurs over a channel stack that is composed of one or more channels. Each channel is a concrete implementation of one of the binding elements in the binding configured for the endpoint.
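As a generic illustration of the address/binding/contract composition described above (IOrderService, OrderService, BasicHttpBinding, and the localhost address are placeholders unrelated to the SAP adapter), a self-hosted WCF endpoint can be assembled like this:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string GetOrderStatus(string orderId);
}

public class OrderService : IOrderService
{
    public string GetOrderStatus(string orderId)
    {
        return "Open";
    }
}

public static class HostProgram
{
    public static void Main()
    {
        var host = new ServiceHost(typeof(OrderService));

        // An endpoint = address + binding + contract.
        host.AddServiceEndpoint(
            typeof(IOrderService),               // contract
            new BasicHttpBinding(),              // binding (transport + encoding)
            "http://localhost:8000/orders");     // address

        host.Open();
        Console.WriteLine("Service listening. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}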

For more information about WCF and the WCF programming model, see “Windows Communication Foundation” at http://go.microsoft.com/fwlink/?LinkId=89726.

The Microsoft BizTalk Adapter for mySAP Business Suite exposes a WCF custom binding, the SAP Binding (Microsoft.Adapters.SAP.SAPBinding). By default, this binding contains a single custom transport binding element, the SAP Adapter Binding Element (Microsoft.Adapters.SAP.SAPAdapter), which enables operations on an SAP system. When using the SAP adapter with BizTalk Server, you can set the EnableBizTalkCompatibilityMode binding property to load a custom binding element, the BizTalk Layered Channel Binding Element, on top of the SAP Adapter Binding Element. The BizTalk Layered Channel Binding Element is implemented internally by the SAP adapter and is not exposed outside the SAP Binding.
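As a rough sketch of what this looks like in code (only the Microsoft.Adapters.SAP.SAPBinding class and the EnableBizTalkCompatibilityMode property come from the text above; the connection URI, credentials, and the use of IRequestChannel are illustrative assumptions, not exact values):

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using Microsoft.Adapters.SAP;

public static class SapBindingSketch
{
    public static void Main()
    {
        // Create the WCF custom binding exposed by the adapter.
        SAPBinding binding = new SAPBinding();

        // Load the BizTalk Layered Channel binding element on top of the
        // SAP Adapter Binding Element (only relevant when used with BizTalk).
        binding.EnableBizTalkCompatibilityMode = true;

        // Placeholder connection URI for an application host (A) connection;
        // replace the client, host and system number with real values.
        var address = new EndpointAddress("sap://CLIENT=800;LANG=EN;@a/YourSapHost/00");

        // The adapter is consumed through standard WCF channel machinery.
        var factory = new ChannelFactory<IRequestChannel>(binding, address);
        factory.Credentials.UserName.UserName = "user";      // placeholder
        factory.Credentials.UserName.Password = "password";  // placeholder
        factory.Open();

        IRequestChannel channel = factory.CreateChannel();
        channel.Open();

        // ... exchange SOAP messages with the SAP system here ...

        channel.Close();
        factory.Close();
    }
}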

Microsoft.Adapters.SAP.SAPBinding (the SAP Binding) and Microsoft.Adapters.SAP.SAPAdapter (the SAP Adapter Binding Element) are public classes and are also exposed to the configuration system. Because the SAP Adapter Binding Element is exposed publicly, you can build your own custom WCF bindings capable of extending the functionality of the SAP adapter. For example, you could implement a custom binding to support Enterprise Single Sign-On (SSO) in a WCF channel or a WCF service model programming solution, to aggregate database operations into a single multifunction operation, or to perform schema transformation between operations implemented by a custom application and operations on the SAP system.

The SAP adapter is built on top of the Microsoft Windows Communication Foundation (WCF) Line of Business (LOB) Adapter SDK and runs on top of the WCF LOB Adapter SDK runtime. The WCF LOB Adapter SDK provides a software framework and tooling infrastructure that the SAP adapter leverages to provide a rich set of features to users and adapter clients.

The Connection to the SAP System

The SAP adapter connects with the SAP system through the SAP Unicode RFC SDK Library (librfc32u.dll). The SAP adapter supports both the 32-bit and the 64-bit versions of the SAP RFC SDK. The SAP RFC SDK enables external programs to call ABAP functions on an SAP system.

You establish a connection to an SAP system by providing a connection URI to the SAP adapter. The SAP adapter supports the following kinds of connections to an SAP system:
• An application host–based connection (A), in which the SAP adapter connects directly to an SAP application server.

• A load balancing connection (B), in which the SAP adapter connects to an SAP messaging server.

• A destination-based connection (D), in which the connection to the SAP system is specified by a destination in the saprfc.ini configuration file. A, B, and R type connections are supported.

• A listener connection (R), in which the adapter receives RFCs, tRFC and IDOCs through an RFC Destination on the SAP system that is specified by a listener host, a listener gateway service, and a listener program ID, either directly in the connection URI or by an R-based destination in the saprfc.ini configuration file.


So – How Do I Use a Custom Web Part?


This section provides information about using a custom Web Part with Microsoft Office SharePoint Server. To use a custom Web Part, you must do the following:

1. Create a custom Web Part
2. Deploy the custom Web Part to a SharePoint portal
3. Configure the SharePoint portal to use the custom Web Part

Before You Begin

Before you create a custom Web Part:
• Publish the SAP artifacts as a WCF service. For more information, see Step 1: Publish the SAP Artifacts as a WCF Service in Tutorial 1: Presenting Data from an SAP System on a SharePoint Site.

• Create an application definition file for the SAP artifacts using the Business Data Catalog in Microsoft Office SharePoint Server. For more information, see Step 2: Create an Application Definition File for the SAP Artifacts in Tutorial 1: Presenting Data from an SAP System on a SharePoint Site.

Step 1: Create a custom Web Part

To create a custom Web Part using Visual Studio, do the following:
1. Start Visual Studio, and then create a project.

2. In the New Project dialog box, from the Project types pane, select Visual C#. From the Templates pane, select Class Library.

3. Specify a name and location for the solution. For this topic, specify CustomWebPart in the Name and Solution Name boxes. Specify a location, and then click OK.

4. Add a reference to the System.Web component to the project. Right-click the project name in Solution Explorer, and then click Add Reference. In the Add Reference dialog box, select System.Web on the .NET tab, and then click OK. The System.Web component contains the required System.Web.UI.WebControls.WebParts namespace.

5. Add the required code, based on your scenario, to the project. For the code sample that is relevant to a certain issue, see “Issues Involving Custom Web Parts” in Considerations While Using the SAP Adapter with Microsoft Office SharePoint Server.

6. Build the project. On a successful build, a .dll file, CustomWebPart.dll, will be generated in the /bin/Debug folder.

7. Only for 64-bit computers: Sign the CustomWebPart.dll file with a strong name before performing the following steps. Otherwise, you will not be able to import, and hence use, CustomWebPart.dll in the SharePoint portal in “Step 3: Configure the SharePoint portal to use the custom Web Part.” For information about how to sign an assembly with a strong name, see http://go.microsoft.com/fwlink/?LinkId=197171.
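For orientation, here is a minimal sketch of the kind of class created in step 5 (nothing here is SAP-specific; only the WebPart base class from the System.Web.UI.WebControls.WebParts namespace referenced in step 4 is assumed, and the class name and output text are placeholders):

using System.Web.UI;
using System.Web.UI.WebControls.WebParts;

namespace CustomWebPart
{
    public class HelloWebPart : WebPart
    {
        protected override void RenderContents(HtmlTextWriter writer)
        {
            // A real Web Part would render data retrieved through the
            // Business Data Catalog application definition created earlier.
            writer.Write("Hello from the custom Web Part.");
        }
    }
}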

Step 2: Deploy the custom Web Part to a SharePoint Portal

    You must do the following to make the CustomWebPart.dll file (custom Web Part) that is created in “Step 1: Create a custom Web Part” of this topic usable on the SharePoint portal:
    • Copy the CustomWebPart.dll file to the bin folder of the SharePoint Portal: Microsoft Office SharePoint Server creates portals under the :\Inetpub\wwwroot\wss\VirtualDirectories folder. A folder is created for each portal, and can be identified with the port number. You must copy the CustomWebPart.dll file created in “Step 1: Create a custom Web Part” of this topic to the :\Inetpub\wwwroot\wss\VirtualDirectories\bin folder. For example, if the port number of your SharePoint portal is 13614, you must copy the CustomWebPart.dll file to the :\Inetpub\wwwroot\wss\VirtualDirectories\13614\bin folder.

    Tip

    Another way to find the folder location of your SharePoint portal is by using the Internet Information Services (IIS) Manager window (Start > Run > inetmgr). Locate your SharePoint portal in the Internet Information Services (IIS) Manager window ([computer_name] > Web Sites > [Portal-Name]), right-click, and then click Properties in the shortcut menu. In the properties dialog box of the SharePoint portal, click the Home Directory tab, and then select the Local path box.

    • Add the Safe Control Entry in the web.config File: Because the CustomWebPart.dll file will be used on different computers and by multiple users, you must declare the file as “safe.” To do so, open the web.config file located in the SharePoint portal folder at :\Inetpub\wwwroot\wss\VirtualDirectories. Under the SafeControls section of the web.config file, add a SafeControl entry that registers the CustomWebPart assembly and namespace as safe. On a 64-bit computer, the entry must reference the strong-named assembly (see the signing note in Step 1).

    Save the web.config file, and then close it.

    Step 3: Configure the SharePoint portal to use the custom Web Part

    You need to add the custom Web Part to the Microsoft Office SharePoint Server Web Part Gallery, so that you can use it on your SharePoint portal. To do so:

    1. Start SharePoint 3.0 Central Administration. Click Start, point to All Programs, point to Microsoft Office Server, and then click SharePoint 3.0 Central Administration.

    2. In the left navigation pane, click the name of the Shared Service Provider (SSP) to which you want to add the custom Web Part.

    3. On the Shared Services Administration page, in the upper-right corner, click Site Actions, and then click Create.

    4. On the Site Settings page, click Web Parts under the Galleries column.

    5. On the Web Part Gallery page, to add the custom Web Part to the gallery, click New. At this point the custom Web Part is not available on the Web Part Gallery page.

    6. On the New Web Parts page, locate CustomWebPart (the name of the custom Web Part) in the list, select the check box on the left, and then click Populate Gallery at the top of the page. This will add the CustomWebPart entry to the Web Part Gallery page.

    Now you can use the custom Web Part (CustomWebPart) to create Web Parts in your SharePoint portal. The custom Web Part (CustomWebPart) will appear under the Miscellaneous section on the Add Web Parts page.

     


    BizTalk Adapter for mySAP Business Suite and the WCF LOB Adapter SDK


    The Microsoft BizTalk Adapter for mySAP Business Suite implements a set of core components that leverage functionality provided by the Microsoft Windows Communication Foundation (WCF) Line of Business (LOB) Adapter SDK and provide connectivity to the SAP system through the SAP Unicode RFC SDK Library (librfc32u.dll).

    The WCF LOB Adapter SDK serves as the software layer through which the SAP adapter interfaces with Windows Communication Foundation (WCF), and the RFC SDK serves as the layer through which the SAP adapter interfaces with the SAP system.

    The following figure shows the relationships between the internal components of the SAP adapter and between these components and the RFC SDK.

    The relationship of internal adapter components


     

    SAP Weekend : Part 2 – Using the Microsoft BizTalk Server for B2B Integration with SharePoint

    This is Part 2 of my past weekend’s activities with SharePoint and SAP Integration methods.

     

    In this post I am looking at how to use the BizTalk Adapter with SharePoint.

     

    Topics

    • Abstract
    • Goal
    • Business Scenario
    • Environment
    • Document Flow
    • Integration Steps
    • .NET Support
    • Summary

     

    Abstract

    In the past few years, the whole perspective of doing business has moved towards implementing Enterprise Resource Planning (ERP) systems for key areas like marketing, sales, and manufacturing operations. Today, most large organizations that deal with the major world markets rely heavily on such systems.

    The operational picture of any organization is drawn from its worldwide network of marketing teams as well as from its manufacturing and distribution operations. In order to provide customers with realistic information, each of these systems needs to be integrated as part of the larger enterprise.

    This ultimately results in a more efficient enterprise overall, providing more reliable information and better customer service. This paper addresses the integration of BizTalk Server with an Enterprise Resource Planning system, the need for that integration, and its role in the current e-business scenario.

     

    Goal

    Several key business drivers, such as customers and partners, need to communicate on different fronts for a successful business relationship. To achieve this communication, various systems need to be integrated, which leads to evaluating and developing a B2B integration capability and e-business strategy. This improves the quality of the business information at the organization's disposal, helping to improve delivery times, reduce costs, and offer customers a higher level of overall service.

    To provide B2B capabilities, there is a need to give access to the business application data, providing partners with the ability to execute global business transactions. Facing internal integration and business-to-business (B2B) challenges on a global scale, an organization needs to look for a suitable solution.

    To integrate worldwide marketing, manufacturing, and distribution facilities based on a core ERP system with a variety of information systems, an organization needs a strategic deployment of integration technology products and integration service capabilities.

     

    Business Scenario

    Now take the example of ABC Manufacturing Company, whose success rests on the strength of its Europe-wide trading relationships. The company recognizes the need to strengthen these relationships by processing orders faster and more efficiently than ever before.

    The company needed a new platform that could integrate orders from several countries, accepting payments in multiple currencies and translating measurements according to each country's standards. The bottom line for ABC's e-strategy was to accelerate order processing. To achieve this, the basic necessity was to eliminate the multiple collections of the same data and the use of invalid data.

    By using less paper, ABC would cut processing costs and speed up the information flow. Keeping this long-term goal in mind, ABC Manufacturing Company can now think of integrating its four key countries onto a new business-to-business (B2B) platform.

     

    Here is another example, this time of XYZ Marketing Company. Users visit this company's website to explore a variety of products; the company serves thousands of customers all over the world. The company has always understood that it could offer greater benefits to customers if it could more efficiently integrate with its customers' back-end systems. With such integration, customers could enjoy the advantages of highly efficient e-commerce sites, where a visitor on the Web could place an order that would flow smoothly from the website into the customer's order entry system.

    Some of those back-end order entry systems are built on the latest, most sophisticated enterprise resource planning (ERP) systems on the market, while others are built on legacy systems that have never been upgraded. Different customers require information formatted in different ways, but XYZ has no elegant way to transform the information coming out of the website to meet customer needs.

    With the traditional approach, for each new e-commerce customer on the site, XYZ's staff needs to spend significant amounts of time creating a transformation application to facilitate the exchange of information. A better approach would be a robust messaging solution that provides the flexibility and agility to meet a range of customer needs quickly and effectively. Once again, XYZ can consider integrating its customers' back-end systems with the help of a business-to-business (B2B) platform.

     

    Environment

    Many large-scale organizations maintain a centralized SAP environment as their core enterprise resource planning (ERP) system. The SAP system is used for the management and processing of all global business processes and practices. B2B integration mainly relies on asynchronous messaging, Electronic Data Interchange (EDI), and XML document transformation mechanisms to facilitate the transformation and exchange of information between the ERP system and other applications, including legacy systems.

    For business document routing, transformation, and tracking, the existing SAP-XML/EDI technology road map needs an XML service engine. This allows the development of a complex set of mappings from and to SAP to meet internal and external XML/EDI technology and business strategy. Microsoft BizTalk Server is the best choice to handle the data interchange and mapping requirements: it has the most comprehensive development and management support among business-to-business platforms. Microsoft BizTalk Server and BizTalk XML Framework version 2.0 with Simple Object Access Protocol (SOAP) version 1.1 provide precisely the kind of messaging solution that is needed to facilitate this integration in a cost-effective manner.

     

    Document Flow

    Friends, now let’s look at the actual flow of a document from the source system to the customer’s target system using BizTalk Server. When a document is created, it is sent to a TCP/IP-based Application Link Enabling (ALE) port, a BizTalk-based receive function that is used for XML conversion. The document then passes the XML to a processing script (VBScript) that runs as a BizTalk Application Integration Component (AIC). The following figure shows how BizTalk Server acts as a hub between applications that reside in two different organizations:

    The data is serialized to the customer/vendor XML format using the Extensible Stylesheet Language Transformations (XSLT) generated from the BizTalk Mapper using a BizTalk channel. The XML document is sent using synchronous Hypertext Transfer Protocol Secure (HTTPS) or another requested transport protocol such as the Simple Mail Transfer Protocol (SMTP), as specified by the customer.

    The following figure shows steps for XML document transformation:

    The total serialized XML result is passed back to the processing script that is running as a BizTalk AIC. An XML “receipt” document is then created and submitted to another BizTalk channel that serializes the XML status document into an SAP IDOC status message. Finally, a Remote Function Call (RFC) is triggered on the SAP instance/client using a compiled C++/VB program to update the SAP IDOC status record. A complete loop of document reconciliation is achieved. If the status is not successful, an e-mail message is created and sent to one of the support teams that own the customer/vendor business XML/EDI transactions so that the conflict can be resolved. All of this happens instantaneously in a completely event-driven infrastructure between SAP and BizTalk.

    Integration Steps

    Let’s talk about a very popular order entry and tracking scenario while discussing the integration hereafter. The following sections describe the high-level steps required to transmit order information from the Order Processing pipeline component into the SAP/R3 application, and to receive order status update information from the SAP/R3 application.

    The integration of AFS purchase order reception with SAP is achieved using the BizTalk Adapter for SAP (BTS-SAP). The IDOC handler is used by the BizTalk Adapter to provide the transactional support for bridging tRFC (Transactional Remote Function Calls) to MSMQ DTC (Distributed Transaction Coordinator). The IDOC handler is a COM object that processes IDOC documents sent from SAP through the Com4ABAP service, and ensures their successful arrival at the appropriate MSMQ destination. The handler supports the methods defined by the SAP tRFC protocol. When integrating purchase order reception with the SAP/R3 application, BizTalk Server (BTS) provides the transformation and messaging functionality, and the BizTalk Adapter for SAP provides the transport and routing functionality.

    The following two sequential steps indicate how the whole integration takes place:

    • Purchase order reception integration
    • Order Status Update Integration

    Purchase Order Reception Integration

    1. Suppose a new pipeline component is added to the Order Processing pipeline. This component creates an XML document that is equivalent to the OrderForm object that is passed through the pipeline. This XML purchase order is in Commerce Server Order XML v1.0 format and, once created, is sent to a special Microsoft Message Queue (MSMQ) queue created specifically for this purpose.

      Writing the order from the pipeline to MSMQ:

      The first step in sending order data to the SAP/R3 application involves building a new pipeline component to run within the Order Processing pipeline. This component must perform the following two tasks:

      A] Make an XML-formatted copy of the OrderForm object that is passing through the order processing pipeline. The GenerateXMLForDictionaryUsingSchema method of the DictionaryXMLTransforms object is used to create the copy.

      Private Function IPipelineComponent_Execute(ByVal objOrderForm As Object, _
          ByVal objContext As Object, ByVal lFlags As Long) As Long
      
      On Error GoTo ERROR_Execute
      
      Dim oXMLTransforms As Object
      Dim oXMLSchema As Object
      Dim oOrderFormXML As Object
      
      ' Return 1 for Success.
      IPipelineComponent_Execute = 1
      
      ' Create a DictionaryXMLTransforms object.
      Set oXMLTransforms = CreateObject("Commerce.DictionaryXMLTransforms")
      
      ' Create a PO schema object.
      Set oXMLSchema = oXMLTransforms.GetXMLFromFile(sSchemaLocation)
      
      ' Create an XML version of the order form.
      Set oOrderFormXML = oXMLTransforms.GenerateXMLForDictionaryUsingSchema _
          (objOrderForm, oXMLSchema)
      
      WritePO2MSMQ sQueueName, oOrderFormXML.xml, PO_TO_ERP_QUEUE_LABEL, _
          sBTSServerName, AFS_PO_MAXTIMETOREACHQUEUE
      
      Exit Function
      
      ERROR_Execute:
      App.LogEvent "QueuePO.CQueuePO -> Execute Error: " & _
      vbCrLf & Err.Description, vbLogEventTypeError
      
      ' Set warning level.
      IPipelineComponent_Execute = 2
      Resume Next
      
      End Function

      B] Send the newly created XML order document to the MSMQ queue defined for this purpose.

      Option Explicit
      
      ' MSMQ constants.
      
      ' Access modes.
      Const MQ_RECEIVE_ACCESS = 1
      Const MQ_SEND_ACCESS = 2
      Const MQ_PEEK_ACCESS = 32
      
      ' Sharing modes.
      Const MQ_DENY_NONE = 0
      Const MQ_DENY_RECEIVE_SHARE = 1
      
      ' Transaction options.
      Const MQ_NO_TRANSACTION = 0
      Const MQ_MTS_TRANSACTION = 1
      Const MQ_XA_TRANSACTION = 2
      Const MQ_SINGLE_MESSAGE = 3
      
      ' Error codes.
      Const MQ_ERROR_QUEUE_NOT_EXIST = -1072824317
      
      ' Message acknowledgement options.
      Const MQMSG_ACKNOWLEDGMENT_FULL_REACH_QUEUE = 5
      Const MQMSG_ACKNOWLEDGMENT_FULL_RECEIVE = 14
      
      ' Default maximum time (in seconds) for a message to reach the queue.
      Const DEFAULT_MAX_TIME_TO_REACH_QUEUE = 20
      
      Function WritePO2MSMQ(sQueueName As String, sMsgBody As String, _
          sMsgLabel As String, sServerName As String, _
          Optional MaxTimeToReachQueue As Variant) As Long
      
      Dim lMaxTime As Long
      
      If IsMissing(MaxTimeToReachQueue) Then
      lMaxTime = DEFAULT_MAX_TIME_TO_REACH_QUEUE
      Else
      lMaxTime = MaxTimeToReachQueue
      End If
      
      Dim objQueueInfo As MSMQ.MSMQQueueInfo
      Dim objQueue As MSMQ.MSMQQueue, objAdminQueue As MSMQ.MSMQQueue
      Dim objQueueMsg As MSMQ.MSMQMessage
      
      On Error GoTo MSMQ_Error
      
      Set objQueueInfo = New MSMQ.MSMQQueueInfo
      objQueueInfo.FormatName = "DIRECT=OS:" & sServerName & "\PRIVATE$\" & sQueueName
      
      Set objQueue = objQueueInfo.Open(MQ_SEND_ACCESS, MQ_DENY_NONE)
      
      Set objQueueMsg = New MSMQ.MSMQMessage
      
      objQueueMsg.Label = sMsgLabel ' Set the message label property
      objQueueMsg.Body = sMsgBody ' Set the message body property
      objQueueMsg.Ack = MQMSG_ACKNOWLEDGMENT_FULL_REACH_QUEUE
      objQueueMsg.MaxTimeToReachQueue = lMaxTime
      
      objQueueMsg.send objQueue, MQ_SINGLE_MESSAGE
      
      objQueue.Close
      
      On Error Resume Next
      Set objQueueMsg = Nothing
      Set objQueue = Nothing
      Set objQueueInfo = Nothing
      
      Exit Function
      
      MSMQ_Error:
      App.LogEvent "Error in WritePO2MSMQ: " & Error
      Resume Next
      
      End Function
      
    2. A BTS MSMQ receive function picks up the document from the MSMQ queue and sends it to a BTS channel that has been configured for this purpose.

      Receiving the XML order from MSMQ: The second step in sending order data to the SAP/R3 application involves BTS receiving the order data from the MSMQ queue into which it was placed at the end of the first step. You must configure a BTS MSMQ receive function to monitor the MSMQ queue to which the XML order was sent in the previous step. This receive function forwards the XML message to the configured BTS channel for transformation.
    3. The third step in sending order data to the SAP/R3 application involves BTS transforming the order data from Commerce Server Order XML v1.0 format into ORDERS01 IDOC format. A BTS channel must be configured to perform this transformation. After the transformation is complete, the BTS channel sends the resulting ORDERS01 IDOC message to the corresponding BTS messaging port. The BTS messaging port is configured to send the transformed message to an MSMQ queue called the 840 Queue. Once the message is placed in this queue, the BizTalk Adapter for SAP is responsible for further processing. 
    4. The BizTalk Adapter for SAP sends the ORDERS01 document to the DCOM Connector (more information on the DCOM Connector is available at www.sap.com/bapi), which writes the order to the SAP/R3 application. The DCOM Connector is an SAP software product that provides a mechanism to send data to, and receive data from, an SAP system. When an IDOC message is placed in the 840 Queue, the DCOM Connector retrieves the message and sends it to SAP for processing. Although this processing is in the domain of the BizTalk Adapter for SAP, the steps involved are reviewed here as background information:
      • Determine the version of the IDOC schema in use and generate a BizTalk Server document specification.
      • Create a routing key from the contents of the Control Record of the IDOC schema.
      • Request a SAP Destination from the Manager Data Store given the constructed routing key.
      • Submit the IDOC message to the SAP System using the DCOM Connector 4.6D Submit functionality.

    Order Status Update Integration

    Order status update integration can be achieved by providing a mechanism for sending information about updates made within the SAP/R3 application back to the Commerce Server order system.

    The following sequence of steps describes such a mechanism:

    1. BizTalk Adapter for SAP processing:
      After a user has updated a purchase order using the SAP client, and the IDOC has been submitted to the appropriate tRFC port, the BizTalk Adapter for SAP uses the DCOM connector to send the resulting information to the 840 Queue, packaged as an ORDERS01 IDOC message. The 840 Queue is an MSMQ queue into which the BizTalk Adapter for SAP places IDOC messages so that they can be retrieved and processed by interested parties. This process is within the domain of the BizTalk Adapter for SAP, and is used by this solution to achieve the order update integration.
    2. Receiving the ORDERS01 IDOC message from MSMQ:
      The second step in updating order status from the SAP/R3 application involves BTS receiving ORDERS01 IDOC message from the MSMQ queue (840 Queue) into which it was placed at the end of the first step. You must configure a BTS MSMQ receive function to monitor the 840 Queue into which the XML order status message was placed. This receive function must be configured to forward the XML message to the configured BTS channel for transformation.
    3. Transforming the order update from IDOC format:
      Using a BTS MSMQ receive function, the document is retrieved and passed to a BTS transformation channel. The BTS channel transforms the ORDERS01 IDOC message into Commerce Server Order XML v1.0 format, and then forwards it to the corresponding BTS messaging port. You must configure a BTS channel to perform this transformation.

      The following BizTalk Server (BTS) map, used in the prototyping of this solution, transforms an SAP ORDERS01 IDOC message into an XML document in Commerce Server Order XML v1.0 format. It allows a change to an order in the SAP/R3 application to be reflected in the Commerce Server orders database.

      This map used in the prototype only maps the order ID, demonstrating how the order in the SAP/R3 application can be synchronized with the order in the Commerce Server orders database. The mapping of other fields is specific to a particular implementation, and was not done for the prototype.

    <xsl:stylesheet xmlns:xsl='http://www.w3.org/1999/XSL/Transform'
        xmlns:msxsl='urn:schemas-microsoft-com:xslt' xmlns:var='urn:var'
        xmlns:user='urn:user' exclude-result-prefixes='msxsl var user'
        version='1.0'>
    <xsl:output method='xml' omit-xml-declaration='yes' />
    <xsl:template match='/'>
    <xsl:apply-templates select='ORDERS01'/>
    </xsl:template>
    <xsl:template match='ORDERS01'>
    <orderform>
    
    <!-- Connection from source node "BELNR" to destination node "OrderID" -->
    
    <xsl:if test='E2EDK02/@BELNR'>
    <xsl:attribute name='OrderID'>
    <xsl:value-of select='E2EDK02/@BELNR'/>
    </xsl:attribute>
    </xsl:if>
    </orderform>
    </xsl:template>
    </xsl:stylesheet>

    The BTS message port posts the transformed order update document to the configured ASP page for further processing. The configured ASP page retrieves the message posted to it and uses the Commerce Server OrderGroupManager and OrderGroup objects to update the order status information in the Commerce Server orders database.

    4. Updating the Commerce Server order system:
      The fourth step in updating order status from the SAP/R3 application involves updating the Commerce Server order system to reflect the change in status. This is accomplished by adding the page _OrderStatusUpdate.asp to the AFS Solution Site and configuring the BTS messaging port to post the transformed XML document to that page. The update is performed using the Commerce Server OrderGroupManager and OrderGroup objects.

    The routine ProcessOrderStatus is the primary routine in the page. It uses the DOM and XPath to extract enough information to find the appropriate order using the OrderGroupManager object. Once the correct order is located, it is loaded into an OrderGroup object so that any of the entries in the OrderGroup object can be updated as needed.

    The following code implements page _OrderStatusUpdate.asp:

    <%@ Language="VBScript" %>
    
    <%
    const TEMPORARY_FOLDER = 2
    
    call Main()
    
    Sub Main()
    call ProcessOrderStatus( ParseRequestForm() )
    End Sub
    
    Sub ProcessOrderStatus(sDocument)
    
    Dim oOrderGroupMgr 
    Dim oOrderGroup 
    Dim rs
    Dim sPONum
    Dim oAttr 
    Dim vResult
    Dim vTracking 
    Dim oXML
    Dim dictConfig
    Dim oElement
    
    Set oOrderGroupMgr = Server.CreateObject("CS_Req.OrderGroupManager")
    Set oOrderGroup = Server.CreateObject("CS_Req.OrderGroup")
    
    Set oXML = Server.CreateObject("MSXML.DOMDocument")
    oXML.async = False
    
    If oXML.loadXML (sDocument) Then
    
    ' Get the orderform element.
    Set oElement = oXML.selectSingleNode("/orderform")
    
    ' Get the poNum.
    sPONum = oElement.getAttribute("OrderID")
    
    Set dictConfig = Application("MSCSAppConfig").GetOptionsDictionary("")
    
    ' Use ordergroupmgr to find the order by OrderID.
    oOrderGroupMgr.Initialize (dictConfig.s_CatalogConnectionString)
    Set rs = oOrderGroupMgr.Find(Array("order_requisition_number='" & sPONum & "'"), _
        Array(""), Array(""))
    
    If rs.EOF And rs.BOF Then
    'Create a new one. - Not implemented in this version.
    Else
    ' Edit the current one.
    oOrderGroup.Initialize dictConfig.s_CatalogConnectionString, rs("User_ID")
    
    ' Load the found order.
    oOrderGroup.LoadOrder rs("ordergroup_id")
    
    ' For the purposes of prototype, we only update the status
    oOrderGroup.Value.order_status_code = 2 ' 2 = Saved order
    
    ' Save it
    vResult = oOrderGroup.SaveAsOrder(vTracking)
    
    End If
    Else
    WriteError "Unable to load received XML into DOM."
    End If
    
    End Sub
    
    Function ParseRequestForm()
    
    Dim PostedDocument
    Dim ContentType
    Dim CharSet
    Dim EntityBody
    Dim Stream
    Dim StartPos
    Dim EndPos
    
    ContentType = Request.ServerVariables( "CONTENT_TYPE" )
    
    ' Determine request entity body character set (default to us-ascii).
    CharSet = "us-ascii"
    StartPos = InStr( 1, ContentType, "CharSet=""", 1)
    If (StartPos > 0 ) then
    StartPos = StartPos + Len("CharSet=""")
    EndPos = InStr( StartPos, ContentType, """",1 )
    CharSet = Mid (ContentType, StartPos, EndPos - StartPos )
    End If
    
    ' Check for multipart MIME message.
    PostedDocument = ""
    
    if ( ContentType = "" or Request.TotalBytes = 0) then
    
    ' Content-Type is required as well as an entity body.
    Response.Status = "406 Not Acceptable"
    Response.Write "Content-type or Entity body is missing" & VbCrlf
    Response.Write "Message headers follow below:" & VbCrlf
    Response.Write Request.ServerVariables("ALL_RAW") & VbCrlf
    Response.End
    Else
    If ( InStr( 1,ContentType,"multipart/" ) >

    .NET Support

    This multi-tier application environment can be implemented successfully with the help of a Web portal that utilizes the Microsoft .NET Enterprise Server model. The Microsoft BizTalk Server Toolkit for Microsoft .NET provides the ability to leverage the power of XML Web services and Visual Studio .NET to build dynamic, transaction-based, fault-tolerant systems with full access to existing applications.

    Summary

    Microsoft BizTalk Server can help organizations quickly establish and manage Internet relationships with other organizations. It makes it possible for them to automate document interchange with any other organization, regardless of the conversion requirements and data formats used. This provides a cost-effective approach for integrating business processes across large Enterprise Resource Planning systems. The integration process is designed to facilitate collaborative e-commerce business processes. It includes a document interchange engine, a business process execution engine, and a set of business document and server management tools. In addition, a business document editor and mapper tools are provided for managing trading partner relationships, administering server clusters, and tracking transactions.


    SAP Weekend : Part 1 – ERPConnect Services for SharePoint 2010 (ECS)

    This weekend was spent completing my new “List Search Web Part” and also two free Web Parts that are included in the “List Web Part Pack” – more about this in a future blog post.


    In between, the “SAP bug” bit me again and I decided to write a blog post series on the various adapters I have used in SharePoint and SAP integration projects, giving you a basic run-down of how, and with which technologies, each adapter connects the two systems.

    ERPConnect was 1st on the list. ….

     

    Yes, I can hear the grumblings of those of us who have worked with SAP and SharePoint  Integration and the ERPConnect adapter before 🙂

    For starters, you need to have a SAP Developer Key to be allowed to use the SAP web service wizard, and you also need the required SAP authorizations. In other cases, IT operations may not allow any modification to the SAP environment, even if it is limited to the fully automatic generation and activation of the BAPI web service(s).

    Another reason, from a system architecture viewpoint, is that single BAPI and/or RFC calls may be of too low a granularity. You actually want to perform a ‘business transaction’, consisting of multiple method invocations that must be treated as a Logical Unit of Work (LUW). SAP has introduced the concept of SAP Enterprise Services for this and has delivered a first set of them. This set is by far not complete yet, and SAP will augment it over the coming years.

    SharePoint 2010 provides developers with the capability to integrate external data sources like SAP business data via the Business Connectivity Services (BCS) into the SharePoint system. The concept of BCS is based on entities and associated stereotyped operations. This is a perfect fit for flat, simply structured data sets like SAP tables.

    Another, far more flexible option for using SAP data in SharePoint is ERPConnect Services for SharePoint 2010 (ECS). The product suite consists of three components: the ERPConnect Services runtime, the BCS Connector application and Xtract PPS for PerformancePoint Services.

    The runtime provides a Service Application that integrates with the new service architecture of SharePoint 2010. It offers a secure middle-tier layer to integrate different kinds of SAP objects, like tables and function modules, into your SharePoint applications.

    The BCS Connector application allows developers to create BDC models for the BCS Services, without programming knowledge. You may export the BDC models created by the BCS Connector to Visual Studio 2010 for further customizing.

     

    The Xtract PPS component offers a SAP data source provider for the PerformancePoint Services of SharePoint 2010.

    This article gives you an overview of the ERPConnect Services runtime and shows how you can create and incorporate business data from SAP in different SharePoint application types, like Web Parts, Application Pages or Silverlight modules.

    This article does not introduce the other components.

    Background

    This section gives you a short explanation and background of the SAP objects that can be used with ERPConnect Services. The most important objects are SAP tables and function modules. A function module is basically similar to a normal procedure in conventional programming languages. Function modules are written in ABAP, the SAP programming language, and are accessible from any other program within a SAP system. They accept import and export parameters as well as other kinds of special parameters.

     

    In addition, BAPIs (Business APIs) are special function modules that are organized within the SAP Business Object Repository. In order to use function modules with the runtime, they must be marked as remote-enabled (RFC). SAP table data can also be retrieved. Tables in SAP are basically relational database tables. Other SAP objects like BW Cubes or SAP Queries can be accessed via the XtractQL query language (see below).

     

    Installation & Configuration

    Installing ERPConnect Services on a SharePoint 2010 server is done by an installer and is straightforward. The SharePoint Administration Service must be running on the local server (see Windows Services).

    For more information, see the product documentation. After the installation has completed successfully, navigate to the Service Applications screen within the Central Administration (CA) of SharePoint:

     

    Before creating your first Service Application, a Secure Store target application must be created, in which ERPConnect Services will save the SAP user credentials. On the settings page for the “Secure Store Service”, create a new Target Application and name the application “ERPConnect Services”. Click the “Next” button to define the store fields as follows:

     

    Finish the creation process by clicking on “Next” and define application administrators. Then, mark the application, click “Set Credentials” and enter the SAP user credentials:

     

    Let’s go on and create a new ERPConnect Service Application!

    Click the “ERPConnect Service Application” link in the “New” menu of the Service Applications page (see also first screenshot above). This opens a dialog to define the name of the service application, the SAP connection data and the IIS application pool:

     

    Click “Create” after entering all data and you will see the following entries in the Service Applications screen:

     

    That’s it! You are now done setting up your first ERPConnect Service Application.

    Development

    The runtime functionality covers different programming demands such as generically retrievable interface functions. The service applications are managed by the Central Administration of SharePoint. The following service and function areas are provided:

    1. Executing and retrieving data directly from SAP tables
    2. Executing SAP function modules / BAPIs
    3. Executing XtractQL query statements

    The next sections show how to use these service and function areas and how to access different SAP objects from within your custom SharePoint applications using the ERPConnect Services. The runtime can be used in applications within the SharePoint context, like Web Parts or Application Pages.

    In order to do so, you need to reference the ERPConnectServices.Server.Common.dll assembly in the project. Before you can access data from the SAP system, you must create an instance of the ERPConnectServiceClient class. This is the gateway to all SAP objects and to the generic API of the runtime overall. In the SharePoint context there are two options to create a client object instance:

    // Option #1
    ERPConnectServiceClient client = new ERPConnectServiceClient();
    
    // Option #2
    ERPConnectServiceApplicationProxy proxy = SPServiceContext.Current.GetDefaultProxy(
       typeof(ERPConnectServiceApplicationProxy)) as ERPConnectServiceApplicationProxy;
    ERPConnectServiceClient client = proxy.GetClient();

    For more details on using ECS in Silverlight or desktop applications see the specific sections below.

    Querying Tables

    Querying and retrieving table data is a common task for developers. The runtime allows retrieving data directly from SAP tables. The ERPConnectServiceClient class provides a method called ExecuteTableQuery, with two overloads, which queries SAP tables in a simple way.

    The method also supports passing miscellaneous parameters like row count and skip, a custom function, a where clause definition and a list of fields to return. These parameters can be defined by using an ExecuteTableQuerySettings class instance.

    DataTable dt = client.ExecuteTableQuery("T001");
    
    …
        
    ExecuteTableQuerySettings settings = new ExecuteTableQuerySettings {
      RowCount = 100,
      WhereClause = "ORT01 = 'Paris' AND LAND1 = 'FR'",
      Fields = new ERPCollection<string> { "BUKRS", "BUTXT", "ORT01", "LAND1" }
    };
    
    DataTable dt = client.ExecuteTableQuery("T001", settings);
    
    …
    
    // Sample 2
    DataTable dt = client.ExecuteTableQuery("MAKT",
                new ExecuteTableQuerySettings {
                    RowCount = 10,
                    WhereClause = "MATNR = '60-100C'",
                    OrderClause = "SPRAS DESC"
                });

    The first query reads records from the SAP table T001. With the settings applied, it returns the top 100 records where the field ORT01 equals 'Paris' and the field LAND1 equals 'FR' (France), and the result set contains only the fields BUKRS, BUTXT, ORT01 and LAND1.

    The second query returns the top ten records of the SAP table MAKT, where the field MATNR equals the material number 60-100C. The result set is ordered by the field SPRAS in descending order.

    Executing Function Modules

    In addition to querying SAP tables, the runtime API can execute SAP function modules (BAPIs). Function modules must be marked as remote-enabled (RFC) within SAP.

    The ERPConnectServiceClient class provides a method called CreateFunction to create a metadata structure for the function module. The method returns an instance of the data structure ERPFunction. This object instance contains all parameter types (import, export, changing and tables) that can be used with function modules.

    In the sample below we call the function SD_RFC_CUSTOMER_GET and pass a name pattern (T*) for the export parameter with name NAME1. Then we call the Execute method on the ERPFunction instance. Once the method has been executed the data structure is updated. The function returns all customers in the table CUSTOMER_T.

    ERPFunction function = client.CreateFunction("SD_RFC_CUSTOMER_GET");
    function.Exports["NAME1"].ParamValue = "T*";
    function.Execute();
    
    foreach(ERPStructure row in function.Tables["CUSTOMER_T"])
      Console.WriteLine(row["NAME1"] + ", " + row["ORT01"]);

    The following code shows an additional sample. Before we can execute this function module, we need to define a table with HR data as an input parameter.

    The parameters you need, and the values the function module returns, depend on the implementation of the function module.

    ERPFunction function = client.CreateFunction("BAPI_CATIMESHEETMGR_INSERT");
    function.Exports["PROFILE"].ParamValue = "TEST";
    function.Exports["TESTRUN"].ParamValue = "X";
    
    ERPTable records = function.Tables["CATSRECORDS_IN"];
    ERPStructure r1 = records.AddRow();
    r1["EMPLOYEENUMBER"] = "100096";
    r1["WORKDATE"] = "20110704";
    r1["ABS_ATT_TYPE"] = "0001";
    r1["CATSHOURS"] = (decimal)8.0;
    r1["UNIT"] = "H";
    
    function.Execute();
    
    ERPTable ret = function.Tables["RETURN"]; 
    
    foreach(var i in ret)
      Console.WriteLine("{0} - {1}", i["TYPE"], i["MESSAGE"]);

    Executing XtractQL Query Statements

    The ECS runtime offers a SAP query language called XtractQL. The XtractQL query language, also known as XQL, consists of ABAP and SQL syntax elements. XtractQL allows querying SAP tables, BW-Cubes and SAP Queries, and executing function modules.

    It’s possible to return metadata for the objects, and MDX statements can also be executed with XQL. All XQL queries return a data table object as the result set. In the case of function module execution, the caller must define the returning table (see the sample below – INTO @RETVAL). XQL is very useful in situations where you need to handle dynamic statements. The following samples show some typical XQL statements.

    SELECT TOP 5 * FROM T001W WHERE FABKL = 'US'

    This query selects the top 5 records of the SAP table T001W where the field FABKL equals the value US.

    SELECT * FROM MARA WITH-OPTIONS(CUSTOMFUNCTIONNAME = 'Z_XTRACT_IS_TABLE')

    This query selects all records of the SAP table MARA and executes the read through the custom function module Z_XTRACT_IS_TABLE, as specified by the CUSTOMFUNCTIONNAME option.

    SELECT MAKTX AS [ShortDesc], MANDT, SPRAS AS Language FROM MAKT

    This query selects all records of the SAP table MAKT. The result set will contain three fields named ShortDesc, MANDT and Language.

     

    EXECUTE FUNCTION 'SD_RFC_CUSTOMER_GET'
       EXPORTS KUNNR='0000003340'
       TABLES CUSTOMER_T INTO @RETVAL;

    This query executes the SAP function module SD_RFC_CUSTOMER_GET and returns as result the table CUSTOMER_T (defined as @RETVAL).

    DESCRIBE FUNCTION 'SD_RFC_CUSTOMER_GET' GET EXPORTS

    This query returns metadata about the export parameters of the function module SD_RFC_CUSTOMER_GET.

    SELECT TOP 30 LIPS-LFIMG, LIPS-MATNR, TEXT_LIKP_KUNNR AS CustomerID
       FROM QUERY 'S|ZTHEO02|ZLIKP'
       WHERE SP$00002 BT '0080011000' AND '0080011999'

    This statement executes the SAP Query “S|ZTHEO02|ZLIKP” (the name includes the workspace, user group and the query name). As you can see, XtractQL extends the SQL syntax with ABAP- or SAP-specific syntax elements. This way you can define fields using the LIPS-MATNR format and SAP-like where clauses like “SP$00002 BT '0080011000' AND '0080011999'”.

    ERPConnect Services provides a little helper tool, the XtractQL Explorer (see screenshot below), to help you learn more about the query language and to test XQL queries. You can use this tool independently of SharePoint, but you need access to a SAP system.

    To find out more about all XtractQL language syntax see the product manual.

    Silverlight And Desktop Applications

    So far, all samples use the assembly ERPConnectServices.Server.Common.dll as a project reference, and all the code snippets shown run within the SharePoint context, e.g. in a Web Part.

    ERPConnect Services also provides client libraries for Silverlight and desktop applications:

    • ERPConnectServices.Client.dll for desktop applications
    • ERPConnectServices.Client.Silverlight.dll for Silverlight applications

    You need to add the reference that matches the type of project you are implementing.

    In Silverlight the implementation and design pattern is a little bit more complicated, since all web services are called asynchronously. It’s also not possible to use the DataTable class, because it is not implemented for Silverlight.

    The runtime provides a similar class called ERPDataTable, which is used in these cases by the API. The ERPConnectServiceClient class for Silverlight provides the method ExecuteTableQueryAsync and an event called ExecuteTableQueryCompleted as a callback delegate.

    public event EventHandler<ExecuteTableQueryCompletedEventArgs> ExecuteTableQueryCompleted;
    
    public void ExecuteTableQueryAsync(string tableName)
    public void ExecuteTableQueryAsync(string tableName, ExecuteTableQuerySettings settings)

    The following code sample shows a simple query of the SAP table T001 within a Silverlight client.

    First of all, an instance of the ERPConnectServiceClient is created using the URI of the ERPConnectService.svc, then a delegate is defined to handle the completed callback. Next, the query is executed with a RowCount setting of 150, so that only the top 150 records are returned in the result set.

    Once the result is returned the data set will be attached to a DataGrid control (see screenshot below) within the callback method.

     

    void OnGetTableDataButtonClick(object sender, RoutedEventArgs e)
    {
      ERPConnectServiceClient client = new ERPConnectServiceClient(
        new Uri("http://<SERVERNAME>/_vti_bin/ERPConnectService.svc"));
    
      client.ExecuteTableQueryCompleted += OnExecuteTableQueryCompleted;
      client.ExecuteTableQueryAsync("T001", 
        new ExecuteTableQuerySettings { RowCount = 150 });
    }
    
    void OnExecuteTableQueryCompleted(object sender, ExecuteTableQueryCompletedEventArgs e)
    {
      if(e.Error != null)
        MessageBox.Show(e.Error.Message);
      else
      {
        e.Table.View.GroupDescriptions.Add(new PropertyGroupDescription("ORT01"));
        TableGrid.ItemsSource = e.Table.View;
      }
    }

    The screenshot below shows the XAML of the Silverlight page:

    The final result can be seen below:

    ECS Designer

    ERPConnect Services includes a Visual Studio 2010 plugin, the ECS Designer, that allows developers to visually design SAP interfaces. It works similarly to the LINQ to SAP Designer I wrote about a while ago; see my article at CodeProject: LINQ to SAP.

    The ECS Designer is not automatically installed when you install the product. You need to run its installation program manually. The setup adds a new project item type with the file extension .ecs to Visual Studio 2010 and links it with the designer. The needed references are added automatically after adding an ECS project item.

    The designer generates source code to integrate with the ERPConnect Services runtime after the project item is saved. The generated context class contains methods and sub-classes that represent the defined SAP objects (see screenshots below).

     

    Before you access the SAP system for the first time you will be asked to enter the connection data. You may also load the connection data from SharePoint system. The designer GUI is shown in the screenshots below:

     

    The screenshot above for instance shows the tables dialog. After clicking the Add (+) button in the main designer screen and searching a SAP table in the search dialog, the designer opens the tables dialog.

     

    In this dialog you can change the name of the generated class, the class modifier and all needed properties (fields) the final class should contain.

     

    To preview your selection press the Preview button. The next screenshot shows the automatically generated classes in the file named EC1.Designer.cs:

     

    Using the generated code is simple. The project type we are using for this sample is a standard console application, therefore the designer references the ERPConnectServices.Client.dll for desktop applications.

    Since we are not within the SharePoint context, we have to define the URI of the SharePoint system by passing this value into the constructor of the ERPConnectServicesContext class.

    The designer has generated a class MAKT and an access property MAKTList on the context class for the table MAKT. The type of this property, ERPTableQuery<MAKT>, is a LINQ queryable data type.

     

    This means you can use LINQ statements to define the underlying query. Internally, the ERPTableQuery<T> type will translate your LINQ query into a call to ExecuteTableQuery.
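    The sketch below is not taken from the designer output; the MAKT property names (MATNR, MAKTX) and the string constructor argument are assumptions based on the table fields and the context description used earlier in this article.

    using System;
    using System.Linq;

    class Program
    {
        static void Main()
        {
            // Outside the SharePoint context the URI of the SharePoint system
            // is passed to the generated context class (see above).
            var context = new ERPConnectServicesContext("http://<SERVERNAME>");

            // LINQ query against the generated MAKTList property; internally it
            // is translated into an ExecuteTableQuery call.
            var query = from m in context.MAKTList
                        where m.MATNR == "60-100C"
                        select m;

            foreach (var makt in query)
                Console.WriteLine("{0} - {1}", makt.MATNR, makt.MAKTX);
        }
    }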

     

    That’s it!

     

    Advanced Techniques

    There are situations when you have to use the exact same SAP connection while calling a series of function modules in order to receive the correct result. Let’s take the following code:

    ERPConnectServiceClient client = new ERPConnectServiceClient();
    
    using(client.BeginConnectionScope())
    {
      ERPFunction f = client.CreateFunction("BAPI_GOODSMVT_CREATE");
    
      ERPStructure s = f.Exports["GOODSMVT_HEADER"].ToStructure();
      s["PSTNG_DATE"] = "20110609"; // Posting Date in the Document
      s["PR_UNAME"] = "BAEURLE";    // UserName
      s["HEADER_TXT"] = "XXX";      // HeaderText
      s["DOC_DATE"] = "20110609";   // Document Date in Document
    
      f.Exports["GOODSMVT_CODE"].ToStructure()["GM_CODE"] = "01";
    
      ERPStructure r = f.Tables["GOODSMVT_ITEM"].AddRow();
      r["PLANT"] = "1000";          // Plant
      r["PO_NUMBER"] = "4500017210"; // Purchase Order Number
      r["PO_ITEM"] = "010";      // Item Number of Purchasing Document 
      r["ENTRY_QNT"] = 1;          // Quantity in Unit of Entry
      r["MOVE_TYPE"] = "101";        // Movement Type
      r["MVT_IND"] = "B";            // Movement Indicator
      r["STGE_LOC"] = "0001";        // Storage Location
    
      f.Execute();
    
      string matDocument = f.Imports["MATERIALDOCUMENT"].ParamValue as string;
      string matDocumentYear = f.Imports["MATDOCUMENTYEAR"].ParamValue as string;
    
      ERPTable ret = f.Tables["RETURN"]; //.ToADOTable();
    
      foreach(var i in ret)
        Console.WriteLine("{0} - {1}", i["TYPE"], i["MESSAGE"]);
    
      ERPFunction fCommit = client.CreateFunction("BAPI_TRANSACTION_COMMIT");
      fCommit.Exports["WAIT"].ParamValue = "X";
      fCommit.Execute();
    }

    In this sample we create a goods receipt for a goods movement with BAPI_GOODSMVT_CREATE. The final call to BAPI_TRANSACTION_COMMIT will only work if the system under the hood uses the same connection object.

     

    The runtime does not provide direct access to the underlying SAP connection, but the library offers a mechanism called connection scoping. You may create a new connection scope with the client library and tell ECS to use the same SAP connection until you close the connection scope. Within the connection scope, every library call will use the same SAP connection.

    In order to create a new connection scope you need to call the BeginConnectionScope method of the class ERPConnectServiceClient.

    The method returns an IDisposable object, which can be used in conjunction with the using statement of C# to end the connection scope.

    Alternatively, you may call the EndConnectionScope method explicitly, as in the sketch after the next paragraph. It’s also possible to use function modules with nested structures as parameters.

    This is a special construct of SAP. The goods receipt sample above uses a nested structure for the export parameter GOODSMVT_CODE. For more detailed information about nested structures and tables, see the product documentation.
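    A minimal sketch of the explicit scope handling mentioned above is shown here. It assumes that EndConnectionScope takes no arguments, which this article does not confirm.

    // Explicit connection scoping without a using block (assumption:
    // EndConnectionScope takes no arguments).
    ERPConnectServiceClient client = new ERPConnectServiceClient();

    client.BeginConnectionScope();
    try
    {
        ERPFunction fCommit = client.CreateFunction("BAPI_TRANSACTION_COMMIT");
        fCommit.Exports["WAIT"].ParamValue = "X";
        fCommit.Execute();
    }
    finally
    {
        // Close the scope so later calls are no longer pinned to one SAP connection.
        client.EndConnectionScope();
    }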

    SharePoint 2013 – Creating a Word document with OOXML

    This solution is based on the SharePoint-hosted app template provided by Visual Studio 2012. The solution enumerates through each document library in the host website, and adds the library to a drop-down list.


     

    When the user selects a library and clicks a tile, the app creates a sample Word 2013 document by using OOXML in the selected library.

    Prerequisites

    This sample requires the following:

    • Visual Studio 2012
    • Office Developer Tools for Visual Studio 2012
    • Either of the following:
      • SharePoint Server 2013 configured to host apps, and with a Developer Site collection already created; or,
      • Access to an Office 365 Developer Site configured to host apps.

    Key components of the sample

    The sample app contains the following:

    • The Default.aspx webpage, which is used to enumerate through each document library in the host website, and render tiles for each MP4 video in the app.
    • The Point8020Metro.css style sheet (in the CSS folder) which contains some simple styles for rendering tiles.
    • The AppManifest.xml file, which has been edited to specify that the app requests Full Control permissions for the hosting web.
    • References to the DocumentFormat.OpenXml assembly provided by the OpenXML SDK 2.5.

    All other files are automatically provided by the Visual Studio project template for apps for SharePoint, and they have not been modified in the development of this sample.
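    The snippet below is a minimal sketch, not the sample’s own code, of how a Word document can be produced with the DocumentFormat.OpenXml assembly referenced by this app; the file path is only illustrative.

    using DocumentFormat.OpenXml;
    using DocumentFormat.OpenXml.Packaging;
    using DocumentFormat.OpenXml.Wordprocessing;

    static class WordBuilder
    {
        // Creates a .docx file containing a single paragraph of text.
        public static void CreateSampleDocument(string path)
        {
            using (WordprocessingDocument doc = WordprocessingDocument.Create(
                path, WordprocessingDocumentType.Document))
            {
                MainDocumentPart mainPart = doc.AddMainDocumentPart();
                mainPart.Document = new Document(
                    new Body(
                        new Paragraph(
                            new Run(
                                new Text("Hello from the Open XML SDK")))));
                mainPart.Document.Save();
            }
        }
    }

    In the sample app itself, the generated document is saved to the selected SharePoint document library rather than to a local path.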

    Configure the sample

    Follow these steps to configure the sample.

    1. Open the SP_Autohosted_OOXML_cs.sln file using Visual Studio 2012.
    2. In the Properties window, add the full URL to your SharePoint Server 2013 Developer Site collection or Office 365 Developer Site to the Site URL property.

    No other configuration is required.

    Build the sample

    To build the sample, press CTRL+SHIFT+B.

    Run and test the sample

    To run and test the sample, do the following:

    1. Press F5 to run the app.
    2. Sign in to your SharePoint Server 2013 Developer Site collection or Office 365 Developer Site if you are prompted to do so by the browser.
    3. Trust the app when you are prompted to do so.

    The following images illustrate the app. In Figure 1 the app has been trusted and libraries added to the drop-down list.

    Figure 1. View of the app with drop-down list

    Figure 1

    In Figure 2, the user has clicked the orange tile. The document is created and the red tile provides a link to the appropriate library (Figure 3), which the user reaches by clicking on the red tile.

    Figure 2. Open XML document creator

    Figure 2

    Figure 3. Document library

    Figure 3

    Troubleshooting

    Ensure that you have SharePoint Server 2013 configured to host apps (with a Developer Site Collection already created), or that you have signed up for an Office 365 Developer Site configured to host apps.

    Change log

    First release: January 30, 2013.

    Related content

    New Web Part released – List Search Web Part now available!!

    The List Search Web Part reads the entries from a Sharepoint List or Library (located anywhere in the site collection) and displays the selected user fields in a grid with an optional interactive search filter.

    It can be used with WSS 3.0, MOSS 2007, SharePoint 2010 and SharePoint 2013.


    The following parameters can be configured:

    • Sharepoint Site
    • List Columns to be displayed
    • Filtering, Grouping, Searching, Paging and Sorting of rows
    • AZ Index
    • optional Header text

    Installation Instructions:

    1. download the List Search Web Part Installation Instructions
    2. either install the web part manually or deploy the feature to your server/farm as described in the instructions. 
    3. Security Note:
      if you get the following error message: “Only an administrator may enumerate through all user profiles”, you will need to grant the application pool account(s) for the web application(s) “Manage User Profiles” permissions within the User Profile Service (SSP in the case of MOSS 2007).
      This ensures that the application pool is able to retrieve the list of user profiles.
      To assign this permission, access your active “User Profile Service” (SharePoint 2010 Server) or the “Shared Services Provider” (MOSS 2007) via Central Admin.
      From the “User Profiles and My Sites” group, click “Personalization services permissions”.
      Add the “Manage User Profiles” permission to your application pool account(s).
    4. Configure the following Web Part properties in the Web Part Editor “Miscellaneous” pane section as needed:
      • Site Name: Enter the name of the site that contains the List or Library:
        – leave this field empty if the List is in the current site (eg. the Web Part is placed in the same site)
        – Enter a “/” character if the List is contained in the top site
        – Enter a path if the List is in a subsite of the current site (eg. in the form of “current site/subsite”)
      • List Name: Enter the name of the desired Sharepoint List or Library
        Example: Project Documents
      • View Name: Optionally enter the desired List View of the list specified above. A List View allows you to specify specific data filtering and sorting. 
        Leave this field empty if you want to use the List default view.
      • Field Template: Enter the List columns to be displayed (separated by semicolons).
        Pictures can be attached (via File Upload) to the Sharepoint List items and displayed using the symbolic “Picture” column name.
        If you want to allow users to edit their own entries, please add the symbolic “Username” column name to the Field Template. An “Edit” symbol will then be displayed to allow the user to navigate to the corresponding Edit Form. Example:
        Type;Name;Title;Modified;Modified By;Created By

        Friendly Header Names:
        If you would like to display a “friendly header name” instead of the default property name please append it to the User property, separated by the “|” pipe symbol.

        Example:
        Picture;LastName|Last Name;FirstName;Department;Email|Email Address

        Hiding individual columns:
        You can hide a column by prefixing it with a “!” character. 
        The following example hides the “Department” column: 
        LastName;FirstName;!Department;WorkEmail

        Suppress Column wrapping:
        You can suppress the wrapping of text inside a column by prefixing it with a “^” character.
        LastName;FirstName;Department;^AboutMe

        Showing the E-Mail address as plain text:
        You can opt to display the plain e-mail address (instead of the envelope icon) by appending “/plain” to the WorkEmail column:
        LastName;WorkEmail/plain;Department

      • Group By: enter an optional User property to group the rows.
      • Sort By: enter the List column(s) to define the default sort order. You can add multiple properties separated by commas. Append “/desc” to sort the column descending.
        Examples:
        Department
        Department,LastName
        LastName/desc

        The columns headings can be clicked by the users to manually define the sort order.
      • AZ Index Column: enter an optional List column to display the AZ filter in the list header. 
        If an “!” character is appended to the property name, the “A” index will be forced when visiting the page.
        Example: LastName! 

         
      • Search Box: enter one or more List columns (separated by semicolons) to allow for interactive searching. Example: LastName;FirstName

        If you want to display a search filter as a dropdown combo, please enter it with a leading “@” character:
        LastName;FirstName;Department;@Office

        Friendly Search Box Labels:
        If you would like to display a “friendly label” instead of the default property name please append it to the User property, separated by the “|” pipe symbol.
        Example:
        WorkPhone|Office Phone;Office|Office Nbr

         

      • Align Search Filters vertically: allows you to align the search input boxes vertically to save horizontal space:
      • Rows per page: the web part supports paging and lets you specify the desired number of rows per page.
      • Image Height: specify the image height in pixels if you include the “Picture” property. 
        Enter “0” if you want to use the default picture size.
      • Header Text: enter an optional header text. Please note that you can embed HTML tags if needed. You can additionally specify the text to be displayed if the “Show all entries” option is unchecked and the users has not performed a search yet by appending a “|” character followed by the text.
        Example:
        This is the regular header text|This text is only shown if the user has not yet performed a search
      • Detail View Page: enter an optional column name prefixed by “detailview=” to link a column to the item detail view page. Append the “/popup” option if you want to open the detail page in a Sharepoint 2010/2013 dialog popup window.
        Examples:
        detailview=LastName
        detailview/popup=Title
      • Alternating Row Color: enter the optional color of the alternating row background (leave blank to use default).
        Enter either the HTML color names (as eg. “red” etc.) or use hexadecimal RRGGBB coding (as eg. “#CCFFCC”). Enter the values without the double quotes.
        You can also change the default background color of the non-alternating rows by appending a second color value separated by a semicolon.
        Example: #ffffcc;#ffff99 

        The default Header style can be changed by adding the “AESD_Headerstyle” appSettings variable to the web.config “appSettings” section:

        <appSettings>
          <add key="AESD_Headerstyle" value="background:green;font-size:10pt;color:white" />
        </appSettings>

         

      • Show Column Headers: either show or suppress the List column header row.
      • Header Row CSS Style: enter the optional header row CSS style(s) as needed.
        Example:
        color:blue;white-space:nowrap
      • Show Groups collapsed: either show the groups (if you specify a column in the “Group By” setting) collapsed or expanded when entering the page.
      • Enforce Security: hides the web part if user has no access to the site or the list. This avoids a login prompt if the user has not at least “View” permission on the list or site containing the list.
      • Show all entries: either show all directory entries or none when first visiting the page. 
        You can append a specific text to the “Header Text” field (see above) which is only displayed if this option is unchecked and no search has yet been performed by the user.
      • Open Links in new window: either open the links in a new window or in the same browser window.
      • Link Documents to Office365: open the Word, Excel and Powerpoint documents in the Office365 web viewer.
      • Show ‘Add New Item’ Button: either show or suppress the “Add new item button” to let users add new items to the list (this option is security-trimmed).
      • Export to CSV: Show/hide the “Export” button for Excel CSV File Export
      • CSV Separator: Enter the desired CSV field separator character (Default=Comma). Use a semicolon in countries which use the commas as a decimal separator.
      • Localization: enter the following 4 values (separated by semicolons) in your local language if you need to override the English strings corresponding to:
        – the Search button text,
        – the A..Z menu “View all” option,
        – the text displayed for Hyperlink columns,
        – the optional “Group By” name (if grouping is enabled).
        Default:
        Search;View all;Visit

      • License Key: enter your Product License Key (as supplied after purchase of the web part license).
        Leave this field empty if you are using the free 30 day evaluation version.

     Contact me now at tomas.floyd@outlook.com for the List Search Web Part and other Free & Paid Web Parts and Apps for SharePoint 2010, 2013, Azure, Office 365, SharePoint Online

    Visio for Developers in Office 365

    In this post, I’ll introduce some of the new features of interest to developers in Visio 2013. Among these features are:

    • New file format
    • Robust updates to themes
    • The change shape feature (that allows you to replace one shape with another while maintaining the shape text)
    • New shape effects
    • Improvements to commenting
    • Coauthoring on SharePoint Server 2013
    • Customizable image clipping
    • Relative geometry
    • Support for Business Connectivity Services (BCS) data
    • Updates to Visio Services in Microsoft SharePoint Server 2013
    • Duplicate page feature

    At the end of the post, I provide you with some additional resources for both Visio and general Office development.

     

    New file format

    Visio 2013 introduces a new file format, based on the Open Packaging Conventions (OPC) standard (ISO 29500, Part 2) and the XML elements from the previous Visio XML file format (.vdx). It is a zipped, XML-based file format similar to the file formats used in other applications.

    Because the new file format is supported by both Visio 2013 and Visio Services in Microsoft SharePoint Server 2013, you can save a Visio drawing directly to a SharePoint Server library without having to first publish the file as a Visio Web Drawing (.vdw). Even so, Visio Services can still read and display Visio Web Drawing files.

    The new file format includes the following file types (by extension):

    • .vsdx (Visio drawing)
    • .vsdm (Visio macro-enabled drawing)
    • .vssx (Visio stencil)
    • .vssm (Visio macro-enabled stencil)
    • .vstx (Visio template)
    • .vstm (Visio macro-enabled template)

    By using existing support for reading and writing to the file format package (such as System.IO.Packaging) and for parsing XML (System.Xml.Linq), you can work with the new file formats programmatically.
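    As an illustration, the sketch below opens a .vsdx package with System.IO.Packaging and parses its XML parts with System.Xml.Linq; the file path is only an example, and the code requires a reference to the WindowsBase assembly.

    using System;
    using System.IO;
    using System.IO.Packaging;
    using System.Xml.Linq;

    class VsdxReader
    {
        static void Main()
        {
            // Open the OPC package that makes up the new Visio file format.
            using (Package package = Package.Open(@"C:\Temp\Drawing1.vsdx",
                FileMode.Open, FileAccess.Read))
            {
                // List every part and parse the XML-based ones.
                foreach (PackagePart part in package.GetParts())
                {
                    Console.WriteLine("{0} ({1})", part.Uri, part.ContentType);

                    if (part.ContentType.EndsWith("+xml"))
                    {
                        XDocument xml = XDocument.Load(part.GetStream());
                        Console.WriteLine("  root element: {0}", xml.Root.Name.LocalName);
                    }
                }
            }
        }
    }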

    Visio 2013 retains the ability to read the old file formats (.vsd, .vss, .vst, .vdx, .vsx, .vtx, .vdw, .vwi). Visio 2013 does not save to the previous Visio XML file format (.vdx). Solutions or tools that consume the previous Visio XML file format (.vdx) files may need to be refactored in order to read the new file format and its schemas.

    Visio Services retains the ability to display the Visio Web Drawing (.vdw) format in the browser. It now also renders the new Visio drawing (.vsdx) and Visio macro-enabled drawing (.vsdm) formats.

    For more information about the new file format, see the article How to: Manipulate the Visio 2013 file format programmatically.

    Themes

    Themes have been redesigned in Visio 2013, making use of a greater variety of effects and styles including the integration of Shape Art effects. Users can now decide on an overarching style by applying a theme, personalize the diagram with theme variants, and highlight individual shapes with Quick Styles. ShapeSheet developers can take advantage of these features with new functions and cells in the ShapeSheet.

    The user interface for applying theme variants is shown in the following figure.

     

     

    You can also manipulate themes at the Page, Shape, and Selection object level. New APIs for working with themes include Page.SetTheme method, Page.SetThemeVariant method, Shape.SetQuickStyle method, and the Selection.SetQuickStyle method.

    For more information about new VBA objects and members in Visio 2013, see the Visio Automation reference. For more information about the new ShapeSheet cells in Visio 2013, see the article What’s new for ShapeSheet developers in Visio 2013.

    Change Shape

    Visio 2013 includes a shape replacement API that enables you to swap one or more shapes for another shape contained in a stencil, while retaining some of the local values from the original shape, like the shape text, shape data, or shape formatting. Shape developers can update the ShapeSheet settings of their custom shapes to specify the Change Shape behavior for their shapes. Among the new APIs for Change Shape are the Shape.ReplaceShape and Selection.ReplaceShape methods and the ReplaceShapesEvent object.

    The Change Shape feature lets you easily change a shape (in this case, the green rectangle)…

     

    …to another shape, the green diamond.

    For more information about the Change Shape feature, see Eric Schmidt’s blog post, Change shapes in Visio 2013.

    For more information about new VBA objects and members in Visio 2013, see the Visio Automation reference. For more information about the new ShapeSheet cells in Visio 2013, see the article What’s new for ShapeSheet developers in Visio 2013.

    Shape effects

    New shape effects such as bevel, 3-D rotation, glow, reflection, and sketching have been added to Visio 2013. The ShapeSheet includes new cells for working with these effects. The following figure shows a shape to which effects have been applied.

    You can also use Office VBA objects such as TextFrame2, GlowFormat, and ReflectionFormat and their members to apply shape effects.

    For more information about the new ShapeSheet cells in Visio 2013, see the article What’s new for ShapeSheet developers in Visio 2013.

    Commenting

    Visio 2013 includes a new commenting framework. Comments can now be associated with a particular shape or page. Visio 2013 includes two new objects, Comments and Comment. New APIs for accessing comments programmatically include the Document.Comments, Page.Comments, Shape.Comments, and Page.ShapeComments properties.

    The following images show what comments looked like in Visio 2010 and what they look like in Visio 2013.

     

     

    Visio Services includes JavaScript APIs to read the comments from a page or shape in a diagram.

    Note: You can no longer access comments in the ShapeSheet.

    Coauthoring

    Visio 2013 includes the ability to co-author diagrams stored on SharePoint or OneDrive. Developers have access to the Document.AfterDocumentMerge event which provides information about diagram changes due to coauthoring. Solution developers also have the ability to disable coauthoring to suit their custom needs by using the NoCoauth cell on the Document ShapeSheet.

    Customizable image clipping

    Visio 2013 supports defining a Custom Image Clipping path to crop images to any shape. This extends the capacities of Visio 2010, which supported clipping images in a rectangular way. This functionality is available in the ShapeSheet by using the ClippingPath cell in the Foreign Image Info section.

    Relative geometries

    In previous versions of Visio, shape geometry was defined by formulas that depended on the height or width of the shape. For example, in Visio 2010 the vertices of many built-in Visio shapes were defined by multiplying the height or width of the shape by a constant. These shapes had Geometry sections that included MoveTo or LineTo rows (for example) with formulas like Width1 and Height0.

    Visio 2013 now supports relative geometry in the ShapeSheet. Shape developers can now use relative geometries to specify geometries as simple values or formulas, which multiply by the height or width automatically. You can now express Shape vertices by using constants, for instance—you no longer need to express vertices as multiples of the shape width or height. This makes it easier for you to create shapes that have better performance and smaller file sizes. New rows include the RelMoveTo and RelLineTo rows where the X and Y cell values are automatically multiplied by the width or height of the shape (respectively).

    Support for Business Connectivity Services (BCS) data

    Visio 2013 diagrams can now be connected to external lists on SharePoint Server 2013 servers. An external list is a content source external to SharePoint (for example, a SQL Server table) that has been connected to a SharePoint list by using Microsoft Business Connectivity Services (BCS). Visio Services supports the ability to refresh the Visio diagrams as the data updates.

    For more information about what’s new in Visio Services, see the article Visio Services in SharePoint 2013. For more information about Business Connectivity Services (BCS), see Business Connectivity Services in SharePoint 2013.

    Improvements in Visio Services

    Visio Services in Microsoft SharePoint Server 2013 includes many improvements. As mentioned previously, Visio Services supports the new Visio file format (.vsdx and .vsdm). Visio Services has expanded data refresh and recalculation, including the ability to recalculate formulas across an entire diagram.

    For more information about what’s new in Visio Services, see the article Visio Services in SharePoint 2013.

    Duplicate page

    You can now copy a page and all of its shapes within the same document in Visio 2013. Accordingly, the Page object has a new method, Duplicate, which duplicates the page and returns a new Page object.
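
    A minimal C# interop sketch (an illustration, not code from the article) showing Page.Duplicate in use; the file path is a placeholder.

    using System;
    using Visio = Microsoft.Office.Interop.Visio;

    class DuplicatePageSample
    {
        static void Main()
        {
            var app = new Visio.Application();
            Visio.Document doc = app.Documents.Open(@"C:\Diagrams\Sample.vsdx"); // placeholder path

            Visio.Page original = doc.Pages[1];      // Pages is a 1-based collection
            Visio.Page copy = original.Duplicate();  // new in Visio 2013 – returns the new Page

            Console.WriteLine("Duplicated '{0}' as '{1}'.", original.Name, copy.Name);

            doc.Save();
            app.Quit();
        }
    }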

    Additional resources

    3 Brand New LINQ to Office Providers Available Now!

    The SPSamurai.Office.LINQ namespace contains 3 classes –

    OutlookProvider (LINQ to Outlook), OneNoteProvider (LINQ to OneNote) and ExcelProvider (LINQ to Excel).

    The OutlookProvider is a wrapper class that provides IEnumerable collections over the data exposed by Outlook’s COM interface (appointments, contacts, mails, tasks, …).

    The OneNoteProvider provides collections of notebooks, sections and pages by manipulating the XML hierarchy tree of OneNote. And the ExcelProvider loads an Excel worksheet and provides column definition and row collections.

    All collections are IEnumerable so you can query them with LINQ. The full source code is provided.

    Check out my articles where I describe the implementation of these 3 classes and how to use them. These articles also contain a lot of LINQ query examples.
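
    To give a flavour of the idea, here is a minimal, hypothetical C# sketch. The SPSamurai.Office.LINQ namespace and the OutlookProvider class come from the articles, but the member names (Contacts, City, LastName, Email) are illustrative assumptions only.

    using System;
    using System.Linq;
    using SPSamurai.Office.LINQ;

    class LinqToOutlookSample
    {
        static void Main()
        {
            var outlook = new OutlookProvider();

            // Hypothetical collection and property names, purely for illustration
            var contacts = from c in outlook.Contacts
                           where c.City == "Johannesburg"
                           orderby c.LastName
                           select new { c.LastName, c.Email };

            foreach (var contact in contacts)
            {
                Console.WriteLine("{0} <{1}>", contact.LastName, contact.Email);
            }
        }
    }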

    Class diagrams:

     

     

     

     

     

    Features:
    Set flag with due date from predefined list: Today, Tomorrow, This Week, Next Week or Custom  
    Different options of follow-up visualization using combinations of flag, text and date  
    Support of sorting and filtering features  
    Support of different calendars (Gregorian, Japanese Emperor Era, Korean Tangun Era, Hijri, etc.)  
    Supported Datasheet view  
    Two-way conversion between ArtfulBits Follow-Up and standard Microsoft® SharePoint® Date and Time column  
    Language pack support (desired localization could be added by request)  

     

    Contact me at tomas.floyd@outlook.com for these tools and more SharePoint, Azure and Office 365 Apps, Tools and Web Parts or for specialised custom SharePoint Development

    How To : Reserve Resources on the Calendar in SharePoint 2013 / Online

    I suppose many of you know about a great calendar feature in SharePoint 2010 called resource reservation. It lets you organize meetings in a useful interface where you can select multiple resources (such as meeting rooms, projectors and other facilities) and the required participants, and then pick a time frame that is free for all participants and facilities in the calendar view.

    You can switch between week and day views.

    Here is a screenshot of the calendar with resource reservation and member scheduling features:

    You can change resources and participants in the form of your meeting, find free time frames in the diagram and check double booking:

    There are two ways to add the resource reservation feature into SharePoint 2010 calendar:

    1. Enable the ‘Group Work Lists’ web feature, add a calendar, and go to its settings. Click the ‘Title, description and navigation’ link in the ‘General Settings’ section. There, check ‘Use this calendar to share member’s schedule?’ and ‘Use this calendar for Resource Reservation?’
    2. Create a site based on ‘Group Work Site’ template.

    Here are the detailed instructions: http://office.microsoft.com/en-us/sharepoint-server-help/enable-reservation-of-resources-in-a-calendar-HA101810595.aspx

    SharePoint 2013 on-premise

    After migrating to SharePoint 2013, I discovered that these features were excluded from the new platform and retained only for backward compatibility.

    So, if you migrate an application with a booking calendar already installed from SharePoint 2010 to SharePoint 2013, you will keep the resource reservation functionality, but you cannot activate it on a new SharePoint 2013 application through the default interface.

    Microsoft officially attributed this restriction to the unpopularity of the resource reservation feature: http://technet.microsoft.com/en-us/library/ff607742(v=office.15).aspx#section1

    First, I found a solution for SharePoint 2013 on-premise. It is possible to make the missing site templates, including ‘Group Work Site’, visible again. Then you just need to create a site based on this template and you will get the resource reservation calendar.

    Go to C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\TEMPLATE\1033\XML, open the WEBTEMP.XML file, find the element with the ‘Group Work Site’ title attribute and change its Hidden attribute from TRUE to FALSE.

    SharePoint 2013 Online in Office 365

    Perfect, now we can use a free SharePoint booking system based on the standard calendar. But what about SharePoint Online in Office 365? We do not have access to WEBTEMP.XML in its file system.

    After some research I developed a sandbox solution that enables the hidden ‘Group Work Lists’ feature and adds a calendar with the resource reservation and member scheduling features. Please download it and follow these instructions to install it:

    1. Go to the site collection settings.
    2. Open ‘Solutions’ area from ‘Web Designer Galleries’ section.
    3. Upload CalendarWithResources.wsp package and activate it.
    4. Now, navigate into the site where you wish to add the calendar with the enabled resource reservation feature.
    5. Open site settings -> site features.
    6. Activate ‘Calendar With Resources’ feature.

    Great, now you have a group calendar with the ability to book resources and schedule meetings.

    This solution works for SharePoint 2013 on-premise as well, so you can use it instead of WEBTEMP.XML file modification.
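
    For the curious, activating a hidden web feature from a feature receiver is roughly what such a sandbox solution needs to do. The sketch below is an assumption, not the actual CalendarWithResources.wsp code, and the GUID is a placeholder for the real ‘Group Work Lists’ feature ID.

    using System;
    using Microsoft.SharePoint;

    public class CalendarWithResourcesReceiver : SPFeatureReceiver
    {
        // Placeholder – substitute the real 'Group Work Lists' web feature ID
        private static readonly Guid GroupWorkListsFeatureId =
            new Guid("00000000-0000-0000-0000-000000000000");

        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            var web = (SPWeb)properties.Feature.Parent;

            // Activate the hidden feature so the group calendar (resource reservation
            // and member scheduling) becomes available on the site
            if (web.Features[GroupWorkListsFeatureId] == null)
            {
                web.Features.Add(GroupWorkListsFeatureId);
            }
        }
    }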

    Free download :

    WSP File – http://1drv.ms/1f7ZSqO

     

    When should I choose to create a mail app versus an add-in for Outlook?


    Some of you may or may not be aware that alongside the legacy COM-based Office client object models, Office 2013 supports a new apps for Office developer platform. This blog post is intended to help new and existing Office developers understand the main differences between the COM-based object models and the apps for Office platform. In particular, this post focuses on Outlook, suggests why you should consider developing solutions as mail apps, and identifies those exceptional scenarios where add-ins may still be the more appropriate choice.

    Contents:

    An introduction to the apps for Office platform

    Architectural differences between add-in model and apps for Office platform

    Main features available to mail apps

    Major objects for mail apps

    Reasons to create mail apps instead of add-ins for Outlook

    Reasons to choose add-ins

    Conclusion

    Further references

    An introduction to the apps for Office platform

    The apps for Office platform includes a JavaScript API for Office and a schema for apps for Office manifests. You can use this platform to extend web services and content into the context of rich and web clients of Office. An app for Office is a webpage that is developed using common web technologies, hosted inside an Office client application (such as Outlook) on-premises or in the cloud. Of the three types of apps for Office, the type that Outlook supports is called mail apps. While you use the legacy APIs—the object model, PIA, and MAPI—to automate Outlook at an application level, you can use the JavaScript API for Office in a mail app to interact at an item level with the content and properties of an email message, meeting request, or appointment. You can publish mail apps in the Office Store or in an internal Exchange catalog. End users and administrators can install mail apps for an Exchange 2013 mailbox, and use mail apps in the Outlook rich client as well as Outlook Web App. As a developer, you can choose to make your mail app available for end users on only the desktop, or also on the tablet or smart phone. You can find more information about the apps for Office platform by starting here: Overview of apps for Office.

    Architectural differences between add-in model and apps for Office platform

    Add-in model

    The Office add-in model offers individual object models for most of the Office rich clients. Each object model is intended to automate the corresponding Office client, and allows an add-in to integrate closely with the behavior of that client. The same add-in can integrate with one or multiple Office applications, such as Outlook, Word, and Excel, by calling into each of the Outlook, Word, and Excel object models. Figure 1 describes a few examples of 1:1 relationships between an Office rich client and its object model.

    Figure 1. The legacy Office development architecture is composed of individual client object models.

     

    Apps for Office platform

    The apps for Office platform includes an apps for Office schema. Using this schema, each app specifies a manifest that describes the permissions it requests, its requirements for its host applications (for example, a mail app requires the host to support the mailbox capability), its support for the default and any extra locales, display details for one or more form factors, and activation rules for a mail app to be available in the app bar.

    In addition to the schema, the apps for Office platform includes the JavaScript API for Office. This API spans across all supporting Office clients and allows apps to move toward a single code base. Rather than automating or extending a particular Office client at the application level, the apps for Office platform allows apps to connect to services and extend them into the context of a document, message, or appointment item in a rich or web client. Figure 2 shows Office applications with their rich and web clients sharing a common app platform.

    Figure 2. The apps for Office development architecture is composed of a common platform and individual object models.

     

    One main difference of note is that the object models were designed to integrate tightly with the corresponding Office client applications. However, this tight integration has a side effect of requiring an add-in to run in the same process as the rich client. The reliability and performance of an add-in often affects the perceived performance of the rich client. Unlike client add-ins, an app for Office doesn’t integrate as tightly with the host application, does not share the same process as the rich client, and instead runs in its own isolated runtime environment. This environment offers a privacy and permission model that allows users and IT administrators to monitor their ecosystem of apps and enjoy enhanced security.

    Main features available to mail apps

    Contextual activation: Mail app activation is contextual, based on the app’s activation rules and current circumstances, including the item that is currently displayed in the Reading Pane or inspector. A mail app is activated and becomes available to end users when such circumstances satisfy the activation rules in the app manifest.

    Matching known entities or regular expression: A mail app can specify certain entities (such as a phone number or address) or regular expressions in its activation rules. If a match for entities or regular expressions occurs in the item’s subject or body, the mail app can access the match for further processing.

    Roaming settings: A mail app can save data that is specific to Outlook and the user’s Exchange mailbox for access in a subsequent Outlook session.

    Accessing item properties: A mail app can access built-in properties of the current item, such as the sender, recipients, and subject of a message, or the location, start, end, organizer, and attendees of a meeting request.

    Creating item-level custom properties: A mail app can save item-specific data in the user’s Exchange mailbox for access in a subsequent Outlook session.

    Accessing user profile: A mail app can access the display name, email address, and time zone in the user’s profile.

    Authentication by identity tokens: A mail app can authenticate a user by using a token that identifies the user’s email account on an Exchange Server.

    Using Exchange Web Services: A mail app can perform more complex operations or get further data about an item through Exchange Web Services.
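
    As a rough sketch of that last point (an assumption, not code from the article): a mail app typically hands the item’s EWS id and identity token to its own backend service, which could then use the EWS Managed API from C# along these lines. The endpoint, credentials and item id below are placeholders.

    using System;
    using Microsoft.Exchange.WebServices.Data;

    class EwsItemLookup
    {
        static void Main()
        {
            var service = new ExchangeService(ExchangeVersion.Exchange2013)
            {
                Url = new Uri("https://outlook.office365.com/EWS/Exchange.asmx"), // placeholder endpoint
                Credentials = new WebCredentials("user@contoso.com", "password")  // placeholder credentials
            };

            string ewsItemId = "AAMk...";  // placeholder – supplied by the mail app

            // Bind to the message and read a couple of first-class properties
            EmailMessage message = EmailMessage.Bind(service, new ItemId(ewsItemId),
                new PropertySet(ItemSchema.Subject, ItemSchema.DateTimeReceived));

            Console.WriteLine("{0} ({1})", message.Subject, message.DateTimeReceived);
        }
    }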

    Permissions model and governance: Mail apps support a three-tier permissions model. This model provides the basis for privacy and security for end users of mail apps.

    Major objects for mail apps

    For mail apps, you can look at the JavaScript API for Office object model in three layers:

    1. In the first layer, there are a few objects shared by all three types of apps for Office: Office, Context, and AsyncResult.
    2. The second layer in the API that is applicable and specific to mail apps includes the Mailbox, Item, and UserProfile objects, which support accessing information about the user and the item currently selected in the user’s mailbox.
    3. The third layer describes the data-level support for mail apps:
      1. There are CustomProperties and RoamingSettings that support persisting properties set up by the mail app for the selected item and for the user’s mailbox, respectively.
      2. There are the supported item objects, Appointment and Message, that inherit from Item, and the MeetingRequest object that inherits from Message. These objects represent the types of Outlook items that support mail apps: calendar items of appointments and meetings, and message items such as email messages, meeting requests, responses, and cancellations.
      3. Then there are the item-level properties (such as Appointment.subject) as well as objects and properties that support certain known Entities objects (for example Contact, MeetingSuggestion, PhoneNumber, and TaskSuggestion).

    Figure 3 shows the major objects: Mailbox, Item, UserProfile, Appointment, Message, Entities, and their members.

    Figure 3. Major objects and their members used by mail apps in the JavaScript API for Office.

    Figure 4 shows all of the objects and enumerations in the JavaScript API for Office that pertain to mail apps.

    Figure 4. All objects for mail apps in the JavaScript API for Office.

    Figure 5 is a thumbnail of a diagram with all the objects and members that mail apps use. Zoom into the diagram at http://go.microsoft.com/fwlink/?LinkId=317271.

    Figure 5. All objects and members used by mail apps in the JavaScript API for Office.

    Reasons to create mail apps instead of add-ins for Outlook

    The following are common reasons why mail apps are a better choice for developers than add-ins:

    • You can use existing knowledge of, and the benefits of, web technologies such as HTML, JavaScript, and CSS. For power users and new developers, XML, HTML, and JavaScript require a less significant ramp-up time than COM-based APIs such as the Outlook object model.
    • You can use a simple web deployment model to update your mail app (including the web services that the app uses) on your web server without any complex installation on the Outlook client. In fact, any updates to the mail app, with the exception of the app manifest, do not require any updating on the Office client. You can update the code or user interface of the mail app conveniently just on the web server. This presents a significant advantage over the administrative overhead involved in updating add-ins.
    • You can use a common web development platform for mail apps that can roam across the Outlook rich client and Outlook Web App on the desktop, tablet, and smartphone. On the other hand, add-ins use the object model for the Outlook rich client and, hence, can run on only that rich client on a desktop form factor.
    • You can enjoy rapid turnaround of building and releasing apps via the Office Store.
    • Because of the three-tier permissions model, users and administrators can appreciate better security and privacy in mail apps than add-ins, which have full access to the content of each account in the user’s profile. This, in turn, encourages user consumption of apps.
    • Depending on your scenarios, there are features unique to mail apps that you can take advantage of and that are not supported by add-ins:
      • You can specify a mail app to activate only for certain contexts (for example, Outlook displays the app in the app bar only if the message class of the user-selected appointment is IPM.Appointment.Contoso, or if the body of an email contains a package tracking number or a customer identifier).
      • You can activate a mail app if the selected message contains some known entities, such as an address, contact, email address, meeting suggestion, or task suggestion.
      • You can take advantage of authentication by identity tokens and of Exchange Web Services.

    Reasons to choose add-ins

    The following features are unique to add-ins and may make them a more appropriate choice than mail apps in some circumstances:

    • You can use add-ins to extend or automate Outlook at an application-level, because the object model and PIA have extensive integration with Outlook features (such as all Outlook item types, user interface, sessions, and rules). At the item-level, add-ins can interact with an item in read or compose mode. With mail apps, you cannot automate Outlook at the application level, and you can extend Outlook’s functionality in the context of only the read-mode of the supported items (messages and appointments) in the user’s mailbox.
    • You can specify custom business logic for a new item type.
    • You can modify and add custom commands in the ribbon and Backstage view.
    • You can display a custom form page or form region.
    • You can detect events such as sending an item or modifying properties of an item.
    • You can use add-ins on Outlook 2013 and Exchange Server 2013, as well as earlier versions of Outlook and Exchange. On the other hand, mail apps work with Outlook and Exchange starting in Outlook 2013 and Exchange Server 2013, but not earlier versions.

    Conclusion

    When you are considering creating a solution for Outlook, first verify whether the supported major features and objects of the apps for Office platform meet your needs. Develop your solution as a mail app, if possible, to take advantage of the platform’s support across Outlook clients over the desktop, tablet, and smartphone form factors. Note that there are still some circumstances where add-ins are more appropriate, and you should prioritize the goals of your solution before making a decision.

    Further references

    Apps for Office and mail apps

    How To : Use the Content Query Web Part for SharePoint 2013 Search

    Meeting client requirements with SharePoint often involves aggregating items somehow – often we want to display things like “all the overdue tasks across all finance sites”, or “navigation links to all of the subsites of this area” or “related items (e.g. tagged with the same term)” and so on. In SharePoint 2010 there have been two main ways of accomplishing this:


    • Content Query web part
    • Custom solution built on SPSiteDataQuery (site collection-scoped), SPQuery (list-scoped) or search API

    To a lesser extent, using the search web parts as part of a custom solution may also have been an option. Regardless, it was common to need custom code to meet such requirements. Maybe we needed to add paging to the results, or we needed to use some value obtained dynamically through code (e.g. from the current site/current page/current user/something else) – several Codeplex solutions arose from this gap, and lots of lines of code were written.
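
    For context, the 2010-style custom-code route often looked something like the following SPSiteDataQuery sketch – a minimal illustration, not any particular production solution. The site URL is a placeholder, and list template 107 is the Tasks template.

    using System;
    using System.Data;
    using Microsoft.SharePoint;

    class OverdueTasksRollup
    {
        static void Main()
        {
            using (var site = new SPSite("http://intranet/sites/finance")) // placeholder URL
            using (SPWeb web = site.OpenWeb())
            {
                var query = new SPSiteDataQuery
                {
                    Lists = "<Lists ServerTemplate=\"107\" />",            // task lists only
                    Webs = "<Webs Scope=\"Recursive\" />",                 // the whole site collection
                    ViewFields = "<FieldRef Name=\"Title\" /><FieldRef Name=\"DueDate\" />",
                    Query = "<Where><Lt><FieldRef Name=\"DueDate\" />" +
                            "<Value Type=\"DateTime\"><Today /></Value></Lt></Where>"
                };

                DataTable overdueTasks = web.GetSiteData(query);

                foreach (DataRow row in overdueTasks.Rows)
                {
                    Console.WriteLine("{0} (due {1})", row["Title"], row["DueDate"]);
                }
            }
        }
    }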

    SharePoint 2013 presents the Content Search web part as a new option – its capabilities mean that simply using the web part (with some front-end work to meet look and feel requirements) will meet many needs, without the use of custom code. If you’re a developer, the following screenshot should give you a clue as to why code won’t be required too often (with one of my favorite options highlighted):

    CSWP_BasicsTab_AdvancedMode_PropertyFilterValues

    It’s incredibly powerful, and it’s a good idea to understand what it can do.

    Understanding the deal with search-based solutions

    As the name suggests, the Content Search web part is powered by SharePoint’s search function. As such, there are the following considerations:

    • The CSWP can be configured to “see” items anywhere in SharePoint (potential advantage)
      • In contrast, the CQWP and related SPSiteDataQuery can only search within the current site collection – the site collection “boundary” is a factor
    • Results shown are not guaranteed to be 100% up-to-date (potential disadvantage) 
      • Since a search crawl has to run before any content changes will be shown in search results (remember this can include titles, summaries, images and so on for pages/documents), if a user creates/edits an item it will not be shown immediately. This can be a critical point.
      • Furthermore, my understanding from a FAST engineer is that in SharePoint 2013 there is no longer any means of pushing a document directly into the search index – in previous FAST incarnations, including FAST for SharePoint 2010, there were options such as docpush.exe for “proactively” adding an item to the index, rather than waiting for the next search crawl.
      • That said, it should be possible to obtain much lower indexing latencies in SharePoint 2013 via the “Continuous Crawl” capability. In most deployments, my guess would be that changes would be reflected within a few minutes at most if this is enabled (where previously you may have had an incremental crawl scheduled every 15, 30 or 60 minutes for a SharePoint sites content source).

    Summary – if the functionality you are creating needs fully up-to-date results (e.g. a user has created/edited something and it needs to be immediately reflected in the site) then you will probably need to stick with the original approaches (i.e. a query-based rather than search-based solution).

    Terminology – new concepts in SharePoint 2013 search

    So if we’re going to build solutions built on SP2013 search, we need to have a basic understanding of some concepts – we’ll run into these time and time again:

    Concept – my quick definition:

    Result Source – Like a search ‘scope’ in SP2007/SP2010, but on steroids. Rules are specified to say what the scope consists of – e.g. DOCUMENTS in my TEAM SITES area (constraining on content type and path in this example). Created centrally, or at the web level. Result Sources can be used in just about any search-related functionality, including the Content Search web part.

    Query Rule – Like a ‘best bet’ on steroids. Provides the ability to show specially formatted results at the top of the results list (e.g. a Promoted Result) for highly-recommended content. In addition to a Promoted Result, we can also do a Result Block (an example could be a block of 5 image results within the main list of text links). Another option is to Change the Ranked Results – i.e. put something at the top, or promote or demote something by 1-10 (previously known as a ‘boost’ in FAST). There is LOTS of flexibility in matching the user’s query, including regular expressions and matching terms in the Managed Metadata store.

    Display Templates – A Display Template is a JavaScript template (similar to jQuery templates) which controls formatting – in the case of the CSWP, this effectively replaces the use of XSL for look and feel. There is a separate template to pick for the overall control and for the formatting of an individual item. The .js files for the templates are stored in the ‘Content Web Parts’ subfolder of the Master Page Gallery.

    Side note – in the context of a search results page (rather than the CSWP), a Display Template is associated with a Result Type (e.g. Word doc, wiki page, PowerPoint file etc.), so we have granular control over how each is displayed (and when). Extremely cool.

    So, lots of flexibility in the search infrastructure. Let’s see some of this in the context of the Content Search web part.

    Configuring the Content Search web part

    There are two main aspects to this:

    • Displaying the right items (Search Criteria)
    • Look and feel (Display Templates)

    In terms of the search criteria, there is enormous flexibility in what the CSWP – and the underlying search capability – can do. For one thing, it’s possible to either directly configure the query entirely in the properties of this web part instance (e.g. show me all documents which meet criteria X), and/or start from a pre-existing Result Source to do some of the filtering. Combining the approaches will be fairly common – an example could be “search only on wiki pages” (an OOTB Result Source) but only show items tagged with X (this defined directly in the CSWP properties).

    Interestingly, configuring a centralized Result Source and a Content Search web part on a page are very similar, even though it would seem some sort of “reusable scope” and a web part are very different things in SharePoint. The overlap comes because underneath both there is a search query which does the work of isolating the desired results – indeed, as we’ll see later the same “Query Builder” UI is used in both places (with a couple of minor differences). So, if you’ve learnt how to configure a CSWP you’ve essentially also learned how to create  a custom Result Source.

     

    Configuring the web part

    The first thing to understand is that the Content Search web part appears in different guises in the web part gallery. The ‘main’ web part is in the ‘Content Rollup’ category:

    CBS_MainWebPartInAdder

    But there are also many pre-configured versions available, each of which finds a specific type of content. This is great for end-users who don’t necessarily think in terms of needing a ‘Content Search’ web part:

    CBS_WebPartsInAdder
    And just to prove the point, the web parts above correspond to the following .webpart definition files in the Web Part Gallery:

    CBS_WebParts

    Once the web part has been added to the page, it can be configured via its tool pane. The main configuration item is the query to use, and this can be started by clicking the ‘Change query’ button:

    CSWP_properties
    This opens the “Build Your Query” dialog – this has tabs labeled BASICS, REFINERS, SORTING, SETTINGS and TEST. This thing is known (unsurprisingly) as the Query Builder – what you might not realize is that it’s used in several places in SharePoint 2013:

    • Configuring a Content Search web part (obviously)
    • Creating a Result Source (specifically in the Query Transform section)
    • Configuring a Search Results web part

    There are some differences – for example, when configuring a Search Results web part there is no SORTING tab because this will be handled in the Result Source or the query. I’m going to talk about things from the perspective of the Content Search web part, but will call out any differences for the other usages – so hopefully by learning the CSWP, you also get to learn 75% of the search infrastructure.

    BASICS tab – Quick Mode

    Although the first tab is labeled ‘BASICS’, I’d say it’s actually the most involved – this is where the query itself is configured, and there is a ‘Quick Mode’ and an ‘Advanced Mode’. You’ll also notice – and let me just say I’d personally be willing to give the Product Manager for this feature A BIG HUG for this – that there’s a “live” results preview pane, permanently visible on the right-hand side of the Query Builder. This shows the first 10 results which would display from running the currently configured search against the current index, without the need to save the web part after each change:

    CSWP_BasicsTab_QuickMode

    Note that if you create your own query, then this preview pane is only able to show results when you are on the TEST tab. And we’ll talk about that towards the end.

    Let’s now walk through the various configuration steps in here.

    Select a query

    In Quick Mode, the dropdown contains the Result Sources (see my definition above if you’ve forgotten already :)) which come out-of-the-box with SharePoint 2013 – one of these may provide a good foundation for what you need:

    CSWP_BasicsTab_QuickMode_SelectQuery
    As you select a Result Source from the dropdown, other options may become available lower down. So if I want to find items matching a specific content type, I get this:

    RestrictByContentType
    In fact, this option to restrict by content type appears for many of the pre-defined Result Sources, not just “Items matching a content type” – which makes sense, because it’s a common thing to include as a filter. Similarly, “Items matching a tag” and several other queries give this interface for selecting a tag to filter on:

    RestrictByTag
    And, happy days, if I specify the tag by typing one I get auto-complete to help me pick the term – this is a fully-fledged Managed Metadata input field. Consequently there’s also full validation of the terms you type-in (though this takes a few seconds to show), so if an author accidentally enters something which isn’t a known term, he/she should spot the mistake immediately:

    TermValidation

    Consider also that those middle options (using the navigation term associated with the current page) are exactly what’s needed to build many types of ‘related items’ functionality – again, no code needed now.

    Restrict results by app

    In the next section, I can restrict the scope of the results to a particular location (e.g. the current site). This enables me to get something like the Content Query web part behavior of only searching within the current site collection if needed – because although we now have the power, it won’t always make sense to go across the entire farm 🙂

    RestrictByApp

    Add additional filters

    In the next section I can supplement the query with any valid query text, e.g. a property filter. In this example, I’m adding a filter to present only items which were created by the current user:

    AdditionalFilter

    Sort results

    When we scope our query to a pre-defined Result Source (as we are here in the CSWP ‘Quick Mode’), then sorting is usually pre-defined at that level. The CSWP does give us the opportunity to override sorting based on some popularity ranking models (around most viewed/most clicked) instead, though – expect proper wording to appear in this dropdown in the RTM version, but you get the idea:

    SortResults
    So what happens if none of the options presented so far do what you want? An example could be wanting to use an existing Result Source (e.g. ‘wiki pages’) but sort on Last Modified in descending order. Obviously the dropdown above does not allow that. We could create a custom Result Source and implement the query/sorting there, but that only really makes sense if we expect it to be re-used in multiple places.

    In these cases, we can click into Advanced Mode (still on the BASICS tab).

    BASICS tab – Advanced Mode

    In Advanced Mode you basically get to specify the full query text yourself. In my mind, this is like building a solution with the search API in SP2007/SP2010 – I saw many custom solutions (and built several myself) which used the FullTextSqlQuery or KeywordQuery classes to find the right items. SharePoint 2013 makes it much easier to have this full control whilst still piggybacking onto the out-of-the-box web parts – meaning less work and more productivity.
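
    For comparison, here is a minimal sketch of that search-API route using the SharePoint 2013 server-side KeywordQuery and SearchExecutor classes. The site URL and KQL text are placeholders, and the result handling assumes the standard RelevantResults table.

    using System;
    using System.Linq;
    using Microsoft.Office.Server.Search.Query;
    using Microsoft.SharePoint;

    class SearchApiSample
    {
        static void Main()
        {
            using (var site = new SPSite("http://intranet")) // placeholder URL
            {
                var query = new KeywordQuery(site)
                {
                    QueryText = "ContentType:\"Wiki Page\"", // placeholder KQL
                    RowLimit = 10
                };

                var executor = new SearchExecutor();
                ResultTableCollection results = executor.ExecuteQuery(query);

                // Pick out the main result set and print the titles
                ResultTable relevantResults = results
                    .Filter("TableType", KnownTableTypes.RelevantResults)
                    .FirstOrDefault();

                if (relevantResults != null)
                {
                    foreach (var row in relevantResults.ResultRows)
                    {
                        Console.WriteLine(row["Title"]);
                    }
                }
            }
        }
    }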

    When switching to the Advanced Mode, a couple of things become available:

    • A SORTING tab (details later)
    • Controls to help you build the query (which you’d previously do essentially by hand in earlier versions), with ‘Keyword filter’ and ‘Property filter’ options. These can be combined as you like, and the resulting query text appears in the textbox at the bottom:

    CSWP_BasicsTab_AdvancedMode

    Avoid custom code by using tokens

    There are many tokens which can be used when building a query in this way – often you might want to pass something into the query, such as a URL (querystring) parameter, the value in a particular field on the page, and so on. Being able to do this unlocks a huge range of possibilities for building solutions. This is where the first image in this article comes from – here’s a reminder:

    CSWP_BasicsTab_AdvancedMode_PropertyFilterValues

    In summary, when using the Advanced Mode of the query builder you should be able to target just about any content in your SharePoint environment.

    SORTING tab (Advanced Mode only)

    In SharePoint 2010 Enterprise Search, you could only sort by relevance/rank (the normal search engine approach) or date. FAST for SharePoint 2010 had more options (you could sort by a Managed Property). In SharePoint 2013, frankly the sort options alone are enough to blow your mind 🙂  If you don’t need anything specific around sorting then you can skip this bit, but if you do then here are your options:

    First you can sort by way more things than just rank and date:

    CSWP_SortTab
    One thing to note there – I’m unclear as to what makes it into that ‘Sort by’ list and what does not. It’s not Managed Properties as far as I can tell, so although the list is long many options may not be hugely useful. Still, better than before.

    Usefully, you can now do multi-level sorting (sort by this, then by that). The ‘Add sort level’ link in the image above adds another row, allowing me to do things like sorting by URL depth (so items higher up in the site hierarchy show at the top), and then by rank (that makes sense, because there’ll be lots of items at the same URL depth so I do need two levels of sorting):

    CSWP_SortTab_Custom

    Note that effectively what I’m doing here is building some sort of custom ranking model. This works great if I need something very specific on sorting, but also note SharePoint 2013 comes with several ranking models – the next section allows me to pick from these if I’ve left the ‘Sort by’ dropdown on ‘Rank’, unlike in the image above. This is because all these options are effectively different forms of rank – most are around People Search or popularity:

    CSWP_SortTab_RankingModel

    And for those occasions when the client is telling you that his/her strategic document really has to be on page 1 of the results (but not a Promoted Result/best bet), you have ‘Dynamic ordering’ – you can boost/demote results, including the option to promote to the top:

    CSWP_SortTab_DynamicOrdering

    REFINERS tab

    In the context of search, refiners are usually the links on the search engine’s results page (typically in the left nav) which allow the user to further filter the results. So if I do a search for “meeting minutes” and get lots of results, it would be nice to be able to filter by, say:

    • Date range
    • SharePoint site (since minutes might be stored in individual project sites)
    • Author
    • ..and so on

    However, in the context of the Content Search web part, refiners actually allow you to do this filtering as part of the initial query. The REFINERS tab is effectively a convenience to you, the person configuring the web part – what happens is that a search is performed whilst in edit mode, and all relevant refiners (e.g. managed properties) are presented as available refiners. These can be selected and moved over to the right-hand list:

    CSWP_RefinersTab
    The effect of this is that a further filter is added to my query. In the example above, this may be easier than using a Property Filter on the BASICS tab – since there I have little support, I just select the property and type the value:

    CSWP_BasicsTab_PropertyFilter
    In the REFINERS tab, SharePoint is doing the search for me (as it’s configured so far), and only coming back with values which have been found in the returned results.

    SETTINGS tab

    The SETTINGS tab controls some high-level options for running the search:

    CSWP_SettingsTab

    Query rules

    Since these can be defined at the parent site or search service, it could be the case that your CSWP gets affected by one of these. As the radio button shows, this can be overridden, but consider that some types of Query Rules may not have an effect anyway – as a reminder (from the table at the beginning), a Query Rule can either:

    • Add a promoted result
    • Add a result block
    • Change the ranked results somehow (by modifying the query)

    Out of these 3 actions, 1.5 of them could affect the results of a ‘default’ CSWP. This can be summarized:

    Query Rule action – will it affect CSWP results?

    • Add a promoted result – Not by default. When a search runs in SharePoint, multiple result sets are returned (e.g. ‘main results’, ‘best bet results’ and so on – in SP2013, the real names for these are ‘RelevantResults’, ‘SpecialTermResults’, ‘PersonalFavoriteResults’ and ‘RefinementResults’). Although a CSWP can be configured to show any table, the default is ‘RelevantResults’ – and a promoted result gets added to ‘SpecialTermResults’.
    • Add a result block – Yes, if the result block is configured to show ‘ranked within core results’ (the default), rather than ‘shown above core results’.
    • Change ranked results – Yes.

    For completeness, here’s the place in the CSWP where you select which search result set to use (e.g. if you want to switch from the default of ‘RelevantResults’):

    CSWP_ResultTableSelection

    Options in the Results Table dropdown (shown to the left):

    CSWP_ResultTableSelectionOptions

    URL rewriting

    This one is fairly simple – if results are being returned from a catalog which is using “friendly” URLs, then the CSWP can override this to use the original URLs. It may not always make sense to use rewritten URLs in aggregations outside of the catalog pages, especially if you’ve implemented anything funky there.

    Loading behavior

    This is useful – specify whether the CSWP web part instance should load in the main page load (default) or in an AJAX manner after the main page has finished. Considering that a CSWP could either be the centerpiece of your landing page or merely some page footer navigation, it’s nice to be able to prioritize in this way.

    Priority

    Similarly, we can actually specify High, Medium or Low priority for each CSWP instance we use – great for the different usages we will have, although as per the description, note this only has any effect if the search service is overloaded.

    TEST tab

    The TEST tab is hugely useful – it provides you the ability:

    • To see the underlying query text (in Keyword Query Language [KQL]) which has been generated (though it must be edited in other tabs)
    • To see the preview when you are defining a query yourself (the preview pane will be empty on other tabs in this scenario)

    CSWP_TestTab_Less
    Which is all great, but at first glance it’s easy to miss some extra functionality – if the ‘Show more’ link is clicked, other information becomes visible including details on any refiners and Query Rules which have been applied. So below I can see that a custom Query Rule I created has indeed been used, so there’s no guesswork on (for example) whether a certain item is actually being promoted or not:

    CSWP_TestTab_More

    Sidenote – listing items from ONE site/list/library with the Content Search web part

    Worthy of a quick note – if all you need to do is roll-up content from one list/library, then you can do this with the CSWP – in the query, simply restrict the search using PATH:[URL to document library]. The Query Builder UI helps you do this by providing the ‘Restrict by app’ area:

    CSWPrestricttositeorlibrary_thumb2

    N.B. One potential gotcha here is that you need ‘HTTP’ in the path if your sites are browsed on HTTPS but crawled on HTTP (as in my case).

    If you do want to filter by site/list/library, consider of course that the good ol’ Content Query web part will work just fine here, and you’ll get instant changes as content is changed. What you won’t have is the Content Search web part’s ability to automatically use tokens in the query (e.g. the value of the current navigation category, a value from the current user’s profile, etc.).

    Summary

    The Content Search web part is a great tool in the SharePoint consultant’s box of tricks. Configuration may prove quite simple for some scenarios, but there is also a huge amount of flexibility, and so a certain degree of complexity comes with that. Many advanced scenarios which make use of SP2013 search capabilities (such as Result Sources, Query Rules, promoted results and so on) will be possible – knowing the details will help you identify whether the CSWP can be the answer to a particular problem or not.

    Introduction to the Microsoft Monitoring Agent

    You can now download the Microsoft Monitoring Agent and install it on any server in your enterprise that meets the minimum installation requirements.

    The Microsoft Monitoring Agent is a simple installation that is included with System Center Operations Manager 2012 R2 or can be installed separately to be used in a standalone manner. When using the Microsoft Monitoring Agent as a standalone tool the data captured is available as a Visual Studio IntelliTrace file. These files can be opened with Visual Studio Ultimate 2013 Release Candidate.

    The rest of this post will focus on using the Microsoft Monitoring Agent in a standalone mode.

    Using MMA in standalone mode

    After installing the Microsoft Monitoring Agent you enable monitoring of your IIS hosted application using PowerShell commands. Let’s say that I have installed my application (FabrikamFiber) under the default application node on my IIS server as shown below.

    To start monitoring this application I use the Start-WebApplicationMonitoring PowerShell command, running as Administrator:

    A couple of things to notice:

    • The name of the application that I am monitoring includes the site name (Default Web Site) as well as the application name (FabrikamFiber.Web).
    • I chose the monitor mode. My options are to use monitor, trace, or custom. I will focus on the monitor mode in this blog post.
    • The output path is a file location that I have already created. You may wish to make this location shared so that both developers and operations team members can access the files that get created in that location.

    Monitoring mode has been designed to have minimal impact on the running application and only records data when the Microsoft Monitoring Agent detects undesirable behavior, such as a problematic exception or poor performance. The default settings record exceptions that bubble up to the global exception handler and pages that take longer than 5 seconds to respond from the server. In addition to the conditions that trigger data collection, the Microsoft Monitoring Agent also limits the frequency at which it records data to a maximum of 60 similar events per day.

    There are a couple of approaches that you can take to use the Microsoft Monitoring Agent in monitoring mode effectively. One method is to gather an IntelliTrace file after a problem has been reported. A more proactive approach is to gather an IntelliTrace file on a regular schedule, such as daily, to check for problematic behavior in your application that may not have been reported by your users.

    Generating and Using the IntelliTrace file

    To produce an IntelliTrace file you use the Checkpoint-WebApplicationMonitoring command as shown below:

    The IntelliTrace file can then be opened using Visual Studio 2013 Release Candidate. Any performance violations are listed in the Performance Data section of the IntelliTrace Summary page and exceptions are shown in the Exceptions section.

    Diagnosing exceptions remains very similar to the experience available with Visual Studio 2012. Diagnosing performance events is detailed in a separate blog post.

    For detailed information on the Microsoft Monitoring Agent refer to TechNet.

    For expanded scenarios and usage information refer to MSDN.

    Microsoft Application Insights – A comprehensive guide : Part 1 – Getting Started

    Application Insights Overview

    With Application Insights, we have the same excellent capabilities from the Application Performance Monitoring (APM) feature (formerly AviCode) available in SCOM 2012 R2 – with the exception that it’s all run from the cloud as part of Visual Studio Online. With this type of monitoring for your applications, you can ensure that they are available and performing optimally, while leveraging usage data to drive improvements and trends.

    In relation to the Microsoft Monitoring Agent used for both Application Insights and SCOM 2012 R2: both share the same source code, but there is a slight difference in serialization that determines which REST interface and location the agent sends its performance data to.

    Application Insights supports both .NET and Java web applications. On the Java side of the house, it supports monitoring Tomcat 6, Tomcat 7 or JBoss 6. For the purposes of this deep-dive series however, I’m going to concentrate on demonstrating its capabilities monitoring .NET web applications and I’ll save the Java monitoring for a later post.

    You can use Application Insights to monitor web applications that are running in an on-premise/virtual machine scenario and of course, it’s also fully supported to monitor web applications running as a web role in Windows Azure Cloud Services. If you’re a Windows Phone app developer, you might be interested in the capability to view usage trends and other analytical data as users download and use your app on a daily basis.

    The method of getting these different environments monitored varies depending on scenario but typically, it’s a straight-forward enough process as you’ll understand when you read through this blog series.

    Regardless of the environment you’re monitoring with Application Insights, you can ensure you’re kept up to date with any performance issues (slow responses, uncaught exceptions etc.) by enabling email notification direct to your inbox. If you want to use Visual Studio to view the stack trace to help triage the problem, then this is an easy option too.

    So, that’s a high-level overview of what Application Insights can do, now let’s get started!

    Creating Your Account

    The first thing you’ll need to do is to create a new Visual Studio Online account by clicking on the following link to sign up:

    http://www.visualstudio.com/products/visual-studio-online-overview-vs

    Click on the ‘Ready to Go?’ tile

    Enter your Microsoft Account (formerly Windows Live ID) details, then click the Sign In button. If you don’t yet have a Microsoft Account, you can sign up for a new one here.

    Input all your details into the ‘Create a Visual Studio Online Account’ window, then hit the Create Account button to move on.

    Once you’ve created your Visual Studio Online account, you’ll need to specify a name for your first project. Call it what you want, then hit the Create Project button to finish.

    Now, at the Overview screen of your Home page, you should see a Blue tile titled ‘Try Application Insights’ as shown below. If you can’t see the Blue AI tile, then click the Help button and choose the ‘Display Announcement’ option from the resulting menu.

    Click on the Blue tile and you’ll be taken to the *Insights view, where you’ll be prompted for an invitation code to gain access.

    Invitation Code? I hear you ask.

    Don’t worry if you don’t have one. Even though AI is still only available as a Preview, the good folks over at Microsoft have made a public code available at the following link:

    Type in your code, then hit the Get Started button to enter the new world of Application Insights!

    That’s it for Part 1 of this series. In Part 2, we’ll start work on building a demo .NET web application environment for us to give our new Application Insights account a test drive in.

    SharePoint Development roles urgently need to be filled at an MS Gold Partner – contact me now for more information (sorry, no recruiters – I am filling private positions)

    Senior SharePoint Developers needed urgently for MS Gold Partner in Sandton/Bryanston :

    3 – 5 years of development experience.

    2 years of experience in SharePoint.

    3 years experience in C#.

    A minimum of 3 years experience in Visual Studio .NET 2005 – 2008.

    A minimum of 3 years experience in ASP.NET , HTML web development.

    A minimum of 3 years experience with Javascript.

    A minimum of 3 years experience with Windows XP, Windows 2003 and Windows Vista.

    A minimum of 3 years experience in relational database design and implementation with SQL Server
    Advantageous (nice-to-have):

    • Windows SharePoint Server.
    • Microsoft Office SharePoint Server.
    • BizTalk
    • Web Analytics
    • Microsoft CRM
    • K2

    Various Senior .Net Developer positions available at an MS Gold Partner, part of the Britehouse Group! Contact me now! (Sorry, no recruiters – I am filling private positions)

    Required (non-negotiable):

    • A minimum of 4 years of experience developing code in C# and/or VB.NET and ASP.NET.
    • A minimum of 48 months of Visual Studio 2005 / 2006 and/or 2008 experience.
    • A minimum of 48 months of Transact-SQL (stored procedures, views and triggers) experience.
    • A minimum of 48 months of relational database design and implementation experience using MS SQL Server 2000 / 2005 and/or 2008.
    • A minimum of 48 months of HTML experience.
    • A minimum of 48 months of JavaScript experience.
    • 5 years of experience leading a development team.
    • Proficiency in technical architecture and high-level design, as well as test framework design and implementation.

    Senior Developers must be able to perform as Tech-Lead developers, with the following tasks:

    • Technical lead for the development, design and implementation of .NET-based solutions as part of the projects team.
    • Collaborate with Developers, Account Managers and Project Managers.
    • Estimate development tasks and execute well on project schedules.
    • Interact with clients to create requirement specifications for projects.
    • Innovate new solutions and keep up with new and emerging technologies.
    • Mentoring of other developers.

    Advantageous (nice-to-have):

    • A 3-year computer science degree or equivalent.

    • Windows SharePoint Server.
    • Microsoft Office SharePoint Server.
    • BizTalk.
    • Microsoft CRM.
    • Experience in web analytics.

    Now available – A SharePoint XML Indexing Connector

    Most organizations have several systems holding their data. Data from these systems must be indexable and made available for search on the common Internal Search portal.

    While most of the different data silos are able to dump or export their full dataset as XML, SharePoint does not include an OOTB general purpose XML indexing connector.

    The SharePoint Server Search Connector Framework is known to be overly complex, and documentation out there about this subject is very limited.

    There are basically two types of custom search connectors for SharePoint 2010 that can be implemented; the .Net Assembly Connector and the Custom Connector. More details about the differences between them can be found here. Mainly, a Custom Connector is agnostic of external content types, whereas each .NET Assembly Connector is specific to one external content type, and whenever the external content type changes, the .Net Assembly Connector must be re-compiled and re-deployed. If the entity model of the external system is dynamic and is large scale a Custom Connector should be considered over the .Net Assembly Connector.

    Also, a Custom Connector provides administration user interface integration, but a .NET Assembly Connector does not.

    The XML File Indexing Connector

    The XML File Indexing Connector that is presented here is a custom search indexing connector that can be used to crawl and index XML files. In this series of posts I am going to first show you how to install, setup and configure the connector. In future posts I will go into more implementation details where we’ll look into code to see how the connector is implemented and how you can customize it to suit specific needs.

    This post is divided into the following sections:

    • Installing and deploying the connector
    • Creating a new Content Source using the connector
    • Using the Start Address of the Content Source to configure the connector
    • Automatic and dynamic generation of Crawled Properties from XML elements
    • Full Crawl vs. Incremental Crawl
    • Optimizations and considerations when crawling large XML files
    • Future plans

    Installing and deploying the connector

    The package that can be downloaded at the bottom of this post includes the following components:

    1. model.xml: This is the BCS model file for the connector
    2. XmlFileConnector.dll: This is the DLL file of the connector
    3. The Folder XmlFileConnector: This includes the Visual Studio Solution of the connector

    Follow these steps to install the connector:

    1. Install XmlFileConnector.dll in the Global Assembly Cache on the SharePoint application server(s):

    gacutil -i “XmlFileConnector.dll”

    2. Open the SharePoint 2010 Management Shell on the application server.

    3. At the command prompt, type the following command to get a reference to your FAST Content SSA:

    $fastContentSSA = Get-SPEnterpriseSearchServiceApplication -Identity “FASTContent SSA”

    4. Add the following registry key to the application server, and set its value to “OSearch14.ConnectorProtocolHandler.1”:

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office Server\14.0\Search\Setup\ProtocolHandlers\xmldoc

    5. Add the new Search Crawl Custom Connector:

    New-SPEnterpriseSearchCrawlCustomConnector -SearchApplication $fastContentSSA -Protocol xmldoc -Name xmldoc -ModelFilePath “XmlFileConnectorModel.xml”

    6. Restart the SharePoint Server Search 14 service. At the command prompt, run:

    net stop osearch14

    net start osearch14

    7. Create a new Crawled Property Category for the XML File Connector. Open the FAST Search Server 2010 for SharePoint Management Shell and run the following command:

    New-FASTSearchMetadataCategory -Name “Custom XML Connector” -Propset “BCC9619B-BFBD-4BD6-8E51-466F9241A27A”

     Note that the Propset GUID must be the one specified above, since this GUID is hardcoded in the Connector code as the Crawled Properties Category which will receive discovered Crawled Properties.

    Creating a new Content Source using the XML File Connector

    1. Using the Central Administration UI, on the Search Administration Page of the FAST Content SSA, click Content Sources, then New Content Source.
    2. Type a name for the content source, and in Content Source Type, select Custom Repository.
    3. In Type of Repository, select xmldoc.
    4. In Start Addresses, type the URLs of the folders that contain the XML files you want to index. The URL should be inserted in the following format:

    xmldoc://hostname/folder_path/#x=:doc:id;;urielm=url;;titleelm=title#

    The following section describes the different parts of the Start Address.

    Using the Start Address of the Content Source to configure the connector

    The Start Address specified for the Content Source must be of the following format. The XML File Connector will read this Start Address and use it when crawling the XML content.

    xmldoc://hostname/folder_path/#x=:doc:id;;urielm=url;;titleelm=title#

    xmldoc

    xmldoc is the protocol corresponding to the registry key we added when installing the connector.

    //hostname/folder_path/

    //hostname/folder_path/ is the full path to the folder containing the XML files to crawl.

    Example: //demo2010a/c$/enwiki

    #x=:doc:id;;urielm=url;;titleelm=title#

    #x=:doc:id;;urielm=url;;titleelm=title# is the special part of the Start Address that is used to provide configuration values to the connector:

    x=:doc:id

    Defines which elements in the XML file to use as document and identifier elements. This configuration parameter is mandatory.

    For example, say we have an XML file as follows:

    <feed>
      <document>
        <id>Some id</id>
        <title></title>
        <url>some url</url>
        <field1>Content for field1</field1>
        <field2>Content for field2</field2>
      </document>
      <document>
        ...
      </document>
    </feed>

    Here the value for the x configuration parameter would be x=:document:id

    urielm=url

    urielm=url defines which element in the XML file to use as the URL. This will end up as the URL of the document used by the FS4SP processing pipeline and will go into the "url" managed property. This configuration parameter can be left out. In this case, the default URL of the document will be as follows: xmldoc://id/[id value]

    titleelm=title

    titleelm=title defines which element in the XML file to use as the Title. This will end up as the Title of the document, and the value of this element will go into the title managed property. This configuration parameter can be left out. If the parameter is left out, the title of the document will be set to "notitle".
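
    To make the three settings concrete, the following is a minimal, illustrative C# sketch (not the connector's actual code) that applies x=:document:id, urielm=url and titleelm=title to the sample feed structure shown above, using LINQ to XML:

    using System;
    using System.Xml.Linq;

    class StartAddressMappingDemo
    {
        static void Main()
        {
            // Sample feed matching the structure shown earlier in this post.
            var feed = XElement.Parse(
                "<feed><document><id>Some id</id><title>Some title</title>" +
                "<url>some url</url><field1>Content for field1</field1></document></feed>");

            // Values taken from the Start Address: #x=:document:id;;urielm=url;;titleelm=title#
            string documentElement = "document";
            string idElement = "id";
            string urlElement = "url";      // urielm
            string titleElement = "title";  // titleelm

            foreach (XElement doc in feed.Elements(documentElement))
            {
                string id = (string)doc.Element(idElement);
                // Mirrors the connector's default URL form used when urielm is omitted.
                string url = (string)doc.Element(urlElement) ?? "xmldoc://id/" + id;
                // Mirrors the "notitle" default described above when no title is available.
                string title = (string)doc.Element(titleElement) ?? "notitle";

                Console.WriteLine(id + " | " + url + " | " + title);
            }
        }
    }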

    Automatic and dynamic generation of Crawled Properties from XML elements

    The XML File Connector uses advanced BCS techniques to automatically discover crawled properties from the content of the XML files.

    All elements in the XML document will be created as crawled properties. This provides the ability to dynamically crawl any XML file, without the need to pre-define the properties of the entities in the BCS Model file and re-deploy the model file for each change.

    This is defined in the BCS Model file on the XML Document entity. The TypeDescriptor element named DocumentProperties defines a list of dynamic property names and values. The property names in this list will automatically be discovered by the BCS framework and corresponding crawled properties will automatically be created for each property.

    The following snippet from the BCS Model file shows how this is configured:

    <TypeDescriptor Name="DocumentProperties" TypeName="XmlFileConnector.DocumentProperty[], XmlFileConnector, Version=1.0.0.0, Culture=neutral, PublicKeyToken=109e5afacbc0fbe2" IsCollection="true">
      <!-- child TypeDescriptors of the collection omitted in this post -->
    </TypeDescriptor>

    In addition to the ability to discover crawled properties automatically from the XML content, the XML File Connector also creates a default property with the name "XMLContent". This property contains the raw XML of the document being processed. This enables the use of the XML content in a custom pipeline extensibility stage for further processing.
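
    The DocumentProperty type referenced by that TypeDescriptor is essentially a name/value pair that the BCS framework turns into a crawled property. A minimal sketch of what such a type could look like (the actual XmlFileConnector class may differ) is:

    // Illustrative sketch of a dynamic-property carrier as used by the
    // DocumentProperties collection above; the real XmlFileConnector type may differ.
    public class DocumentProperty
    {
        // Name of the XML element; becomes the crawled property name.
        public string PropertyName { get; set; }

        // Content of the XML element; becomes the crawled property value.
        public object PropertyValue { get; set; }
    }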

    Example

    Say that we have the following XML file to index.

    The sample is an XML document describing the Wikipedia article "Nobel Charitable Trust": a title ("Wikipedia: Nobel Charitable Trust"), the URL http://en.wikipedia.org/wiki/Nobel_Charitable_Trust, a short description of the trust, and anchor entries such as "Michael Nobel Energy Award" and "References" pointing to the corresponding sections of the page.

    When running the connector for the first time, we see the crawled properties discovered from the XML elements appear in the Custom XML Connector Crawled Properties Category.

    Full Crawl vs. Incremental Crawl

    The BCS Search Connector Framework is implemented in such a way that it keeps track of all crawled content in the Crawl Log Database. For each search Content Source, a log of all document ids that have been crawled is stored. This log is used when running subsequent crawls of the content source, be it a full or an incremental crawl.

    When running an incremental crawl, the BCS framework compares the list of document ids it received from the connector against the list of ids stored in the crawl log database. If there are any document ids within the crawl log database that have not been received from the connector, the BCS framework assumes that these documents have been deleted, and will attempt to issue deletion operations to the search system. This can cause many inconsistencies, and will make it very difficult to keep the actual dataset and the BCS crawl log in sync.

    So, when running either a Full Crawl or an Incremental Crawl of the Content Source, the full dataset of the XML files must be available for traversal. If there are any items missing in subsequent crawls, the SharePoint crawler will consider those as subject to deletion, and go ahead and delete them from the search index.

    One possible workaround to tackle this limitation and avoid (re)generating the full dataset each time something minor changes, would be to split the XML content into files of different known update frequencies, where content that is known to have higher update rates is placed in separate input folders with separately configured Content Sources within the FAST Content SSA.

    Optimizations and considerations when crawling large XML files

    When the XML File Connector starts crawling content, it will load and parse the XML files it finds one at a time. For each XML file found in the input directory, the whole XML file is read into memory and cached for all subsequent operations by the crawler until all items found in the XML file have been submitted to the indexing subsystem. At that point the memory cache is cleared, and the next file is loaded and parsed, until all files have been processed.

    For the reason just described, it is recommended not to have large single XML files, but to split the content across multiple XML files, each consisting of a number of items that is reasonable and can be easily parsed and cached in memory.
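
    If a single feed file has grown too large to be parsed and cached comfortably, it can be pre-split into smaller files before crawling. The following is a minimal C# sketch under the assumption that the feed uses the <feed>/<document> structure shown earlier; the batch size and output file naming are arbitrary:

    using System.Xml;

    static class FeedSplitter
    {
        // Streams <document> elements out of a large feed file and writes them to
        // smaller <feed> files of 'batchSize' documents each, without loading the
        // whole input file into memory.
        // Example usage: FeedSplitter.Split(@"C:\enwiki\full.xml", @"C:\enwiki\parts", 10000);
        public static void Split(string inputPath, string outputFolder, int batchSize)
        {
            int fileIndex = 0, count = 0;
            XmlWriter writer = null;

            using (XmlReader reader = XmlReader.Create(inputPath))
            {
                while (reader.ReadToFollowing("document"))
                {
                    if (writer == null)
                    {
                        string outputPath = System.IO.Path.Combine(outputFolder, "feed_" + fileIndex++ + ".xml");
                        writer = XmlWriter.Create(outputPath);
                        writer.WriteStartElement("feed");
                    }

                    // Copy the whole <document> subtree to the current output file.
                    using (XmlReader sub = reader.ReadSubtree())
                    {
                        writer.WriteNode(sub, false);
                    }

                    if (++count == batchSize)
                    {
                        writer.WriteEndElement();
                        writer.Close();
                        writer = null;
                        count = 0;
                    }
                }
            }

            if (writer != null) { writer.WriteEndElement(); writer.Close(); }
        }
    }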

    Contact me at tomas.floyd to find out more about this Connector and other custom developed SharePoint and Office 365 Web Parts and Apps!!

    Duet Enterprise and NetWeaver – Features and business benefits of integrating SAP with SharePoint


    For years organisations have been scratching their heads trying to figure out how to provide collaboration capabilities on data held within SAP systems.

    Many have developed custom solutions and achieved mixed results. We have also seen the rise of Duet 1.x from Microsoft and SAP that provided 11 specific solutions that surfaced data residing in SAP via Microsoft Office. These were good solutions but limited due to their lack of extensibility.

    Hence the excitement over Duet Enterprise and the benefits that have been realised are exactly what everyone has been looking for. InfoSys Technologies have just released a case study with Microsoft that discusses their Duet Enterprise project and it is an interesting read. Particularly the benefits realised:

    1. Minimal Development Effort
    2. Higher Adoption
    3. Enhanced Productivity

    How can these benefits be real?

    Well, to start with, Duet Enterprise provides all the relevant plumbing to blend both SAP and SharePoint “Platforms”. SAP and Microsoft position Duet Enterprise as the “Foundation” layer between the two platforms.

    DE Architecture

    Architecturally speaking, the Duet Enterprise add-ons must be installed on the SAP NetWeaver 7.02 servers and the SharePoint 2010 farm. The SAP NetWeaver 7.02 servers work as the gateway to the SAP backend systems and can actually support any version of SAP system. Duet Enterprise is built on SharePoint 2010 and makes use of Business Connectivity Services, which enables full Create Read Update Delete (CRUD) operations on data residing in external systems. The options for user experience are the same as for any SharePoint 2010 solution: Mobile, Office or a Browser. There are no SAP or Duet Enterprise clients.

    Ok, but what does this do?

    This provides organisations with a jointly supported (Microsoft and SAP) technical approach and OOB tools to address the common challenges faced when integrating SAP systems, such as:

    Authentication: Using Claims-Based authentication to ensure SSO is available between SharePoint 2010 sites and SAP backend systems.

    Authorisation: Able to secure objects in SharePoint using SAP Roles, supporting concepts such as a Manager seeing more than an employee.

    Monitoring: Duet Enterprise SharePoint Health Rules are provided to proactively monitor your Duet Enterprise solutions. Also available is the Duet Enterprise Management Pack for System Center Operations Manager; a quick video here shows it in action.

    It is also worth taking a look at the Duet Enterprise Architecture for more information.

    These are just some of the behind the scenes aspects of Duet Enterprise that essentially provide a jump start in any SAP/SharePoint integration project. However, there is also another layer that exemplifies how data from SAP can be surfaced through SharePoint 2010 and Office 2010. (Office 2010 isn’t a pre-req but certainly a better experience!).

    Currently included are:

    • Duet Enterprise Workflow

    • Duet Enterprise Profile

    • Duet Enterprise Collaboration

    • Duet Enterprise Sites

    • Duet Enterprise Reporting

    All of these provide a way to jump start development of a solution that can be used OOB or easily extended to suit specific requirements. The result is that organisations are able to surface data typically locked away in backend SAP systems to a much wider audience, in an inexpensive way. I may blog in the future on what each of these is and how to extend them.

    Once deployed, IT departments have a “Foundation” in place to support future extensibility. Once users start seeing what is possible I fully expect the flood gates to open with requests for composite applications.

    What else do you get?

    1. Long-term product roadmap commitment from SAP and Microsoft
    2. Platform for innovation: broad ecosystem of ISV and service partners offering Business Pack Solutions
    3. Partner solutions/apps certified by SAP

    New “Filter My ListView” SharePoint Web Part and App now available for SP 2010 & 2013 On-premise and Office 365!!

    What is it?

    The “Filter My ListView” Web Part / App is a SharePoint Web Part that enables you to create a custom filter to find information in a SharePoint list or document library.


    Why do you need it?

    When working with SharePoint and with large lists or document libraries containing 100K+ items, users frequently find that there is no usable tool for filtering the data.

    SharePoint lets us create views, but their functionality doesn’t meet users’ requirements. The most common complaint is that a list view is static and users can’t modify it on the fly.

    On the other hand, the “Filter My List” web part can only filter data represented in the current view’s columns, and the user can’t apply multiple filters to the list or use options such as date ranges and different filter criteria.

    All this leads to the need for a custom solution that solves these limitations.

    Usage

    The “Filter My ListView” Web Part / App is a simple to use SharePoint list view filter. It enables you to create a custom filter form composed of all list fields (not only the fields contained in the current list view).

    Supported field types

    • Simple text (jQuery UI is used for autocomplete)

    • Text with options (enables selecting the filtering type)

    • Date

    • Date Range

    • Boolean

    • DropDown list representing the unique values of the field

    • User or Group

    • Taxonomy Term Picker

    • Multi-select CheckBoxList

    The “Filter My ListView” Web Part / App builds a filter form using different types of controls:

    • TextBox. “Contains” criteria filter
    • TextBox with autocomplete
    • TextBox with options. Allows user to choose filter criteria that can be one of these:
      • Equals
      • Not equals
      • Contains
      • Begins with
    • Date
    • Date Range
    • DropDownList
    • DropDownList with multiple selection
    • People picker
    • MetaData picker

    The relation between field types and supported filter types is represented in this matrix:
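
    As an illustration of the simplest combination in that matrix, a plain text “Contains” filter maps to a CAML query on the server side. The following is a minimal sketch using the SharePoint server object model; the field name and the lack of escaping of the search text are simplifications, and this is not the Web Part's actual implementation:

    using Microsoft.SharePoint;

    static class ContainsFilterDemo
    {
        // Returns the items of 'list' whose Title field contains 'searchText'.
        public static SPListItemCollection FilterByContains(SPList list, string searchText)
        {
            // Note: in real code, escape 'searchText' before embedding it in CAML.
            var query = new SPQuery
            {
                Query = "<Where><Contains>" +
                        "<FieldRef Name='Title' />" +
                        "<Value Type='Text'>" + searchText + "</Value>" +
                        "</Contains></Where>"
            };
            return list.GetItems(query);
        }
    }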

    Contact me now through my blog, https://sharepointsamurai.wordpress.com or at tomas.floyd@outlook.com for this and more SharePoint and Office 365 custom developed Web Parts and Apps

    Using jQuery Mobile and ASP.NET to Pass Data from one Page to another

    ASP.NET developers working on either Web Forms or ASP.NET MVC can integrate jQuery Mobile into their Web sites to create rich mobile Web apps. jQuery Mobile is a lightweight JavaScript framework for developing cross-platform mobile/device Web applications.

    In mobile application development, you as a developer must think about how to utilize the available space in the best possible manner. For this purpose the UI sometimes needs to be divided into separate pages. In such cases, you may need to transfer value(s) entered on one page to a second page for further processing.

    Consider a scenario where the end user is asked to select the Product Category on Page 1 and based upon that, Page 2 displays products under that category. To get this done, values must be passed from page 1 to page 2.

    In jQuery Mobile, $.mobile object provides the changePage() method which accepts the page url as a first parameter and the values to be transferred as a second parameter using JSON based data. In this article, we will see, how values are passed from one page to another in an ASP.NET application.

    Step 1: Open Visual Studio and create a new ASP.NET empty web application, name it as ‘ASPNET_Mobile_PassingValuesAcrossPages’. In this application, using the NuGet Package Manager, add the latest jQuery and jQuery Mobile libraries.

    Step 2: To the project, add a new class file, name it as ‘ModelClasses.cs’ and add the following classes in it:

    public class Category {     public int CategoryId { get; set; }     public string CategoryName { get; set; } }

    public class Product {     public int ProductId { get; set; }     public string ProductName { get; set; }     public int Price { get; set; }     public int CategoryId { get; set; } }

    public class CategoryDataStore : List<Category> {     public CategoryDataStore()     {         Add(new Category() { CategoryId = 1, CategoryName = "Food Items" });         Add(new Category() { CategoryId = 2, CategoryName = "Home Appliances" });         Add(new Category() { CategoryId = 3, CategoryName = "Electronics" });         Add(new Category() { CategoryId = 4, CategoryName = "Wear" });     } }

    public class CategoryDataSource {     public List<Category> GetCategories()     {         return new CategoryDataStore();     } }

    public class ProductDataStore : List<Product> {     public ProductDataStore()     {         Add(new Product() { ProductId = 111, ProductName = "Apples", Price = 300, CategoryId = 1 });         Add(new Product() { ProductId = 112, ProductName = "Date",   Price = 600, CategoryId = 1 });         Add(new Product() { ProductId = 113, ProductName = "Fridge", Price = 34000, CategoryId = 2 });         Add(new Product() { ProductId = 114, ProductName = "T.V.",   Price = 30000, CategoryId = 2 });         Add(new Product() { ProductId = 115, ProductName = "Laptop", Price = 72000, CategoryId = 3 });         Add(new Product() { ProductId = 116, ProductName = "Mobile", Price = 40000, CategoryId = 3 });         Add(new Product() { ProductId = 117, ProductName = "T-Shirt", Price = 800, CategoryId = 4 });         Add(new Product() { ProductId = 118, ProductName = "Jeans", Price = 700, CategoryId = 4 });     } }

    public class ProductDataSource {     public List<Product> GetProductsByCategoryId(int catId)     {         var Products = from p in new ProductDataStore()                        where p.CategoryId == catId                        select p;

            return Products.ToList();     } }

    The above code provides Category and Product entities with data stored in it. The Product is a child of the Category class.

    Step 3: In the project, add a WEB API controller class with the following code:

    public class CategoryProductController : ApiController {     CategoryDataSource objCatDs;     ProductDataSource objPrdDs;

        public CategoryProductController()     {         objCatDs = new CategoryDataSource();         objPrdDs = new ProductDataSource();     }

        public IEnumerable<Category> Get()     {         return objCatDs.GetCategories();     }

        public IEnumerable<Product> Get(int id)     {         return objPrdDs.GetProductsByCategoryId(id);     } }

    Step 4: To define routing for the WEB API class, add a new Global Application Class (Global.asax) in the project and add the following code in the Application_Start method:

    protected void Application_Start(object sender, EventArgs e) {     RouteTable.Routes.MapHttpRoute(     name: "DefaultApi",     routeTemplate: "api/{controller}/{id}",     defaults: new { id = System.Web.Http.RouteParameter.Optional }     ); }

    Step 5: Add two HTML pages, name them as Page_Category.html and Page_Products.html.

    Step 6: In both the HTML pages, add the jQuery and jQuery Mobile references:

    <link href="Content/jquery.mobile.structure-1.3.1.min.css" rel="stylesheet" /> <link href="Content/jquery.mobile.theme-1.3.1.min.css" rel="stylesheet" /> <script src="Scripts/jquery-2.0.2.min.js"></script> <script src="Scripts/jquery.mobile-1.3.1.min.js"></script>

    Add the following HTML in the Page_Category.html page:

    <div data-role="page" id="catpage">     <div data-role="header">         <h1>Select Category from List</h1>     </div>     <div data-role="content">         <div>             <select id="lstcat" data-native-menu="false">                 <option>List of Categories.....</option>             </select>         </div>         <br />         <br />         <input type="button" data-icon="search"            value="Get Products"  data-inline="true" id="btngetproducts" />     </div> </div>

    The above code defines a page using a <div> whose data-role attribute is set to page. This page contains a <select> for displaying the list of categories and an <input> button; on its click event, control will be transferred to the Page_Products.html page.

    In the Page_Category.html page, add the following script. The method loadlistview will make a call to the WEB API and retrieve categories. These categories will be added to the <select> with id lstcat. This method will be executed when the pageinit event fires. On the click event of the button, the Category Value and Name will be passed to Page_Products.html.

    <script type="text/javascript"> $(document).on("pageinit", "#catpage", function () {

    //Pass the data to the other Page $("#btngetproducts").on('click', function () {     var categoryId = $("#lstcat").val();     var categoryName = $("#lstcat").find(":selected").text();     $.mobile.changePage("Page_Products.html", { data: { "catid": categoryId, "catname": categoryName } }); });

    loadlistview();

    ///Function to load all categories function loadlistview() {         $.ajax({         type: "GET",         url: "/api/CategoryProduct",         contentType: "application/json; charset=utf-8",         dataType: "json"     }).done(function (data) {         //Convert JSON into Array         var array = $.map(data, function (i, idx) {             return [[i.CategoryId, i.CategoryName]];         });

            //Add each Category Name in the ListView         var res = "";         $.each(array, function (idx, cat) {             res += '<option value="' + cat[0] + '">' + cat[1] + '</option>';         });         $("#lstcat").append(res);         $("#lstcat").trigger("change");

        }).fail(function (err) {         alert("Error " + err.status);     }); } }); </script>

    Just look at this piece of code:

    $.mobile.changePage("Page_Products.html", { data: { "catid": categoryId, "catname": categoryName } });

    This code is responsible for passing the CategoryId and Name using JSON expression to the Page_Products.html.

    Step 7: Open Page_Products.html and add the following HTML markup:

    <div data-role="page" id="prodpage" data-add-back-btn="true"> <div data-role="header">         <h1>Products in Category</h1>         <span id="catName"></span>     </div>          <table style="border:double" data-role="table" id="tblprd">         <thead>             <tr>                 <td style='width:100px'>                     ProductId                 </td>                 <td style='width:100px'>                     ProductName                 </td>                 <td style='width:100px'>                     Price                 </td>             </tr>         </thead>         <tbody>         </tbody>     </table> </div>

    Here the <div> with id prodpage is set with the attribute data-role=page. The attribute data-add-back-btn=true adds a BACK button to the page. On click of this button, we can move back to Page_Category.html. This is the default style in jQuery Mobile.

    Now the important thing is how to read the values sent from Page_Category.html in the URL on the Page_Products.html page. Unlike ASP.NET, in jQuery there is no simple way to read the values passed from one page to another. To read these values we will create a readUrlHelper helper method. This method reads the URL and, using JavaScript string functions, reads the Key/Value pairs passed from Page_Category.html. On Page_Products.html, add this method using <script> inside the page created using the <div> with id prodpage.

    //This helper function will read the URL as a
    //string and provide values for parameters
    //in the URL
    function readUrlHelper(pageurl, queryParamName) {
        var queryParamValue = "";

        //Get the total URL length
        var stringLength = pageurl.length;

        //Get the index of ? in the URL
        var indexOfQM = pageurl.indexOf("?");

        //Get the length of the string after ?
        var stringAfterQM = stringLength - indexOfQM;

        //Get the remaining string after ?
        var strBeforeQM = pageurl.substr(indexOfQM + 1, stringAfterQM);

        //Split the remaining string on the & sign
        var data = strBeforeQM.split("&");

        //Iterate through the array of strings after the split
        $.each(data, function (idx, val) {
            //Split the string on the = sign
            var queryExpression = val.split("=");

            if (queryExpression[0] == queryParamName) {
                queryParamValue = queryExpression[1];
                //If the query string data was concatenated with '+',
                //replace the '+' by a blank space ' '
                queryParamValue = queryParamValue.replace('+', ' ');
            }
        });

        return queryParamValue;
    }

    To retrieve the Products based upon the CategoryId, add a new method loadproducts inside the prodpage. This method makes a call to the WEB API service and passes the CategoryId. It also calls the generatetable helper method to generate an HTML table that displays the products, based upon the JSON data received by the loadproducts method:

    //Method to make call to WEB API and retrieve //Products based upon the CategoryId. function loadproducts(id) {

        $.ajax({         type: "GET",         url: "/api/CategoryProduct/" + id,         contentType: "application/json; charset=utf-8",         dataType: "json"     }).done(function (data) {         generatetable(data);     }).fail(function (err) { }); } //Method to generate HTML table rows from the JSON data function generatetable(data) {

        //Convert JSON into Array     var array = $.map(data, function (i, idx) {         return [[i.ProductId, i.ProductName, i.Price]];     });     var tblbody = $("#tblprd > tbody");

        var tblhtml = "";     //Generate the Table for each Row     for (var i = 0; i < array.length; i++) {         tblhtml = tblhtml + "<tr>";         for (var j = 0; j < array[i].length; j++) {         tblhtml = tblhtml + "<td style='width:100px'>" + array[i][j] + "</td>";        }         tblhtml = tblhtml + "</tr>";     }

        tblbody.append(tblhtml);     $("#tblprd").table("refresh");

    }

    Now inside the prodpage subscribe to the pageshow event which will make a call to the readUrlHelper method and loadproducts method

    var catId; $("#prodpage").on("pageshow", function (e) {     var pgurl = $("#prodpage").attr("data-url");     //Read the value for catid passed from the Page_Category.html     catId = readUrlHelper(pgurl, "catid");     //Read the value for catname passed from the Page_Category.html    var catName = readUrlHelper(pgurl, "catname");

       $("#catName").text(catName);

       loadproducts(catId);

    });

    The above code reads the CategoryId and CategoryName passed by Page_Category.html, and based upon them the Products are displayed on the page.

    Make Page_Category.html the startup page and run the application in a browser (IE9, Firefox or Chrome) or in the Opera Mobile Emulator.

    On Page_Category.html, select a Category as below:

    (Screenshot: the category list on Page_Category.html)

    When you click on Get Products, you will be navigated to Page_Products.html with the following URL:

    http://localhost:5506/Page_Products.html?catid=1&catname=Food+Items

    The URL contains key and value for CategoryId (catid=1) and CategoryName (catname=Food+Items)

    In the Page_Products.html, the readUrlHelper method will read CategoryId and CategoryName and based upon this data, Products will be displayed as below:

    (Screenshot: the filtered product list on Page_Products.html)

    Conclusion: In a Mobile application it is very important for the developer to manage the UI and take care of data communication across these pages. If the data passed is complex (more than one key/value pair) then the URL must be read very carefully. In this article, we saw how we can use jQuery Mobile and ASP.NET to transfer values from one page to another.

    Create a new Search Service Application in SharePoint 2013 using PowerShell

    The search architecture in SharePoint 2013 has changed quite a bit when compared to SharePoint 2010. In fact the Search Service in SharePoint 2013 is completely overhauled. It is a combination of FAST Search and SharePoint Search components.


    In SharePoint 2013 the query and crawl topologies are merged into a single topology, simply called “Search topology”. Provisioning of the search service application creates 4 databases:

    • SP2013_Enterprise_Search – This is a search administration database. It contains configuration and topology information
    • SP2013_Enterprise_Search_AnalyticsReportingStore – This database stores the result of usage analysis
    • SP2013_Enterprise_Search_CrawlStore – The crawl database contains detailed tracking and historical information about crawled items
    • SP2013_Enterprise_Search_LinksStore – Stores the information extracted by the content processing component and also stores click-through information

    # Create a new Search Service Application in SharePoint 2013

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Settings
    $IndexLocation = "C:\Data\Search15Index" # Location must be empty, will be deleted during the process!
    $SearchAppPoolName = "Search App Pool"
    $SearchAppPoolAccountName = "Contoso\administrator"
    $SearchServerName = (Get-ChildItem env:computername).value
    $SearchServiceName = "Search15"
    $SearchServiceProxyName = "Search15 Proxy"
    $DatabaseName = "Search15_ADminDB"

    Write-Host -ForegroundColor Yellow "Checking if Search Application Pool exists"
    $SPAppPool = Get-SPServiceApplicationPool -Identity $SearchAppPoolName -ErrorAction SilentlyContinue

    if (!$SPAppPool)
    {
        Write-Host -ForegroundColor Green "Creating Search Application Pool"
        $spAppPool = New-SPServiceApplicationPool -Name $SearchAppPoolName -Account $SearchAppPoolAccountName -Verbose
    }

    # Start Search Service instances
    Write-Host "Start Search Service instances...."
    Start-SPEnterpriseSearchServiceInstance $SearchServerName -ErrorAction SilentlyContinue
    Start-SPEnterpriseSearchQueryAndSiteSettingsServiceInstance $SearchServerName -ErrorAction SilentlyContinue

    Write-Host -ForegroundColor Yellow "Checking if Search Service Application exists"
    $ServiceApplication = Get-SPEnterpriseSearchServiceApplication -Identity $SearchServiceName -ErrorAction SilentlyContinue

    if (!$ServiceApplication)
    {
        Write-Host -ForegroundColor Green "Creating Search Service Application"
        $ServiceApplication = New-SPEnterpriseSearchServiceApplication -Partitioned -Name $SearchServiceName -ApplicationPool $spAppPool.Name -DatabaseName $DatabaseName
    }

    Write-Host -ForegroundColor Yellow "Checking if Search Service Application Proxy exists"
    $Proxy = Get-SPEnterpriseSearchServiceApplicationProxy -Identity $SearchServiceProxyName -ErrorAction SilentlyContinue

    if (!$Proxy)
    {
        Write-Host -ForegroundColor Green "Creating Search Service Application Proxy"
        New-SPEnterpriseSearchServiceApplicationProxy -Partitioned -Name $SearchServiceProxyName -SearchApplication $ServiceApplication
    }

    $ServiceApplication.ActiveTopology
    Write-Host $ServiceApplication.ActiveTopology

    # Clone the default topology (which is empty), create a new one and then activate it
    Write-Host "Configuring Search Component Topology...."
    $clone = $ServiceApplication.ActiveTopology.Clone()
    $SSI = Get-SPEnterpriseSearchServiceInstance -local
    New-SPEnterpriseSearchAdminComponent -SearchTopology $clone -SearchServiceInstance $SSI
    New-SPEnterpriseSearchContentProcessingComponent -SearchTopology $clone -SearchServiceInstance $SSI
    New-SPEnterpriseSearchAnalyticsProcessingComponent -SearchTopology $clone -SearchServiceInstance $SSI
    New-SPEnterpriseSearchCrawlComponent -SearchTopology $clone -SearchServiceInstance $SSI

    Remove-Item -Recurse -Force -LiteralPath $IndexLocation -ErrorAction SilentlyContinue
    mkdir -Path $IndexLocation -Force

    New-SPEnterpriseSearchIndexComponent -SearchTopology $clone -SearchServiceInstance $SSI -RootDirectory $IndexLocation
    New-SPEnterpriseSearchQueryProcessingComponent -SearchTopology $clone -SearchServiceInstance $SSI
    $clone.Activate()

    Write-Host "Your search service application $SearchServiceName is now ready"

    Update

    To configure failover server(s) for Search DBs, use the following PowerShell:

    Thanks to Marcel Jeanneau for sharing this!

    # Admin Database
    $ssa = Get-SPEnterpriseSearchServiceApplication "Search Service Application"
    Set-SPEnterpriseSearchServiceApplication -Identity $ssa -FailoverDatabaseServer <failoverserveralias\instance>

    # Crawl Database
    $CrawlDatabase0 = ([array]($ssa | Get-SPEnterpriseSearchCrawlDatabase))[0]
    Set-SPEnterpriseSearchCrawlDatabase -Identity $CrawlDatabase0 -SearchApplication $ssa -FailoverDatabaseServer <failoverserveralias\instance>

    # Links Database
    $LinksDatabase0 = ([array]($ssa | Get-SPEnterpriseSearchLinksDatabase))[0]
    Set-SPEnterpriseSearchLinksDatabase -Identity $LinksDatabase0 -SearchApplication $ssa -FailoverDatabaseServer <failoverserveralias\instance>

    # Analytics Database
    $AnalyticsDB = Get-SPDatabase -Identity <analytics reporting database name or GUID>
    $AnalyticsDB.AddFailOverInstance("failover alias\instance")
    $AnalyticsDB.Update()

    You can always change the default content access account using the following command:

    $password = Read-Host -AsSecureString
    Set-SPEnterpriseSearchService -id "SSA name" -DefaultContentAccessAccountName Contoso\account -DefaultContentAccessAccountPassword $password

    Look out for my PowerShell Web Part and Google Analytics Web Part and App that are under development and will be available soon for purchase!!


    How to successfully make the 2 worlds of SAP and SharePoint meet

    In essence, a ‘Duet Enterprise custom scenario’ development project is not different from other application development projects.

    The customer – either the real end user, or product management for packaged software – has an idea about what to achieve, plus functional and non-functional (quality) requirements to underline that idea.

    Often this idea is still vague, and the requirement set is far from complete and in some aspects even contradictory.

    The waterfall approach of starting application development anyway (specify, build, test) typically ends up with wrong or failed end results: the application does not live up to the real customer needs and expectations, and the budget is largely overrun as a consequence of responding to requirement changes.

    The agile movement came up to improve on this bad practice. Central to it is the structural involvement of the user (representative) during the full course of the development project.

    Continuously, the user is in the lead on the specifics and the behaviour of the application functionality to be delivered.

    In Scrum, this responsibility is formalized within the product owner role.

    Application mockup
    A proven approach to enable the product owner to validate, correct, and ultimately confirm the application functionality is to make it experienceable, ‘touchable’.

    Provide the product owner with a realistic impression of the application behaviour + feature(s) that the project will build: a clickable prototype / mockup.

    Based on multiple positive experiences (also within non-Duet Enterprise projects, e.g. BI Dashboards), we strongly believe in the added value and power of application mockups. We regard the role of the UX / UID expert as crucial for customer involvement, and thus for delivering the correct application functionality.

    Meeting of 2 worlds

    Yet, a Duet Enterprise project is also very much different from ‘normal’ projects.
    The cause lies in the meeting of 2 very different environments: IT technology, platform concepts, and human nature.

    The traditional SAP environment is transaction oriented, with a focus on structured business process execution.

    The Microsoft environment, and in particular SharePoint, is far more loose. Ad-hoc processes are the norm, and the individual user decides how to perform a specific (business) action. When these 2 worlds + ways of thinking come together in a Duet Enterprise project, the first outcomes are often dramatic.

    SAP and SharePoint architects,
    business analysts and developers do not understand each other’s concepts.
    Both ‘parties’ disagree on how and where (that is, on the SAP or on the SharePoint side) custom development should be done.

    The personal preference on how to achieve the SAP + SharePoint integration is influenced by one’s own technology frames and familiar concepts.

    This carries the risk that the user is forgotten: the user’s interests must be central, delivering an optimal user experience.

    Align through a blueprint of the Duet Enterprise scenarios

    To break through this, we utilize in our Duet Enterprise development process a blueprint step with an exclusive focus on the Duet Enterprise specific scenarios.

    This blueprint is derived and agreed by a mixed group of SAP and SharePoint representatives. The goal of this step, and its end result, is to come to a mutual understanding of the solution direction for each of the Duet Enterprise scenarios.

    Self-imposed constraints are to stick as close as possible to the standards of both the SAP business package and the SharePoint platform. Where feasible, we do allow custom development.

    Here we apply Duet Enterprise integration patterns that we have identified and defined: Consumer-oriented, Provider-oriented, and XML-Funneling pattern.

    Of course, the discussion and preferences about where to make custom adjustments still manifest within the Duet Enterprise blueprinting step.

    However, as it now has the full focus of all involved, mutual agreement is reached earlier and more effectively. This is the so-called ‘workshop effect’.

    Also, in this focused approach, comparable application situations are easier to recognize. And where so identified, the earlier agreed solution approach can be reused.

    The starting point for the blueprint derivation is the User Interface Design as validated and confirmed by the product owner.
    In the blueprint we derive and define, per identified Duet Enterprise scenario, the following pieces:

    • Global description of the scenario: a screenshot of the mockup screen and a textual explanation. The purpose is to facilitate, on a high level, a common understanding of the functionality in the scenario. There is no intent to compete with or replace use cases.

    • Derivation, explanation and justification of the solution direction. At a minimum, an explanation of the proposed solution direction; but whenever applicable, also the rationale for why an alternative solution direction is dropped.

    • SAP building blocks: standard RFCs, BAPIs; identification of needed custom SAP building blocks.
    Blueprint validation by Design Authority / Authorities

    A justified mantra nowadays within SAP and Microsoft IT departments is to avoid unnecessary or otherwise avoidable custom development, and instead stick close to the standard functionality delivered in both landscapes.

    It is also evident that fully excluding custom development is not feasible when you are developing company-specific custom scenarios.

    We facilitate consistency and continuity in the Duet Enterprise solution directions by conforming the blueprint to Duet Enterprise Rules, Conventions and Guidelines.

    The Design Authorities of both the SAP and Microsoft IT departments validate the blueprint against these agreements. Deviation can be allowed, but requires the formal approval of the respective Design Authority (SAP, Microsoft, and in some cases both).

    SAP + SharePoint realizations based on the blueprint

    With the blueprint accepted and delivered, from here on the SAP and SharePoint teams in the project can start with their own development activities for the Duet Enterprise scenarios.

    Since representatives of the teams were involved in deriving the blueprint, the concrete software design + development can be done relatively independent of the other party.

    Of course, the data contract details of each SAP / Duet Enterprise interface are input for the SharePoint consumption.

    Regularly sharing the SAP design with the SharePoint developers achieves this.

    Contrary to popular beliefs – The SAME skills are used to develop SharePoint On-premise and SharePoint Online Apps

    Taking an on-premise application and deploying it on a Windows Azure Virtual Machine should be straightforward. The majority of modifications required include changing configurations in order to accommodate differences in the Virtual Machine’s configuration.

    Going to the cloud on a single Virtual Machine is like crossing the ocean on a row boat. You’ll probably get there, but I can’t guarantee anything.


    Once you’ve deployed your application to the cloud, things get interesting because you have opportunities that allow you to solidify your application.

    If you are building your application on the Cloud instead of building your application for the Cloud, you’re doing it wrong!

    Taking advantage of services offered on Windows Azure allows you to concentrate on creating value for your business. The Windows Azure teams take care of lots of boring stuff like disaster recovery. To me, building applications for the cloud is the equivalent of going from a row boat to a fleet of navy destroyers. (Alright, I may be a little overconfident, but you get the picture.)


    Preparing applications to go web scale isn’t a trivial task. Each application is unique and requires different services. You will need to go over your objectives and build accordingly.

    Although the planning phase isn’t trivial, augmenting your applications with cloud services isn’t as complicated as some might think. The Windows Azure teams provide an amazing amount of support material like hands-on labs, tutorials and a rich collection of documentation.

    Architectural Considerations

    Throughout the planning phase of an application on the cloud there are a few architectural strategies that require some extra attention.

    Cloud Design Patterns

    While designing or augmenting cloud based applications, the following patterns should be considered. While this list isn’t complete, it should be used as a starting point.

    For more patterns, take a look at the resources listed at the bottom of this post.

    • Cache-aside Pattern – This is a common technique that we can use to improve the performance and scalability of a cloud solution by temporarily copying frequently accessed data to fast storage located close to the application (a minimal sketch follows this list).
    • Queue-based Load Leveling Pattern – Cloud solutions are submitted to very unpredictable loads and require protection against their own success. By placing queues between clients and the workers who execute tasks, you are protecting yourself against spikes.
    • Competing Consumers Pattern – Enable multiple Cloud Service instances to retrieve messages from the same source.
    • Compute Resource Consolidation Pattern – Consolidate multiple tasks or operations into a single computational unit (Roles, Virtual Machines, Web Sites).
    • Eventual Consistency – Cloud solutions use data that’s dispersed and duplicated across data stores; managing and maintaining data consistency can become a major bottleneck.
    • Leader Election Pattern – A great way to coordinate actions being performed by a group of Cloud Service instances is to elect a leader that can act as the coordinator. This is extremely useful for maintenance tasks and singleton tasks that need fallbacks.
    • Materialized View Pattern – This has got to be one of my favorite cloud patterns. The solutions’ data may not be formatted in a way that favors our query requirements. In order to optimize our queries, it may be desirable to generate pre-populated views whose shapes correspond with our requirements.
    • Pipes and Filters Pattern – We should strive to decompose complex tasks into a series of discrete elements that can be reused.
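
    For the Cache-aside Pattern mentioned above, here is a minimal C# sketch; the ConcurrentDictionary stands in for a real distributed cache (for example a Windows Azure cache service), and the load/save delegates are placeholders for your own data store access:

    using System;
    using System.Collections.Concurrent;

    // Minimal cache-aside sketch: check the cache first, fall back to the data
    // store on a miss, then populate the cache for subsequent reads.
    class CacheAsideRepository<TKey, TValue>
    {
        private readonly ConcurrentDictionary<TKey, TValue> _cache = new ConcurrentDictionary<TKey, TValue>();
        private readonly Func<TKey, TValue> _loadFromStore; // e.g. a database query

        public CacheAsideRepository(Func<TKey, TValue> loadFromStore)
        {
            _loadFromStore = loadFromStore;
        }

        public TValue Get(TKey key)
        {
            TValue value;
            if (_cache.TryGetValue(key, out value))
                return value;                      // cache hit

            value = _loadFromStore(key);           // cache miss: read from the store
            _cache.TryAdd(key, value);             // populate the cache for next time
            return value;
        }

        // Update the store first, then drop the cached copy so the next read refreshes it.
        public void Update(TKey key, TValue value, Action<TKey, TValue> saveToStore)
        {
            saveToStore(key, value);
            TValue removed;
            _cache.TryRemove(key, out removed);
        }
    }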

    Succeeding on the cloud is all about architecture, using the right service for the right reasons. This can be a challenge because cloud platforms are continuously evolving. Windows Azure is currently (Jan 2014) on a 3 week release cycle and it can be quite a challenge for all of us to keep up. Fortunately there are blogs, podcasts and online courses that help us along the way.

    Learn More

    Troubleshooting WCF Services during Runtime with WMI

    One of the coolest features of WCF when it comes to troubleshooting is the WCF message logging and tracing feature. With message logging you can write all the messages your WCF service receives and returns to a log file. With tracing, you can log any trace message emitted by the WCF infrastructure, as well as traces emitted from your service code.

    The issue with message logs and tracing is that you usually turn them off in production, or at the very least, reduce the amount of data they output, mainly to conserve disk space and reduce the latency involved in writing the logs to the disk. Usually this isn’t a problem, until you find yourself in the need to turn them back on, for example when you detect an issue with your service, and you need the log files to track the origin of the problem.

    Unfortunately, changing the configuration of your service requires resetting it, which might result in loss of data, your service becoming unavailable for a couple of seconds, and possibly the problem resolving on its own, if the reason for the strange behavior was a faulty state of the service.

    There is however a way to change the logging configuration of the service during runtime, without needing to restart the service, with the help of the Windows Management Instrumentation (WMI) environment.

    In short, WMI provides you with a way to view information about running services in your network. You can view a service’s process information, service information, endpoint configuration, and even change some of the service’s configuration in runtime, without needing to restart the service.

    Little has been written about the WMI support in WCF. The basics are documented on MSDN, and contain instructions on what you need to set in your configuration to make the WMI provider available. The MSDN article also provides a link to download the WMI Administrative Tools which you can use to manage services with WMI. However that tool requires some work on your end before getting you to the configuration you need to change, in addition to it requiring you to run IE as an administrator with backwards compatibility set to IE 9, which makes the entire process a bit tedious. Instead, I found it easier to use PowerShell to write six lines of script which do the job.
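
    If you prefer C# over PowerShell, the same WMI objects can be reached with System.Management. The following is a minimal sketch only; the service name, machine name and the new setting values are placeholders, and the WMI namespace, class and property names are the ones used in the PowerShell steps below:

    using System;
    using System.Management;   // requires a reference to System.Management.dll

    class WcfWmiSketch
    {
        static void Main()
        {
            var scope = new ManagementScope(@"\\localhost\root\servicemodel");

            // Find the WCF service instance by name (same filter as the PowerShell script below).
            var serviceQuery = new ObjectQuery("SELECT * FROM Service WHERE Name = 'Service1'");
            using (var serviceSearcher = new ManagementObjectSearcher(scope, serviceQuery))
            {
                foreach (ManagementObject service in serviceSearcher.Get())
                {
                    int processId = Convert.ToInt32(service["ProcessId"]);

                    // Locate the matching AppDomainInfo object and change the logging settings at runtime.
                    var appDomainQuery = new ObjectQuery("SELECT * FROM AppDomainInfo WHERE ProcessId = " + processId);
                    using (var appDomainSearcher = new ManagementObjectSearcher(scope, appDomainQuery))
                    {
                        foreach (ManagementObject appDomain in appDomainSearcher.Get())
                        {
                            appDomain["LogMessagesAtTransportLevel"] = true;
                            appDomain["TraceLevel"] = "Warning, ActivityTracing";
                            appDomain.Put();   // persist the change without restarting the service
                        }
                    }
                }
            }
        }
    }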

    The following steps demonstrate how to create a WCF service with minimal message logging and tracing configuration, start it, test it, and then use PowerShell with WMI to change the logging configuration in runtime.

    1. Open Visual Studio 2012 and create a new project using the WCF Service Application template.

    After the project is created, the service code is shown. Notice that in the GetDataUsingDataContract method, an exception is thrown when the composite parameter is null.

    2. In Solution Explorer, right-click the Web.config file and then click Edit WCF Configuration.

    3. In the Service Configuration Editor window, click Diagnostics, and enable the WMI Provider, MessageLogging and Tracing.

    By default, enabling message logging will enable logging of all the messages from the transport layer and any malformed message. Enabling tracing will log all activities and any trace message with severity Warning and up (Warning, Error, and Critical). Although those settings are useful during development, in production we probably want to change them so we get smaller log files with only the most relevant information.

    4. Under MessageLogging, click the link next to Log Level, uncheck Transport Messages, and then click OK.

    The above setting will only log malformed messages, which are messages that do not fit any of the service’s endpoints, and are therefore rejected by the service.

    5. Under Tracing, click the link next to Trace Level, uncheck Activity Tracing, and then click OK.

    The above setting will prevent every operation from being logged, except those that output a trace message of Warning severity and up. You can read more about the different types of trace messages on MSDN. http://msdn.microsoft.com/en-us/library/ms733025(v=vs.110).aspx

    By default, message logging only logs the headers of a message. To also log the body of a message, we need to change the message logging configuration. Unfortunately, we cannot change that setting in runtime with WMI, so we will set it now.

    6. In the configuration tree, expand Diagnostics, click Message Logging, and set the LogEntireMessage property to True.

    7. Press Ctrl+S to save the changes, close the configuration editor window, and return to Visual Studio.

    The trace listener we are using buffers the output and will only write to the log files when the buffer is full. Since this is a demonstration, we would like to see the output immediately, and therefore we need to change this behavior.

    8. In Solution Explorer, open the Web.config file, locate the <system.diagnostics> section, and place the following xml before the </system.diagnostics> closing tag: <trace autoflush="true"/>

    Now let us run the service, test it, and check the created log files.

    9. In Solution Explorer, click Service1.svc, and then press Ctrl+F5 to start the WCF Test Client without debugging.

    10. In the WCF Test Client window, double-click the GetDataUsingDataContract node, and then click Invoke. Repeat this step 2-3 times.

    Note: If a Security Warning dialog appears, click OK.

    11. In the Request area, open the drop-down next to the composite parameter, and set it to (null).

    12. Click Invoke and wait for the exception to show. Notice that the exception is general (“The server was unable to process the request due to an internal error.”) and does not provide any meaningful information about the true exception. Click Close to close the dialog.

    Let us now check the content of the two log files. We should be able to see the traced exception, but the message wouldn’t have been logged.

    13. Keep the WCF Test Client tool running and return to Visual Studio. Right-click the project and then click Open Folder in File Explorer.

    14. In the File Explorer window, double-click the web_tracelog.svclog file. The file will open in the Service Trace Viewer tool.

    15. In the Service Trace Viewer tool, click the 000000000000 activity in red, and then click the row starting with “Handling an exception”. In the Formatted tab, scroll down to view the exception information.

    As you can see in the above screenshot, the trace file contains the entire exception information, including the message, and the stack trace.

    Note: The configuration evaluation warning message which appears first on the list means that the service we are hosting does not have any specific configuration, and therefore is using a set of default endpoints. The two exceptions that follow are ones thrown by WCF after receiving two requests that did not match any endpoint. Those requests originated from the WCF Test Client tool when it attempted to locate the service’s metadata.

    Next, we want to verify no message was logged for the above argument exception.

    16. Return to the File Explorer window, select the web_messages.svclog file, and drag it to the Service Trace Viewer tool. Drop the file anywhere in the tool.

    There are now two new rows for the malformed messages sent by the WCF Test Client metadata fetching. There is no logged message for the faulted service operation.

    Imagine this is the state you now have in your production environment. You have a trace file that shows the service is experiencing problems, but you only see the exception. To properly debug such issues we need more information about the request itself, and any other information which might have been traced while processing the request.

    To get all that information, we need to turn on activity tracing and include messages from the transport level in our logs.

    If we open the Web.config file and change it manually, this would cause the Web application to restart, as discussed before. So instead, we will use WMI to change the configuration settings in runtime.

    17. Keep the Service Trace Viewer tool open, and open a PowerShell window as an Administrator.

    18. To get the WMI object for the service, type the following commands in the PowerShell window and press Enter:

    $wmiService = Get-WmiObject Service -filter "Name='Service1'" -Namespace "root\servicemodel" -ComputerName localhost
    $processId = $wmiService.ProcessId
    $wmiAppDomain = Get-WmiObject AppDomainInfo -filter "ProcessId=$processId" -Namespace "root\servicemodel" -ComputerName localhost

    Note: The above script assumes the name of the service is ‘Service1’. If you have changed the name of the service class, change the script and run it again. If you want to change the configuration of a remote service, replace the localhost value in the ComputerName parameter with your server name.

    19. To turn on transport layer message logging, type the following command and press Enter: $wmiAppDomain.LogMessagesAtTransportLevel = $true

    20. To turn on activity tracing, type the following command and press Enter: $wmiAppDomain.TraceLevel = "Warning, ActivityTracing"

    21. Lastly, to save the changes you made to the service configuration, type the following command and press Enter: $wmiAppDomain.Put()

    22. Switch back to the WCF Test Client. In the Request area, open the drop-down next to the composite parameter, and set it to a new CompositeType object. Click Invoke 2-3 times to generate several successful calls to the service.

    23. In the Request area, open the drop-down next to the composite parameter, and set it to (null).

    24. Click Invoke and wait for the exception to show. Click Close to close the dialog.

    25. Switch back to the Service Trace Viewer tool and press F5 to refresh the activities list.

    You will notice that now there is a separate set of logs for each request handled by the service. You can read more on how to use the Service Trace Viewer tool to view traces and troubleshoot WCF services on MSDN. http://msdn.microsoft.com/en-us/library/aa751795(v=vs.110).aspx

    26. From the activity list, select the last row in red that starts with “Process action”.

    You will notice that now you can see the request message, the exception thrown in the service operation, and the response message, all in the same place. In addition, the set of traces is shown for each activity separately, making it easy to identify a specific request and its related traces.

    27. On the right pane, select the first “Message Log Trace” row, click the Message tab, and observe the body of the message.

    Now that we have the logged messages, we can select the request message and try to figure out the cause of the exception. As you can see, the composite parameter is empty (nil).

    If this was a production environment, you would probably want to restore the message logging and tracing to its original settings at this point. To do this, return to the PowerShell window, and re-run the command from before with their previous values:

    $wmiAppDomain.LogMessagesAtTransportLevel = $false

    $wmiAppDomain.TraceLevel = "Warning"

    $wmiAppDomain.Put()

    Before we conclude, now that your service is manageable through WMI, you can use other commands to get information about the service and its components. For example, the following command will return the service endpoints’ information: Get-WmiObject Endpoint -filter "ProcessId=$processId" -Namespace "root\servicemodel" -ComputerName localhost

    My Subscriptions Alert – New SharePoint 2013 Web Part available

    This web part shows the currently logged-in user which lists or libraries the user has subscribed to.

    It shows a gold bell icon beside the list name, which means you have subscribed to this list.

    If there is a silver bell beside the list name, it means you have not subscribed to this list. To subscribe, you can click the list name and the “New Alert” SharePoint OOB modal dialog will pop up.

    In this dialog you will also have many options for when to receive alerts and on exactly which changes.

    In the Tool Part of the Web Part, the user can select which of the lists on the current site that the user has permission to see are displayed in the web part for subscribing.
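
    For context, restricting the choices to lists the current user can actually read might look roughly like the following server object model sketch (illustrative only, not the Web Part's actual code):

    using System.Collections.Generic;
    using Microsoft.SharePoint;

    static class SubscribableListPicker
    {
        // Returns the lists in 'web' that the current user is allowed to view,
        // so only those are offered in the tool part for subscription.
        public static List<SPList> GetVisibleLists(SPWeb web)
        {
            var visible = new List<SPList>();
            foreach (SPList list in web.Lists)
            {
                if (!list.Hidden && list.DoesUserHavePermissions(SPBasePermissions.ViewListItems))
                {
                    visible.Add(list);
                }
            }
            return visible;
        }
    }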

    Below is a screenshot:

    Contact me at tomas.floyd@outlook.com for more information on this and other custom developed SharePoint Web Parts and Apps

    SharePoint Samurai