Category Archives: Best Practices and Patterns

How To : Implement Business Data Connectivity in SharePoint 2013

Business Data Connectivity

Business Connectivity Services is a centralized infrastructure in SharePoint 2013 and Office 2013 that supports integrated data solutions. With Business Connectivity Services, you can use SharePoint 2013 and Office 2013 clients as interfaces into data that doesn’t live in SharePoint 2013 itself. For example, this external data may be in a database and it is accessed by using the out-of-the-box Business Connectivity Services connector for that database.


Business Connectivity Services can also connect to data that is available through a web service, data that is published as an OData source, or many other types of external data. Business Connectivity Services does this through out-of-the-box or custom connectors.

External Content Types in BCS

External content types are the core of BCS. They enable you to manage and reuse the metadata and behaviors of a business entity, such as Customer or Order, from a central location. They enable users to interact with that external data and process it in a more meaningful way.

For more information about using external content types in BCS, see External content types in SharePoint 2013.

How to Connect With SQL External Data Source

Open SharePoint Designer 2013 and click the Open Site icon:

Enter the URL of the site you want to open:

Enter your site credentials here:

Now create a new external content type. Here you can change the name of the content type and create the connection to the external data source:

Click the hyperlink text “Click here to discover the external data source operations”, and this window opens:

Click the “Add Connection” button to create a new connection. Here you can choose between the .NET Type, SQL Server, and WCF Service options.

Here we selected SQL Server, so next we provide the server credentials:

Now, we can see all the tables and views from the database.

In this screen, we have the options for creating different types of operations against the database:

Click the Next button:

Parameter configuration:

Options for filter parameter configuration:

Next, we add a new external list. Click “External List”:

Select the site and click the OK button:

Enter the list name and click the OK button:

After that, refresh the SharePoint site. The external list now appears; click it:

Here we get the error message “Access Denied by Business Data Connectivity.”

Solution for this Error

In SharePoint Central Administration, click Manage service applications:

Click Business Data Connectivity Service:

Set the permissions for this list:

Click OK after setting the permissions:

After that, refresh the site and hope this will work… but again, there is a problem, this time the error message Login failed for user “NT AUTHORITY\ANONYMOUS LOGON”.

Solution for this Error

We need to edit the connection properties and set the Authentication mode to ‘BDC Identity’.

Then follow these steps.

Open PowerShell and type the following lines:

$bdc = Get-SPServiceApplication |
    where {$_ -match "Business Data Connectivity Service"}
$bdc.RevertToSelfAllowed = $true
$bdc.Update()

Now it’s working fine.

And there is a chance for one more error like:

Database Connector has throttled the response.
The response from database contains more than '2000' rows. 
The maximum number of rows that can be read through Database Connector is '2000'. 
The limit can be changed via the 'Set-SPBusinessDataCatalogThrottleConfig' cmdlet

This happens because the number of records in the table exceeds the throttling limit.

Solution for this Error

Follow these steps:

Open PowerShell, type the following lines, and execute them:

$bcs = Get-SPServiceApplicationProxy | where {$_.GetType().FullName -eq ('Microsoft.SharePoint.BusinessData.SharedService.' + 'BdcServiceApplicationProxy')}
$BCSThrottle = Get-SPBusinessDataCatalogThrottleConfig -Scope database -ThrottleType items -ServiceApplicationProxy $bcs
Set-SPBusinessDataCatalogThrottleConfig -Identity $BCSThrottle -Maximum 1000000 -Default 20000

How To : Access SAP Business Data From Silverlight 4 Clients Using WCF RIA Services And LINQ to SAP

Introduction

The introduction of Microsoft’s WCF RIA Services for Silverlight 4 greatly simplified the development of N-tier business applications using Silverlight and ASP.NET. Using this technology, we can also easily access and integrate SAP business data into Silverlight clients.

This article shows how to provide a SAP domain service as a web service that is then consumed by a Silverlight client. The sample application allows the user to query customer data. The service uses LINQ to SAP from Theobald Software to connect to a SAP R/3 system.

Project Setup

The first step in setting up a new Silverlight 4 project with WCF RIA Services is to create a solution using the Visual Studio template Silverlight Navigation Application:

Visual Studio 2010 then asks you to create an additional web application, which hosts the Silverlight application. It’s important to select the checkbox Enable WCF RIA Services (see screenshot below):

After clicking the Ok button, Visual Studio generates a solution with two projects, one Silverlight 4 project and one ASP.NET project. In the next section, we will create the SAP data access layer using the LINQ to SAP designer.

LINQ to SAP

The LINQ to SAP provider and its Visual Studio 2010 designer offer a very handy way to design SAP interfaces visually. The designer generates the code for the SAP data access layer automatically, similar to LINQ to SQL. The LINQ provider is part of the .NET library ERPConnect.net from Theobald Software. The company offers a demo version for download on its homepage.

The next step is to create the needed LINQ to SAP file by opening the Add New Item dialog:

LINQ to SAP is internally called LINQ to ERP.

Clicking the Add button creates a new ERP file and opens the LINQ designer. Now, drag the Function object from the toolbox and drop it onto the designer surface. If you have not entered the SAP connection data so far, you are now asked to do so:

Enter the connection data for your SAP R/3 system and then click the Ok button. Next, search for and select the SAP function module named SD_RFC_CUSTOMER_GET. The function module provides a list of customer data.

The RFC Function modules dialog opens and lets you define the necessary parameters:

In the function dialog, change the method name to GetCustomers and mark the Pass checkbox for the NAME1 parameter on the Exports tab. Also set the variable name to namePattern. On the Tables tab, mark the Return checkbox for the table parameter CUSTOMER_T and set the table and structure names to CustomerTable and CustomerRow:

After clicking the Ok button and saving the ERP file, the LINQ designer will generate a SAPContext class which contains a method called GetCustomers with an input parameter named namePattern. This method executes a search for SAP customer data, allowing the user to enter a wildcard pattern. The method returns a table of customer data:

On the LINQ designer level (click the free part of the LINQ designer surface), the property Create Object Outside Of Context Class must be set to True:

Now, we finally add a Customer class, which we will use in our SAP domain service later on. This class and its values will be transmitted to the Silverlight client by WCF RIA Services. It’s important to set the Key attribute on the identifier fields for WCF RIA Services; otherwise the project will not compile:
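The class itself is shown in the screenshot; as a rough sketch (the property names below are illustrative assumptions, with the SAP customer number serving as the identifier), it might look like this:

using System.ComponentModel.DataAnnotations;

public class Customer
{
    // WCF RIA Services requires a Key attribute on the identifier field.
    [Key]
    public string CustomerNumber { get; set; }

    public string Name { get; set; }
    public string City { get; set; }
}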

That’s it! We now have our SAP data access layer ready to use and can start adding the domain service in the next section.

SAP Domain Service

The next step is to add the SAP domain service to our web project. A domain service is a specialized WCF service and is one of the core constructs of WCF RIA Services. The service exposes operations that can be called from the generated client code. On the client side, we use the domain context to access the domain service on the server side.

Add a new Domain Service Class and name it SAPService:

In the upcoming dialog, create an empty domain service class by just clicking the Ok button:

Next, we add the service operation GetCustomers to the SAP service with a name pattern parameter. The operation then returns a list of Customer objects. The Query attribute limits the result set to 200 entries.

The operation uses the visually designed SAP data access logic to retrieve the SAP customer data. First of all, an instance of the SAPContext class will be created using a connection string (see sample in code). For more details regarding the SAP connection string, see the ERPConnect.net manual.

The LINQ to SAP context class contains the GetCustomers method, which we call using the given namePattern parameter. Next, the operation creates an instance of the Customer class for each customer record returned by SAP.

The license code for the ERPConnect.net library is set in the constructor of our domain service class.
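The actual code is shown in the screenshot below. As a hedged sketch only (the connection string values, the license call, and the SAP field names KUNNR, NAME1, and ORT01 are placeholders rather than the article’s verbatim code), the service might look like this:

[EnableClientAccess()]
public class SAPService : DomainService
{
    public SAPService()
    {
        // Placeholder: set the ERPConnect.net license code here (see the vendor manual).
    }

    [Query(ResultLimit = 200)]
    public IEnumerable<Customer> GetCustomers(string namePattern)
    {
        // The connection string format is described in the ERPConnect.net manual.
        SAPContext ctx = new SAPContext("ASHOST=sapsrv; SYSNR=00; CLIENT=800; USER=user; PASSWD=secret");

        // GetCustomers is the method generated by the LINQ to SAP designer.
        var customers = new List<Customer>();
        foreach (CustomerRow row in ctx.GetCustomers(namePattern))
        {
            customers.Add(new Customer
            {
                CustomerNumber = row.KUNNR,
                Name = row.NAME1,
                City = row.ORT01
            });
        }
        return customers;
    }
}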

That’s all we need on the server side.

In the next section, we will implement the Silverlight client.

Silverlight Client

The implementation of the client side is straightforward. The home view contains a DataGrid control to display the list of customer data, as well as a search area with TextBox and Button controls to allow users to enter a name search pattern.

The click event handler of the load button, called OnLoadButtonClick, will execute the SAP service. The boilerplate code to access the web service was generated by WCF RIA Services in the subfolder Generated_Code in the Silverlight project.

First of all, an instance of the SAPContext domain context is created. Then, we load the GetCustomersQuery query and execute the service operation on the server side using WCF RIA Services. If the domain service returns an error, the anonymous callback method marks the error as handled and displays the error message.

If the execution of the service operation succeeds, the result set is displayed in the DataGrid control.
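The handler itself is shown in the screenshot below; a minimal hedged sketch, assuming the generated domain context is named SAPContext and the controls are named txtNamePattern and dgrCustomers (both assumptions), could read:

private void OnLoadButtonClick(object sender, RoutedEventArgs e)
{
    SAPContext context = new SAPContext();

    // Load executes the GetCustomers operation on the server via WCF RIA Services.
    LoadOperation<Customer> loadOp = context.Load(
        context.GetCustomersQuery(txtNamePattern.Text),
        op =>
        {
            if (op.HasError)
            {
                // Mark the error as handled and show the message.
                op.MarkErrorAsHandled();
                MessageBox.Show(op.Error.Message);
            }
        }, null);

    // Entities fills when the asynchronous load completes.
    dgrCustomers.ItemsSource = loadOp.Entities;
}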

The next screenshot shows the final result:

That’s it.

Summary

This article has shown how easily SAP customer data can be integrated into Silverlight clients using tools like WCF RIA Services and LINQ to SAP. It is quite simple to extend the SAP service with all kinds of further operations.

How To : Add a Promoted Links Web Part to SharePoint 2013 App Default page

This article shows you how to add a Promoted Links web part to your app’s default page, as in the following figure:

 

To do this, follow these steps:
Open the shortcut menu for the project, and then choose Add, New Item

 

In the Templates pane, choose the List template, and then choose the Add button:

Enter a list name, choose the Create a non-customizable list based on an existing list type option button, choose Promoted links in its list, and then choose the Finish button

In Solution Explorer, under the list instance node, open the Elements.xml file.
Add the promoted links items as follows:
<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <ListInstance Title="MyPromotedLinks"
                OnQuickLaunch="TRUE"
                TemplateType="170"
                FeatureId="192efa95-e50c-475e-87ab-361cede5dd7f"
                Url="Lists/MyPromotedLinks"
                Description="My List Instance">
    <Data>
      <Rows>
        <Row>
          <Field Name="Title">Twitter</Field>
          <Field Name="BackgroundImageLocation">/PromotedLinksApp/Images/twitter.png</Field>
          <Field Name="Description">Muawiyah Shannak Twitter</Field>
          <Field Name="LinkLocation">https://twitter.com/MuShannak</Field>
          <Field Name="Order">1</Field>
        </Row>
        <Row>
          <Field Name="Title">Blog</Field>
          <Field Name="BackgroundImageLocation">/PromotedLinksApp/Images/blogger.png</Field>
          <Field Name="Description">Muawiyah Shannak Blog</Field>
          <Field Name="LinkLocation">http://mushannak.blogspot.com</Field>
          <Field Name="Order">2</Field>
        </Row>
        <Row>
          <Field Name="Title">Linkedin</Field>
          <Field Name="BackgroundImageLocation">/PromotedLinksApp/Images/linkedin.png</Field>
          <Field Name="Description">Muawiyah Shannak Linkedin</Field>
          <Field Name="LinkLocation">http://ae.linkedin.com/in/shannak</Field>
          <Field Name="Order">3</Field>
        </Row>
      </Rows>
    </Data>
  </ListInstance>
</Elements>
In Solution Explorer, under the Pages node, open the Default.aspx file. Add the following tags inside the PlaceHolderMain placeholder:
<WebPartPages:WebPartZone ID="WebPartZone" runat="server" FrameType="None">
  <WebPartPages:XsltListViewWebPart ID="XsltListViewAppPromotedList"
      runat="server" ListUrl="Lists/MyPromotedLinks" IsIncluded="True"
      NoDefaultStyle="TRUE" Title="Images used in switcher"
      PageType="PAGE_NORMALVIEW" Default="False"
      ViewContentTypeId="0x">
  </WebPartPages:XsltListViewWebPart>
</WebPartPages:WebPartZone>

Deploy the solution and you will find a nice Promoted Links web part on the app’s default page!

Free Code to Create Cross-site Publishing Apps for SharePoint Online

Cross-site publishing is one of the powerful new capabilities in SharePoint 2013.  It enables the separation of data entry from display and breaks down the container barriers that have traditionally existed in SharePoint (ex: rolling up information across site collections). 


Cross-site publishing is delivered through search and a number of new features, including list/library catalogs, catalog connections, and the content search web part.  Unfortunately, SharePoint Online/Office 365 doesn’t currently support these features.  Until they are added to the service (possibly in a quarterly update), customers will be looking for alternatives to close the gap.  In this post, I will outline several alternatives for delivering cross-site and search-driven content in SharePoint Online, and how to template these views for reuse.

I’m a huge proponent of SharePoint Online.  After visiting several Microsoft data centers, I feel confident that Microsoft is better positioned to run SharePoint infrastructure than almost any organization in the world.  SharePoint Online has very close feature parity to SharePoint on-premises, with the primary gaps being cross-site publishing and advanced business intelligence.  Although these capabilities have acceptable alternatives in the cloud (as will be outlined in this post), organizations looking to maximize the cloud might consider SharePoint running in IaaS for immediate access to these features.

 

Apps for SharePoint

The new SharePoint app model is fully supported in SharePoint Online and can be used to deliver customizations to SharePoint using any web technology.  New SharePoint APIs can be used with the app model to deliver an experience similar to cross-site publishing.  In fact, the content search web part could be re-written for delivery through the app model as an “App Part” for SharePoint Online. 
Although the app model provides great flexibility and reuse, it does come with some drawbacks.  Because an app part is delivered through a glorified IFRAME, it would be challenging to navigate to a new page from within the app part.  A link within the app would only navigate within the IFRAME (not the parent of the IFRAME).  Secondly, there isn’t a great mechanism for templating a site to automatically leverage an app part on its page(s).  Apps do not work with site templates, so a site that contains an app cannot be saved as a template.  Apps can be “stapled” to sites, but the app installed event (which would be needed to add the app part to a page) only fires when the app is installed into the app catalog.

REST APIs and Script Editor

The script editor web part is a powerful new tool that can help deliver flexible customization into SharePoint Online.  The script editor web part allows a block of client-side script to be added to any wiki or web part page in a site.  Combined with the new SharePoint REST APIs, the script editor web part can deliver mash-ups very similar to cross-site publishing and the content search web part.  Unlike apps for SharePoint, the script editor isn’t constrained by IFRAME containers, app permissions, or templating limitations.  In fact, a well-configured script editor web part could be exported and re-imported into the web part gallery for reuse.

Cross-site publishing leverages “catalogs” for precise querying of specific content.  Any list/library can be designated as a catalog.  By making this designation, SharePoint will automatically create managed properties for columns of the list/library and ultimately generate a search result source in sites that consume the catalog.  Although SharePoint Online doesn’t support catalogs, it supports the building blocks, such as managed properties and result sources.  These can be manually configured to provide the same precise querying in SharePoint Online and exploited in the script editor web part for display.

Calling Search REST APIs

<div id="divContentContainer"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "search/query?Querytext='ContentType:News'",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>

 

An easier approach might be to directly reference a list/library in the REST call of our client-side script.  This wouldn’t require manual search configuration and would provide real-time publishing (no waiting for new items to get indexed).  You could think of this approach as similar to a content by query web part that works across site collections (possibly even farms), and the REST API makes it all possible!

List REST APIs

<div id="divContentContainer"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 0",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>

 

The content search web part uses display templates to render search results in different arrangements (ex: list with images, image carousel, etc.).  There are two types of display templates the content search web part leverages: the control template, which renders the container around the items, and the item template, which renders each individual item in the search results.  This is very similar to the way a Repeater control works in ASP.NET.  Display templates are authored using HTML, but are converted to client-side script automatically by SharePoint for rendering.  I mention this because our approach is very similar: we will leverage a container and then loop through and render items in script.  In fact, all the examples in this post were converted from display templates in a public site I’m working on.

Item display template for content search web part

<!--#_
var encodedId = $htmlEncode(ctx.ClientControl.get_nextUniqueId() + "_ImageTitle_");
var rem = index % 3;
var even = true;
if (rem == 1)
    even = false;

var pictureURL = $getItemValue(ctx, "Picture URL");
var pictureId = encodedId + "picture";
var pictureMarkup = Srch.ContentBySearch.getPictureMarkup(pictureURL, 140, 90, ctx.CurrentItem, "mtcImg140", line1, pictureId);
var pictureLinkId = encodedId + "pictureLink";
var pictureContainerId = encodedId + "pictureContainer";
var dataContainerId = encodedId + "dataContainer";
var dataContainerOverlayId = encodedId + "dataContainerOverlay";
var line1LinkId = encodedId + "line1Link";
var line1Id = encodedId + "line1";
_#-->
<div style="width: 320px; float: left; display: table; margin-bottom: 10px; margin-top: 5px;">
   <a href="_#= linkURL =#_">
      <div style="float: left; width: 140px; padding-right: 10px;">
         <img src="_#= pictureURL =#_" class="mtcImg140" style="width: 140px;" />
      </div>
      <div style="float: left; width: 170px">
         <div class="mtcProfileHeader mtcProfileHeaderP">_#= line1 =#_</div>
      </div>
   </a>
</div>

 

Script equivalent

<div id="divUnfeaturedNews"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://richdizzcom.sharepoint.com/sites/dallasmtcauth/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 0",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //get the details for each item
                var listData = data.d.results;
                var itemCount = listData.length;
                var processedCount = 0;
                var ul = $("<ul style='list-style-type: none; padding-left: 0px;' class='cbs-List'>");
                for (i = 0; i < listData.length; i++) {
                    $.ajax({
                        url: listData[i].__metadata["uri"] + "/FieldValuesAsHtml",
                        type: "GET",
                        headers: { "Accept": "application/json;odata=verbose" },
                        success: function (data) {
                            processedCount++;
                            var htmlStr = "<li style='display: inline;'><div style='width: 320px; float: left; display: table; margin-bottom: 10px; margin-top: 5px;'>";
                            htmlStr += "<a href='#'>";
                            htmlStr += "<div style='float: left; width: 140px; padding-right: 10px;'>";
                            htmlStr += setImageWidth(data.d.PublishingRollupImage, '140');
                            htmlStr += "</div>";
                            htmlStr += "<div style='float: left; width: 170px'>";
                            htmlStr += "<div class='mtcProfileHeader mtcProfileHeaderP'>" + data.d.Title + "</div>";
                            htmlStr += "</div></a></div></li>";
                            ul.append($(htmlStr));
                            if (processedCount == itemCount) {
                                $("#divUnfeaturedNews").append(ul);
                            }
                        },
                        error: function (data) {
                            alert(data.statusText);
                        }
                    });
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });

    function setImageWidth(imgString, width) {
        var img = $(imgString);
        img.css('width', width);
        return img[0].outerHTML;
    }
</script>

 

Even one of the more complex carousel views from my site took less than 30 minutes to convert to the script editor approach.

Advanced carousel script

<div id="divFeaturedNews">
    <div class="mtc-Slideshow" id="divSlideShow" style="width: 610px;">
        <div style="width: 100%; float: left;">
            <div id="divSlideShowSection">
                <div style="width: 100%;">
                    <div class="mtc-SlideshowItems" id="divSlideShowSectionContainer" style="width: 610px; height: 275px; float: left; border-style: none; overflow: hidden; position: relative;">
                        <div id="divFeaturedNewsItemContainer">
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </div>
</div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://richdizzcom.sharepoint.com/sites/dallasmtcauth/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 1&$top=4",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                var listData = data.d.results;
                for (i = 0; i < listData.length; i++) {
                    getItemDetails(listData, i, listData.length);
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });
    var processCount = 0;
    function getItemDetails(listData, i, count) {
        $.ajax({
            url: listData[i].__metadata["uri"] + "/FieldValuesAsHtml",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                processCount++;
                var itemHtml = "<div class='mtcItems' id='divPic_" + i + "' style='width: 610px; height: 275px; float: left; position: absolute; border-bottom: 1px dotted #ababab; z-index: 1; left: 0px;'>";
                itemHtml += "<div id='container_" + i + "' style='width: 610px; height: 275px; float: left;'>";
                itemHtml += "<a href='#' title='" + data.d.Caption_x005f_x0020_x005f_Title + "' style='width: 610px; height: 275px;'>";
                itemHtml += data.d.Feature_x005f_x0020_x005f_Image;
                itemHtml += "</a></div></div>";
                itemHtml += "<div class='titleContainerClass' id='divTitle_" + i + "' data-originalidx='" + i + "' data-currentidx='" + i + "' style='height: 25px; z-index: 2; position: absolute; background-color: rgba(255, 255, 255, 0.8); cursor: pointer; padding-right: 10px; margin: 0px; padding-left: 10px; margin-top: 4px; color: #000; font-size: 18px;' onclick='changeSlide(this);'>";
                itemHtml += data.d.Caption_x005f_x0020_x005f_Title;
                itemHtml += "<span id='currentSpan_" + i + "' style='display: none; font-size: 16px;'>" + data.d.Caption_x005f_x0020_x005f_Body + "</span></div>";
                $('#divFeaturedNewsItemContainer').append(itemHtml);

                if (processCount == count) {
                    allItemsLoaded();
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    }
    window.mtc_init = function (controlDiv) {
        var slideItems = controlDiv.children;
        for (var i = 0; i < slideItems.length; i++) {
            if (i > 0) {
                slideItems[i].style.left = '610px';
            }
        };
    };

    function allItemsLoaded() {
        var slideshows = document.querySelectorAll(".mtc-SlideshowItems");
        for (var i = 0; i < slideshows.length; i++) {
            mtc_init(slideshows[i].children[0]);
        }

        var div = $('#divTitle_0');
        cssTitle(div, true);
        var top = 160;
        for (i = 1; i < 4; i++) {
            var divx = $('#divTitle_' + i);
            cssTitle(divx, false);
            divx.css('top', top);
            top += 35;
        }
    }

    function cssTitle(div, selected) {
        if (selected) {
            div.css('height', 'auto');
            div.css('width', '300px');
            div.css('top', '10px');
            div.css('left', '0px');
            div.css('font-size', '26px');
            div.css('padding-top', '5px');
            div.css('padding-bottom', '5px');
            div.find('span').css('display', 'block');
        }
        else {
            div.css('height', '25px');
            div.css('width', 'auto');
            div.css('left', '0px');
            div.css('font-size', '18px');
            div.css('padding-top', '0px');
            div.css('padding-bottom', '0px');
            div.find('span').css('display', 'none');
        }
    }

    window.changeSlide = function (item) {
        //get all title containers
        var listItems = document.querySelectorAll('.titleContainerClass');
        var currentIndexVals = { 0: null, 1: null, 2: null, 3: null };
        var newIndexVals = { 0: null, 1: null, 2: null, 3: null };

        for (var i = 0; i < listItems.length; i++) {
            //current index
            currentIndexVals[i] = parseInt(listItems[i].getAttribute('data-currentidx'));
        }

        var selectedIndex = 0; //selected index will always be 0
        var leftOffset = '';
        var originalSelectedIndex = '';

        var nextSelected = '';
        var originalNextIndex = '';

        if (item == null) {
            var item0 = document.querySelector('[data-currentidx="' + currentIndexVals[0] + '"]');
            originalSelectedIndex = parseInt(item0.getAttribute('data-originalidx'));
            originalNextIndex = originalSelectedIndex + 1;
            nextSelected = currentIndexVals[0] + 1;
        }
        else {
            nextSelected = item.getAttribute('data-currentidx');
            originalNextIndex = item.getAttribute('data-originalidx');
        }

        if (nextSelected == 0) { return; }

        for (i = 0; i < listItems.length; i++) {
            if (currentIndexVals[i] == selectedIndex) {
                //this is the selected item, so move to bottom and animate
                var div = $('[data-currentidx="0"]');
                cssTitle(div, false);
                div.css('left', '-400px');
                div.css('top', '230px');

                newIndexVals[i] = 3;
                var item0 = document.querySelector('[data-currentidx="0"]');
                originalSelectedIndex = item0.getAttribute('data-originalidx');

                //animate
                div.delay(500).animate(
                    { left: '0px' }, 500, function () {
                    });
            }
            else if (currentIndexVals[i] == nextSelected) {
                //this is the NEW selected item, so resize and slide in as selected
                var div = $('[data-currentidx="' + nextSelected + '"]');
                cssTitle(div, true);
                div.css('left', '-610px');

                newIndexVals[i] = 0;

                //animate
                div.delay(500).animate(
                    { left: '0px' }, 500, function () {
                    });
            }
            else {
                //move up in queue
                var curIdx = currentIndexVals[i];
                var div = $('[data-currentidx="' + curIdx + '"]');

                var topStr = div.css('top');
                var topInt = parseInt(topStr.substring(0, topStr.length - 1));

                if (curIdx != 1 && nextSelected == 1 || curIdx > nextSelected) {
                    topInt = topInt - 35;
                    if (curIdx - 1 == 2) { newIndexVals[i] = 2 };
                    if (curIdx - 1 == 1) { newIndexVals[i] = 1 };
                }

                //move up
                div.animate(
                    { top: topInt }, 500, function () {
                    });
            }
        };

        if (originalNextIndex < 0)
            originalNextIndex = listItems.length - 1;

        //adjust pictures
        $('#divPic_' + originalNextIndex).css('left', '610px');
        leftOffset = '-610px';

        $('#divPic_' + originalSelectedIndex).animate(
            { left: leftOffset }, 500, function () {
            });

        $('#divPic_' + originalNextIndex).animate(
            { left: '0px' }, 500, function () {
            });

        var item0 = document.querySelector('[data-currentidx="' + currentIndexVals[0] + '"]');
        var item1 = document.querySelector('[data-currentidx="' + currentIndexVals[1] + '"]');
        var item2 = document.querySelector('[data-currentidx="' + currentIndexVals[2] + '"]');
        var item3 = document.querySelector('[data-currentidx="' + currentIndexVals[3] + '"]');
        if (newIndexVals[0] != null) { item0.setAttribute('data-currentidx', newIndexVals[0]) };
        if (newIndexVals[1] != null) { item1.setAttribute('data-currentidx', newIndexVals[1]) };
        if (newIndexVals[2] != null) { item2.setAttribute('data-currentidx', newIndexVals[2]) };
        if (newIndexVals[3] != null) { item3.setAttribute('data-currentidx', newIndexVals[3]) };
    };
</script>

 

End-result of script editors in SharePoint Online

Separate authoring site collection

Final Thoughts

How To : Use Git Tools for TFS Integration

Git – TFS Integration – Why it matters


For many small development shops, the idea of using TFS and a centralized source control repository is anathema. The mere thought of being restricted by a software configuration manager on how and when to branch or merge cuts against everything they cherish in software development.

 

Git is their natural and chosen ground for managing source code. The freedom and flexibility of using Git enables them to work where they are. This is especially true if they are working as part of a distributed team on modular projects.

Microsoft addressed many of the existing concerns with TFS source control with the advent of TFS 2012 and local workspaces. However, even though local workspaces enable great flexibility in offline work, they are still ultimately tied to a central repository and the policies and restrictions imposed on it.

 

Enter Git support in TFS. Git support currently comes in two forms: standalone Git support in Visual Studio, and Git support with TFS.

Git support with Visual Studio is completely straightforward. Simply change the source control plug-in selection to the Microsoft Git Provider, and all the power and flexibility of Git is available to the Visual Studio developer, such as private branches and online collaboration with Git hosts such as GitHub and Bitbucket.


Configuring Git for Visual Studio Source Control

However, from an ALM perspective, the real power and the compelling feature of Microsoft’s integration with Git is the ability to work with TFS.

 

Developers still get all the advantages and flexibility of Git, but can also take advantage of the ALM features of TFS such as work item tracking, team tools and integrated build. The Git – TFS integration gets us much closer to the ultimate goal of true cross-platform support in a single ALM toolset.

 

The TFS – Git integration can be used in a couple of ways. The first option is the ability to essentially synchronize a Git repository with TFS source control using the Git-TF utility. This utility makes it easy to clone sources from TFS, fetch updates from TFS, and push changes back to TFS.

 

What’s more, it fully supports TFS shelvesets and work item integration, which presents some exciting possibilities. The features and functionality Git-TF provides makes it a compelling solution and a credible compromise between centrally managed teams with source control and distributed teams with distributed source control.

The second option, available now only through Microsoft’s hosted TFS service, is the ability for organizations to create TFS team projects with Git-hosted source control (this ability is reportedly planned for on-premises TFS in the next release). This is a fairly exciting development.

 

Having the choice between native TFS version control and Git when creating a team project opens many doors that hitherto were locked shut.

remoteRepo1

XCode IDE connected to a TFS hosted repository

Eclipse, XCode, Visual Studio and any other IDE that supports Git can now be used to leverage the powerful ALM features TFS provides.

As an ALM consultant, that’s the part that excites me the most. Hosting all development efforts in a single environment that supports all the various technologies in play, and being able to track and manage those efforts with agility and transparency, is a huge benefit to any organization that provides multiple platform solutions.

 

Even those who don’t will now have the option to at least evaluate the feasibility of utilizing TFS in development environments not typically associated with a Microsoft project.

The mythical promised land of cross-platform ALM may have just become quite less mythical.

 

Microsoft’s Tool for Git and TFS Integration

http://gittf.codeplex.com


Working with Teams

The Git-TF tool is most easily used by a single developer or multiple developers working independently with their own isolated Git repos. That is, each developer uses Git-TF to clone a local repo where they can then use Git to manage their local development that will eventually be checked in to TFS. In this “hub and spoke” configuration, all code is shared through TFS at the “hub” and each developer using Git becomes a “spoke”. Developers looking to collaborate using Git’s distributed sharing capabilities will want to work in a specific configuration described below.

Most often, developers collaborating with Git have cloned from a common repo. When it comes time to share divergent changes, conflict resolution is easy because each repository shares the same common base version. Many times, conflicts are automatically resolved. One of the keys to this merging of histories is that each commit is assigned a unique identifier that is generated by the contents of the commit. When working with Git-TF, two repositories cloned from the same TFS path will not have the same commit IDs unless the clones were done at the same point in TFS history, and with the same depth. In the event that two Git repos that were independently cloned using Git-TF share changes directly, the result will be a baseless merge of the repositories and a large number of conflicts. For this reason, it is not recommended that teams using Git-TF ever share changes directly through Git (i.e. using git push and git pull).

Instead, it is recommended that a team working with Git-TF and collaborating with Git do so by designating a single repo as the point of contact with TFS. This configuration may look as follows for a team of three developers:

          [TFS]      [Shared Git repo]
            |         ^ (2)  |       \
            |        /       |        \
            |       /        |         \
            V (1)  /         V (3)      V (4)
       [Alice's Repo]   [Bob's Repo]   [Charlie's Repo]
 

In the configuration above the actions would be as follows:

  1. Using the git tf clone command, Alice clones a path from TFS into a local Git repo.
  2. Next, Alice uses git push to push the commit created in her local Git repo into the team’s shared Git repo.
  3. Bob can then use git clone to clone down the changes that Alice pushed.
  4. Charlie can also use git clone to clone down the changes that Alice pushed.

Both Bob and Charlie only ever interact with the team’s shared Git repo using git push and git pull. They can also interact directly with one another’s repos (or with Alice’s), but should never use Git-TF commands to interact with TFVC.

When working with the team, Alice will typically develop locally and use git push and git pull to share changes with the team. When the team decides they have changes to share with TFS, Alice will use git tf checkin to share those changes (typically git tf checkin --shallow will be used). Likewise, if there are changes that the team needs from TFVC, Alice will perform a git tf pull, using the --merge or --rebase options as appropriate, and then use git push to share the changes with the team.

Note that (until Issue 77 is addressed) all changes coming into the TFVC repository will appear to come from Alice’s TFS identity. This is fine if only Alice has an identity on that TFVC project, but it may well not be what you want if Bob and Charlie also have valid identities in that TFS project.

Rebase vs. Merge

Once changes have been fetched from TFS using git tf pull (or git tf fetch), those changes must either be merged with the HEAD or have any changes since the last fetch rebased on top of FETCH_HEAD. Git-TF allows developers to work in either manner, though if the repo that is sharing changes with TFS has shared any commits with other Git users, then this rebase may result in significant conflicts (see The Perils of Rebasing). For this reason, it is recommended that any team working in the aforementioned team configuration use git tf pull with the default --merge option (or use git merge FETCH_HEAD to incorporate changes made in TFS after fetching manually).

Recommended Git Settings

When using the Git-TF tools, there are a few recommended settings that should make it easier to work with other developers that are using TFS.

Line Endings

core.autocrlf = false

Git has a feature to allow line endings to be normalized for a repository, and it provides options for how those line endings should be set when files are checked out. TFS does not have any feature to normalize line endings – it stores exactly what is checked in by the user. When using Git-TF, choosing to normalize line endings to Unix-style line endings (LF) will likely result in TFS users (especially those using VS) changing the line endings back to Windows-style line endings (CRLF). As a result, it is recommended to set the core.autocrlf option to false, which will keep line endings unchanged in the Git repo.

Ignore case

core.ignorecase = true

TFS does not allow multiple files that differ only in case to exist in the same folder at the same time. Git users working on non-Windows machines could commit files to their repo that differ only in case, and attempting to check in those changes to TFS will result in an error. To avoid these types of errors, the core.ignorecase option should be set to true.

How To : Create, Edit and Maintaining a Coded UI Test for Silverlight Application

Using the Microsoft Visual Studio 2013 Coded UI Test plugin for Silverlight, you can create Coded UI Tests or action recordings for Silverlight 5.0 applications.

Using Microsoft Visual Studio 2010 Feature Pack 2, you can create coded UI tests or action recordings for Silverlight 4 applications. Action recordings let you fast-forward through steps in a manual test. For more information about action recordings or coded UI tests, see How to: Create an Action Recording or How to: Create a Coded UI Test.

In this walkthrough, you will learn the procedures that are required to test a Silverlight control in a Silverlight based application. The walkthrough takes you through the following procedures:

Prerequisites

 

For this walkthrough you will need:

To prepare the walkthrough

  1. Verify that you have the Silverlight 4 developer runtime available at Silverlight Developer 4 for Developers.

  2. Verify that you have completed the procedures in Walkthrough: Creating a RIA Services Solution.

    The result will be a simple Silverlight application that uses a Silverlight grid control. Later, you will use the grid control in this walkthrough and perform coded UI tests on it.

  3. For more information about supported and unsupported Silverlight controls, see How to: Set Up Your Silverlight Application for Testing.

  4. With the RIAServicesExample you created in Walkthrough: Creating a RIA Services Solution running, copy the address of the Web application to the clipboard or a notepad file. For example, the address might resemble this: http://localhost: <port number>/RIAServicesExampleTestPage.aspx.

Add the SilverlightUIAutomationHelper.dll to Your Silverlight 4 Project

 

To test your Silverlight applications, you must add Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper.dll as a reference to your Silverlight 4 application so that the Silverlight controls can be identified. This helper assembly instruments your Silverlight application so that information about a control is available to the Silverlight plugin API used in your coded UI test or action recording. This assembly cannot be redistributed; therefore, you must add the reference conditionally when you build the application. By taking this approach, the assembly is not redistributed when you deploy your software to a customer.

To add the SilverlightUIAutomationHelper.dll

  1. For each Silverlight project in your solution that you want to test, you must add the SilverlightUIAutomationHelper.dll. In Solution Explorer, right-click the RIAServicesExample project and select Unload Project.

    The project is displayed in Solution Explorer as RIAServicesExample (unavailable).

  2. Right-click the project again and then click Edit RIAServicesExample.csproj.

    The RIAServicesExample.csproj file is opened in the Code Editor. You will see <PropertyGroup> nodes followed by <ItemGroup> nodes. You must make the following two modifications:

    1. To set the production condition, add the following entry to the first <PropertyGroup> node:

       
      <Production Condition="'$(Production)'==''">False</Production>
      
    2. To add the DLL when the build is not a production build, insert the following <Choose> node after the <PropertyGroup> nodes, but before the <ItemGroup> nodes:

       
      <Choose>
         <When Condition=" '$(Production)'=='False' ">
               <ItemGroup>
                 <Reference Include="Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper">
                 </Reference>
               </ItemGroup>
             </When>
       </Choose>
      
  3. To save the file, click Save.

  4. To reload these changes, right-click the server project and then click Reload Project

    Caution

    If you have multiple Silverlight projects that you want to test, you must follow these steps for each project.

    Important

    To remove the SilverlightUIAutomationHelper.dll so that it is not redistributed with your production code, set the Production condition value to True in the first <PropertyGroup> node. In this manner, the DLL is no longer added as a reference by the <Choose> node that you added to the project in the previous procedure. You can also set an environment variable named Production to the value True; then you can use MSBuild to build the Silverlight project without the SilverlightUIAutomationHelper.dll reference.

Create a Coded UI Test for RIAServicesExample Silverlight Application

 

To Create a Coded UI Test

  1. In Solution Explorer, right-click the solution, click Add and then select New Project.

    The Add New Project dialog box appears.

  2. In the Installed Templates pane, expand either Visual C# or Visual Basic, and then select Test.

  3. In the middle pane, select the Test Project template.

  4. Click OK.

    In Solution Explorer, the new test project named TestProject1 is added to your solution. Either the UnitTest1.cs or UnitTest1.vb file appears in the Code Editor. You can close the UnitTest1 file because it is not used in this walkthrough.

  5. In Solution Explorer, right-click TestProject1, click Add and then select Coded UI test.

    The Generate Code for Coded UI Test dialog box appears.

  6. Select the Record actions, edit UI map or add assertions option and then click OK.

    The UIMap – Coded UI Test Builder appears.

    For more information about the options in the dialog box, see How to: Create a Coded UI Test.

  7. Click Start Recording on the UIMap – Coded UI Test Builder. In several seconds, the Coded UI Test Builder will be ready.

    Start recording UI

  8. Launch Internet Explorer.

  9. In Internet Explorer’s address bar, enter the address of the Web application that you copied in a previous procedure. For example:

    http://localhost: <port number>/RIAServicesExampleTestPage.aspx

  10. Click one or two of the column headers to sort the data.

  11. Close Internet Explorer.

  12. On the UIMap – Coded UI Test Builder, click Generate Code.

  13. In the Method Name box, type SimpleSilverlightAppTest and then click Add and Generate. In several seconds, the coded UI test appears and is added to the solution.

  14. Close the UIMap – Coded UI Test Builder.

    The CodedUITest1.cs file appears in the Code Editor. (A sketch of what this generated file typically contains is shown after this list.)

    Note

    You can assign a unique automation property based on the type of Silverlight control in your application. For more information, see Set a Unique Automation Property for Silverlight Controls for Testing.
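For reference only, the generated CodedUITest1.cs typically contains a test class along the following lines. This is a simplified, hedged sketch; the actual generated file contains additional plumbing (TestContext, regions, and so on), and the SimpleSilverlightAppTest method name comes from step 13 above.

[CodedUITest]
public class CodedUITest1
{
    [TestMethod]
    public void CodedUITestMethod1()
    {
        // Replays the recorded actions (sorting the data grid columns).
        this.UIMap.SimpleSilverlightAppTest();
    }

    // Lazily creates the UIMap that holds the recorded UI actions.
    public UIMap UIMap
    {
        get
        {
            if (this.map == null)
            {
                this.map = new UIMap();
            }
            return this.map;
        }
    }

    private UIMap map;
}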

Run the Coded UI Test on the RIAServicesExample Silverlight Application

 

To run the coded UI test

  • On the Test menu, select Windows and then click Test View. In Test View, select CodedUITestMethod1 under the Test Name column and then click Run Selection in the toolbar.

    The coded UI test should successfully run using the Silverlight data grid control.

How To : Using the Proxy Pattern to implement Code Access Security

There are many ways to secure different parts of your application. The security of running code in .NET revolves around the concept of Code Access Security (CAS).

CAS

CAS determines the trustworthiness of an assembly based upon its origin and the characteristics of the assembly itself, such as its hash value. For example, code installed locally on the machine is more trusted than code downloaded from the Internet. The runtime will also validate an assembly’s metadata and type safety before that code is allowed to run.

 

There are many ways to write secure code and protect data using the .NET Framework. In this chapter, we explore such things as controlling access to types, encryption and decryption, random numbers, securely storing data, and using programmatic and declarative security.

 

Controlling Access to Types in a Local Assembly

 

Problem

You have an existing class that contains sensitive data, and you do not want clients to have direct access to any objects of this class. Instead, you want an intermediary object to talk to the clients and to allow access to sensitive data based on the client’s credentials. What’s more, you would also like to have specific queries and modifications to the sensitive data tracked, so that if an attacker manages to access the object, you will have a log of what the attacker was attempting to do.

Solution

Use the proxy design pattern to allow clients to talk directly to a proxy object. This proxy object will act as gatekeeper to the class that contains the sensitive data. To keep malicious users from accessing the class itself, make it internal, which will at least keep code without ReflectionPermissionFlag.MemberAccess access (which is currently given only in fully trusted code scenarios, such as executing code interactively on a local machine) from getting at it.

The namespaces we will be using are:

 
 
using System;
using System.IO;
using System.Security;
using System.Security.Permissions;
using System.Security.Principal;

Let’s start this design by creating an interface, as shown in Example 17-1, “ICompanyData interface”, that will be common to both the proxy objects and the object that contains sensitive data.

Example 17-1. ICompanyData interface

 
 
internal interface ICompanyData
{
    string AdminUserName
    {
        get;
        set;
    }

    string AdminPwd
    {
        get;
        set;
    }
    string CEOPhoneNumExt
    {
        get;
        set;
    }
    void RefreshData();
    void SaveNewData();
}

The CompanyData class shown in Example 17-2, “CompanyData class” is the underlying object that is “expensive” to create.

Example 17-2. CompanyData class

 
 
internal class CompanyData : ICompanyData
{
    public CompanyData()
    {
        Console.WriteLine("[CONCRETE] CompanyData Created");
        // Perform expensive initialization here.
        this.AdminUserName = "admin";
        this.AdminPwd = "password";
        this.CEOPhoneNumExt = "0000";
    }

    public string AdminUserName
    {
        get;
        set;
    }

    public string AdminPwd
    {
        get;
        set;
    }

    public string CEOPhoneNumExt
    {
        get;
        set;
    }

    public void RefreshData()
    {
        Console.WriteLine("[CONCRETE] Data Refreshed");
    }

    public void SaveNewData()
    {
        Console.WriteLine("[CONCRETE] Data Saved");
    }
}

The code shown in Example 17-3, “CompanyDataSecProxy security proxy class” for the security proxy class checks the caller’s permissions to determine whether the CompanyData object should be created and its methods or properties called.

Example 17-3. CompanyDataSecProxy security proxy class

 
 
public class CompanyDataSecProxy : ICompanyData
{
    public CompanyDataSecProxy()
    {
        Console.WriteLine("[SECPROXY] Created");

        // Must set principal policy first.
        AppDomain.CurrentDomain.SetPrincipalPolicy(PrincipalPolicy.WindowsPrincipal);
    }

    private ICompanyData coData = null;
    private PrincipalPermission admPerm =
        new PrincipalPermission(null, @"BUILTIN\Administrators", true);
    private PrincipalPermission guestPerm =
        new PrincipalPermission(null, @"BUILTIN\Guest", true);
    private PrincipalPermission powerPerm =
        new PrincipalPermission(null, @"BUILTIN\PowerUser", true);
    private PrincipalPermission userPerm =
        new PrincipalPermission(null, @"BUILTIN\User", true);

    public string AdminUserName
    {
        get
        {
            string userName = "";
            try
            {
                admPerm.Demand();
                Startup();
                userName = coData.AdminUserName;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("AdminUserName_get failed! {0}",e.ToString());
            }
            return (userName);
        }
        set
        {
            try
            {
                admPerm.Demand();
                Startup();
                coData.AdminUserName = value;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("AdminUserName_set failed! {0}",e.ToString());
            }
        }
    }

    public string AdminPwd
    {
        get
        {
            string pwd = "";
            try
            {
                admPerm.Demand();
                Startup();
                pwd = coData.AdminPwd;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("AdminPwd_get Failed! {0}",e.ToString());
            }

            return (pwd);
        }
        set
        {
            try
            {
                admPerm.Demand();
                Startup();
                coData.AdminPwd = value;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("AdminPwd_set Failed! {0}",e.ToString());
            }
        }
    }

    public string CEOPhoneNumExt
    {
        get
        {
            string ceoPhoneNum = "";
            try
            {
                admPerm.Union(powerPerm).Demand();
                Startup();
                ceoPhoneNum = coData.CEOPhoneNumExt;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("CEOPhoneNum_set Failed! {0}",e.ToString());
            }
            return (ceoPhoneNum);
        }
        set
        {
            try
            {
                admPerm.Demand();
                Startup();
                coData.CEOPhoneNumExt = value;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("CEOPhoneNum_set Failed! {0}",e.ToString());
            }
        }
    }
    public void RefreshData()
    {
        try
        {
            admPerm.Union(powerPerm.Union(userPerm)).Demand();
            Startup();
            Console.WriteLine("[SECPROXY] Data Refreshed");
            coData.RefreshData();
        }
        catch(SecurityException e)
        {
            Console.WriteLine("RefreshData Failed! {0}",e.ToString());
        }
    }

    public void SaveNewData()
    {
        try
        {
            admPerm.Union(powerPerm).Demand();
            Startup();
            Console.WriteLine("[SECPROXY] Data Saved");
            coData.SaveNewData();
        }
        catch(SecurityException e)
        {
            Console.WriteLine("SaveNewData Failed! {0}",e.ToString());
        }
    }

    // DO NOT forget to use [#define DOTRACE] to control the tracing proxy.
    private void Startup()
    {
        if (coData == null)
        {
#if (DOTRACE)
            coData = new CompanyDataTraceProxy();
#else
            coData = new CompanyData();
#endif
            Console.WriteLine("[SECPROXY] Refresh Data");
            coData.RefreshData();
        }
    }
}

When creating the PrincipalPermission objects as part of the object construction, you use string representations of the built-in groups (such as “BUILTIN\Administrators”) to set up the principal role. However, the names of these groups may be different depending on the locale the code runs under. It would be appropriate to use the WindowsAccountType.Administrator enumeration value to ease localization, because this value is defined to represent the administrator role regardless of locale. We used text here to clarify what was being done and also to access the Power Users role, which is not available through the WindowsAccountType enumeration.
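
For example, here is a minimal, locale-independent sketch (not part of the original recipe) that resolves a built-in group’s well-known SID to its localized name at run time and builds the PrincipalPermission from that:

// Sketch: build a PrincipalPermission from a well-known SID so the group
// name resolves correctly on any localized Windows installation.
using System.Security.Permissions;
using System.Security.Principal;

static PrincipalPermission MakeBuiltInRolePermission(WellKnownSidType sidType)
{
    // e.g., WellKnownSidType.BuiltinAdministratorsSid or BuiltinPowerUsersSid
    var sid = new SecurityIdentifier(sidType, null);
    string localizedGroupName = sid.Translate(typeof(NTAccount)).Value;
    return new PrincipalPermission(null, localizedGroupName, true);
}

// Usage:
// admPerm = MakeBuiltInRolePermission(WellKnownSidType.BuiltinAdministratorsSid);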

If the call to the CompanyData object passes through the CompanyDataSecProxy, then the user has permissions to access the underlying data. Any access to this data may be logged, so the administrator can check for any attempt to hack the CompanyData object. The code shown in Example 17-4, “CompanyDataTraceProxy tracing proxy class” is the tracing proxy used to log access to the various method and property access points in the CompanyData object (note that the CompanyDataSecProxy contains the code to turn this proxy object on or off).

Example 17-4. CompanyDataTraceProxy tracing proxy class

 
 
public class CompanyDataTraceProxy : ICompanyData
{    
    public CompanyDataTraceProxy() 
    {
        Console.WriteLine("[TRACEPROXY] Created");
        string path = Path.Combine(Path.GetTempPath(), "CompanyAccessTraceFile.txt");
        fileStream = new FileStream(path, FileMode.Append,
            FileAccess.Write, FileShare.None);
        traceWriter = new StreamWriter(fileStream);
        coData = new CompanyData();
    }

    private ICompanyData coData = null;
    private FileStream fileStream = null;
    private StreamWriter traceWriter = null;

    public string AdminPwd
    {
        get
        {
            traceWriter.WriteLine("AdminPwd read by user.");
            traceWriter.Flush();
            return (coData.AdminPwd);
        }
        set
        {
            traceWriter.WriteLine("AdminPwd written by user.");
            traceWriter.Flush();
            coData.AdminPwd = value;
        }
    }

    public string AdminUserName
    {
        get
        {
            traceWriter.WriteLine("AdminUserName read by user.");
            traceWriter.Flush();
            return (coData.AdminUserName);
        }
        set
        {
            traceWriter.WriteLine("AdminUserName written by user.");
            traceWriter.Flush(); 
            coData.AdminUserName = value;
        }
    }

    public string CEOPhoneNumExt
    {
        get
        {
            traceWriter.WriteLine("CEOPhoneNumExt read by user.");
            traceWriter.Flush();
            return (coData.CEOPhoneNumExt);
        }
        set
        {
            traceWriter.WriteLine("CEOPhoneNumExt written by user.");
            traceWriter.Flush();
            coData.CEOPhoneNumExt = value;
        }
    }

    public void RefreshData()
    {
        Console.WriteLine("[TRACEPROXY] Refresh Data");
        coData.RefreshData();
    }

    public void SaveNewData()
    {
        Console.WriteLine("[TRACEPROXY] Save Data");
        coData.SaveNewData();
    }
}

The proxy is used in the following manner:

 
 
   // Create the security proxy here.
    CompanyDataSecProxy companyDataSecProxy = new CompanyDataSecProxy();

    // Read some data.
    Console.WriteLine("CEOPhoneNumExt: " + companyDataSecProxy.CEOPhoneNumExt);

    // Write some data.
    companyDataSecProxy.AdminPwd = "asdf";
    companyDataSecProxy.AdminUserName = "asdf";

    // Save and refresh this data.
    companyDataSecProxy.SaveNewData();
    companyDataSecProxy.RefreshData();

Note that as long as the CompanyData object was accessible, you could have also written this to access the object directly:

 
 
    // Instantiate the CompanyData object directly without a proxy.
    CompanyData companyData = new CompanyData();

    // Read some data.
    Console.WriteLine("CEOPhoneNumExt: " + companyData.CEOPhoneNumExt);

    // Write some data.
    companyData.AdminPwd = "asdf";
    companyData.AdminUserName = "asdf";

    // Save and refresh this data.
    companyData.SaveNewData();
    companyData.RefreshData();

If these two blocks of code are run, the same fundamental actions occur: data is read, data is written, and data is updated/refreshed. This shows you that your proxy objects are set up correctly and function as they should.

Discussion

The proxy design pattern is useful for several tasks. The most notable, in COM, COM+, and .NET remoting, is marshaling data across boundaries such as AppDomains or even across a network. To the client, a proxy looks and acts exactly the same as its underlying object; fundamentally, the proxy object is just a wrapper around the object.

 

A proxy can test the security and/or identity permissions of the caller before the underlying object is created or accessed. Proxy objects can also be chained together to form several layers around an underlying object. Each proxy can be added or removed depending on the circumstances.

 

For the proxy object to look and act the same as its underlying object, both should implement the same interface. The implementation in this recipe uses an ICompanyData interface on both the proxies (CompanyDataSecProxy and CompanyDataTraceProxy) and the underlying object (CompanyData). If more proxies are created, they, too, need to implement this interface.

 

The CompanyData class represents an expensive object to create. In addition, this class contains a mixture of sensitive and nonsensitive data that requires permission checks to be made before the data is accessed. For this recipe, the CompanyData class simply contains a group of properties to access company data and two methods for updating and refreshing this data. You can replace this class with one of your own and create a corresponding interface that both the class and its proxies implement.
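
The recipe relies on, but does not reproduce, the ICompanyData interface; inferred from the members the proxies call, it would look something like this sketch:

// Inferred from the calls made by CompanyDataSecProxy and
// CompanyDataTraceProxy; the original definition is not shown in the recipe.
public interface ICompanyData
{
    string AdminUserName { get; set; }
    string AdminPwd { get; set; }
    string CEOPhoneNumExt { get; set; }
    void RefreshData();
    void SaveNewData();
}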

 

The CompanyDataSecProxy object is the object that a client must interact with. This object is responsible for determining whether the client has the correct privileges to access the method or property that it is calling. The get accessor of the AdminUserName property shows the structure of the code throughout most of this class:

 
 
  public string AdminUserName
    {
        get
        {
            string userName = ";
            try
            {
                admPerm.Demand();
                Startup();
                userName = coData.AdminUserName;
            }
            catch(SecurityException e)
            {
               Console.WriteLine("AdminUserName_get ailed!: {0}",e.ToString( ));
            }
            return (userName);
        }
        set
        {
            try
            {
                admPerm.Demand();
                Startup();
                coData.AdminUserName = value;
            }
            catch(SecurityException e)
            {
               Console.WriteLine("AdminUserName_set Failed! {0}",e.ToString( ));
            }
        }
    }

Initially, a single permission (admPerm) is demanded. If this demand fails, a SecurityException is thrown, which is handled by the catch clause. (Other exceptions are handed back to the caller.) If the Demand succeeds, the Startup method is called. It is in charge of instantiating either the next proxy object in the chain (CompanyDataTraceProxy) or the underlying CompanyData object. The choice depends on whether the DOTRACE preprocessor symbol has been defined. You may use a different technique, such as a registry key, to turn tracing on or off if you wish.

 

This proxy class uses the private field coData to hold a reference to an ICompanyData type, which can be either a CompanyDataTraceProxy or the CompanyData object. This reference allows you to chain several proxies together.


 

The CompanyDataTraceProxy simply logs any access to the CompanyData object’s information to a text file. Since this proxy will not attempt to prevent a client from accessing the CompanyData object, the CompanyData object is created and explicitly called in each property and method of this object.

A Look At : Application Management and Governance in SharePoint 2013

Summary: Learn how to govern applications for SharePoint 2013 by creating a customization policy and understanding the app model, branding, and life-cycle management.


How will you manage the applications that are developed for your environment? What customizations do you allow in your applications, and what are your processes for managing those applications?

 

For effective and manageable applications, your organization should consider the following:

  • Customization policy   SharePoint 2013 includes customizable features and capabilities that span multiple product areas, such as business intelligence, forms, workflow, and content management. Customization can introduce risks to the stability, maintenance, and security of the environment. To support customization while controlling its scope, you should develop a customization policy.
  • Life-cycle management   Follow best practices to manage applications and keep your environments in sync.
  • Branding   If you are designing an information architecture and a set of sites to use across an organization, consider including branding in your governance plan. A formal set of branding policies helps ensure that sites consistently use enterprise imagery, fonts, themes, and other design elements.
  • Solutions or apps for SharePoint?   Decide whether a solution or an app for SharePoint would be the best choice for specific customizations.

Get developer guidance about customizing and branding SharePoint 2013 on MSDN: Build sites for SharePoint 2013.

This article is part of a set of articles about governance.

The What is governance? poster gives a summary of this content. Download the PDF version or Visio version, or Zoom into the model in full detail with Zoom.it from Microsoft.

Determine the types of customizations you want to allow and how to manage them. Your customization policy should include:

  • Service-level descriptions   What are the parameters for supporting and managing customizations in your environments? See Service-level agreements.
  • Guidelines for updating customizations   How do you manage changes to customizations, and how do you roll out those changes to your environments? Consider ways to manage source code, such as a source control system and standards for documenting the code.
  • Processes for analyzing   How do you understand whether a particular customization is working well in your environment, or how do you decide which ones to create, change, or retire?
  • Approved tools for customization   Consider development standards, such as coding best practices and the tools that you will use across your organization. For example, you should decide whether to allow the use of SharePoint Designer 2013 and Design Manager, and specify which site elements can be customized and by whom.
  • Process for piloting and testing customizations   How do you test and deploy customizations? How many people should be in a pilot testing group? What are your standards for testing and validating customizations?
  • Who is responsible for ongoing support   Who will be responsible for supporting customizations in your environments—individual teams or a central group?
  • Guidelines for packaging and deploying customizations   Do you have individual packages for each, or do you include several in a feature or solution? Which customizations should be apps for SharePoint instead of solutions? How do you ensure that customizations in one environment do not affect the rest of your SharePoint implementation?
  • Specific policies regarding each potential type of customization   What types of customizations do you allow?

    For more information about kinds of customizations and their potential risks, see the Customizations table later in this article. For more information about processes for managing customizations, see the white paper SharePoint Products and Technologies customization policy. Most of this content still applies to SharePoint 2013.

  • Policies around using the App Catalog and SharePoint Store   Which apps for SharePoint do you want to make available to your organization? Can users purchase apps directly? See Solutions or apps for SharePoint? later in this article for more information.

The highly customizable design of SharePoint products enables you to provide the look, behavior, or functionality that meets your business needs. Customizations can introduce risk to your environment, whether that risk is to the environment’s performance, availability, or supportability. Conversely, a “no customizations” policy severely restricts your organization’s ability to take advantage of the SharePoint platform.

Not all customizations are the same. You must decide carefully which kinds of customizations to allow in your environment, and ensure that they support the performance, availability, and supportability you want for it. Your governance policy should balance a level of acceptable risk against the business needs of your organization.

What is considered a customization? All of the following are considered kinds of customizations in SharePoint products:

  • Configuration   Using the SharePoint user interface to configure SharePoint products.
  • Branding   Changing logos, styles, colors, master pages and page layouts, and so on to create a custom look for your SharePoint sites. See more about branding.
  • Custom code   Using developer tools to add or change functionality in SharePoint products or to interact with other applications. Risk can vary depending on the kind of functionality and the level of trust (full-trust solutions should be used rarely; consider apps for SharePoint first).
    Tip:
    Sandboxed solutions are deprecated in this release, so they are not the best option for custom code in the long term.

Some customizations have very little risk or impact on your environment. Others have the potential for much higher risk and impact. The following table provides examples of different kinds of customizations, the risk level associated with that kind of customization, and potential issues that you might face if you allow that kind of customization.

Customizations

Risk level: Unsupported/High
Types of customizations and examples: Unsupported customizations, such as direct changes to the database schema or modifying files on the file system.
Considerations or impact:
  • Will not be supported through Microsoft Customer Support.
  • Will be unable to upgrade.
  • Do not use.

Risk level: Moderate to high
Types of customizations and examples: Creating applications that interact with or redirect actions in key pipelines, such as events, claims, and so on.
Considerations or impact:
  • Potential for service outage or performance issues.
  • Might require rework at upgrade.

Risk level: Moderate to low
Types of customizations and examples: Using a custom Web Part outside a sandbox environment, creating custom actions such as adding a menu item, or creating a custom site provisioning process.
Considerations or impact:
  • Short- or long-term performance issues or page errors.
  • Might require rework at upgrade.

Risk level: Low
Types of customizations and examples: Using solutions in a sandbox environment.
Considerations or impact: Short-term performance issues; you can avoid some performance issues by using resource throttling and quotas.

Risk level: Very low to no risk
Types of customizations and examples: Using apps for SharePoint, or using functionality within the product or configurations, such as associating a workflow with a list or using an instance of a built-in Web Part.
Considerations or impact: Minor configuration or page errors that would have to be addressed. Apps can be uninstalled or updated.

Note:
For more information about customizations and upgrade, see Considerations for specific customizations.

 

 

Also, when you think through the customizations to allow in your environment, consider carefully whether a particular customization is necessary. If it recreates functionality that is already available in the product (such as creating a Web Part that does the same thing as the Content Editor Web Part or the Content by Query Web Part), then that might be unnecessary work.

Consider first whether the standard functionality can do what you want, or check the SharePoint Store to see if there is an app for SharePoint available that does what you need.

Follow these best practices to manage applications based on SharePoint 2013 throughout their life cycle:

  • Use separate development, preproduction, and production environments, and keep these environments as synchronized as possible so that you can accurately test your customizations.
  • Test all customizations before you release them for the first time, and test again after any updates before you release them to your production environment.
  • Use source code control and solution and feature versioning to track changes to code.

Development, test, and production environments

Consistent branding with a corporate style guide makes for more cohesive-looking sites and easier development. Store approved themes in the theme gallery for consistency so that users will know when they visit the site that they are in the right place.

SharePoint 2013 includes a new feature to use for branding, Design Manager. By using Design Manager, you can create a visual design for your website with whatever web design tool or HTML editor you prefer and then upload that design into SharePoint. Design Manager is the central hub and interface where you manage all aspects of a custom design.

Creating the visual design of a site often fits into a larger process, in which multiple people or organizations are involved. For a roadmap of the tasks from a larger perspective, see Design and branding in SharePoint 2013.

SharePoint 2013 has a new development model based on apps for SharePoint. Apps for SharePoint are self-contained pieces of functionality that extend the capabilities of a SharePoint website. An app may include SharePoint features such as lists, workflows, and site pages, but it can also use a remote web application and remote data in SharePoint. An app has few or no dependencies on any other software on the device or platform where it is installed, other than what is built into the platform. Apps have no custom code that runs on the SharePoint servers.

The guidance for whether to use apps for SharePoint or SharePoint solutions is to:

  • Design apps for end users

    Apps for SharePoint:

    • Are easy for users (tenant administrators and site owners) to discover and install.
    • Use safe SharePoint extensions.
    • Provide the flexibility to develop future upgrades.
    • Can integrate with cloud-based resources.
    • Are available for both SharePoint Online and on-premises SharePoint sites.
  • Use farm solutions for administrators

    SharePoint solutions:

    • Can access the server-side object-model APIs that are needed to extend SharePoint management, configuration, and security.
    • Can extend Central Administration, Windows PowerShell cmdlets, timer jobs, custom backups, and so on.
    • Are installed by administrators.
    • Can have farm, web application, or site-collection scope.

Go to MSDN to get more information about the new development model, Apps for SharePoint compared with SharePoint solutions, and Deciding between apps for SharePoint and SharePoint solutions.

Set a policy for using apps for SharePoint in your organization. Can users purchase and download apps? How do you make your organization’s apps available? How do you tell if they’re being used?

  • SharePoint Store   Determine whether users can purchase or download apps from the SharePoint Store.
  • App Catalog   Make specific apps for SharePoint available to your users by adding them to the App Catalog.
  • App requests   Configure app requests to control which apps are purchased and how many licenses are available.
  • Monitor apps   Monitor specific apps in SharePoint Server 2013 to check for errors and to track usage.


How to: Modify a Project System So That Projects Load in Multiple Versions of Visual Studio


In Visual Studio 2013, you can prevent a project that was created in your custom project system from loading in an earlier version of Visual Studio.

You can also enable that project to identify itself to a later version in case the project requires repair, conversion, or deprecation.


Marking a Project as Incompatible


You can mark a project as incompatible with earlier versions of Visual Studio. For example, suppose you create in Visual Studio 2013 a project that uses a .NET Framework 4.5 feature. Because this project can’t be built in Visual Studio 2010 with SP1, you can mark it as incompatible with Visual Studio 2010 with SP1 to prevent that version from trying to load it.

 

The component that adds the incompatible feature is responsible for marking the project as incompatible. The component must have access to the IVsHierarchy interface that represents the projects of interest.

To mark a project as incompatible

  1. In the component, get an IVsAppCompat interface from the global service SVsSolution.

    For more information, see SVsSolution.

  2. In the component, call IVsAppCompat.AskForUserConsentToBreakAssetCompat, and pass it an array of IVsHierarchy interfaces that represent the projects of interest.

    This method has the following signature:

    HRESULT AskForUserConsentToBreakAssetCompat([in] SAFEARRAY(IVsHierarchy*) sarrProjectHierarchies) 
    

    If you implement this code, a project compatibility dialog box will appear. The dialog box asks the user for permission to mark all specified projects as incompatible. If the user agrees, AskForUserConsentToBreakAssetCompat returns S_OK; otherwise, it returns OLE_E_PROMPTSAVECANCELLED.

    Caution:
    In most common scenarios, the IVsHierarchy array will contain only one item.

     

  3. If AskForUserConsentToBreakAssetCompat returns S_OK, the component makes or accepts the changes that break compatibility.
  4. In your component, call the IVsAppCompat.BreakAssetCompatibility method for each project that you want to mark as incompatible. The component can set the lpszMinimumVersion parameter to a specific minimum version instead of having Visual Studio look up the current version string in the registry.

    This approach minimizes the risk of the component inadvertently setting a higher value in the future, based on whatever is in the registry at that time. If that higher value were set, Visual Studio 2013 couldn’t open such a project. (A minimal sketch of steps 1 through 4 appears after this list.)

    This method has the following signature:

    HRESULT BreakAssetCompatibility([in] IVsHierarchy * pProjHier, [in] LPCOLESTR lpszMinimumVersion);
    

    If the component sets lpszMinimumVersion to NULL, the BreakAssetCompatibility method calls the IVsAppCompat.GetCurrentDesignTimeCompatVersion method to obtain a string that represents the current version of Visual Studio.

  5. The GetCurrentDesignTimeCompatVersion method has the following signature:

    HRESULT GetCurrentDesignTimeCompatVersion([out] BSTR * pbstrCurrentDesignTimeCompatVersion);

  6. The BreakAssetCompatibility method then calls the IVsHierarchy.SetProperty method to set the root VSHPROPID_MinimumDesignTimeCompatVersion property to the value of the version string that you obtained in the previous step.
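
Putting steps 1 through 4 together in a managed component would look roughly like the following sketch. It assumes the interop marshaling in Microsoft.VisualStudio.Shell.Interop.11.0.dll (a PreserveSig-style int return for AskForUserConsentToBreakAssetCompat and a managed array for the SAFEARRAY); verify the exact managed signatures against the interop assembly. The “12.0” minimum version (Visual Studio 2013) is an illustrative value:

using System;
using Microsoft.VisualStudio;
using Microsoft.VisualStudio.Shell.Interop;

void MarkProjectIncompatible(IServiceProvider serviceProvider, IVsHierarchy project)
{
    // Step 1: get IVsAppCompat from the global SVsSolution service.
    var appCompat = (IVsAppCompat)serviceProvider.GetService(typeof(SVsSolution));

    // Step 2: ask the user for consent to mark the projects as incompatible.
    var projects = new IVsHierarchy[] { project };
    int hr = appCompat.AskForUserConsentToBreakAssetCompat(projects);

    if (hr == VSConstants.S_OK)
    {
        // Step 3: make (or accept) the change that breaks compatibility here.

        // Step 4: mark the project; passing an explicit minimum version
        // instead of NULL avoids picking up a higher registry value later.
        appCompat.BreakAssetCompatibility(project, "12.0");
    }
}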

 

 Important
You must implement the VSHPROPID_MinimumDesignTimeCompatVersion property to mark a project as compatible or incompatible. For example, if the project system uses an MSBuild project file, add to the project file a <MinimumVisualStudioVersion> build property that has a value equal to the corresponding VSHPROPID_MinimumDesignTimeCompatVersion property value.
A project that is incompatible with the current version of Visual Studio must be kept from loading. Furthermore, a project that is incompatible can’t be upgraded or repaired. Therefore, a project must be checked for compatibility twice: first, when it is being considered for upgrade or repair, and second, before it is loaded.

To enable the detection of project incompatibility, you must implement the UpgradeProject_CheckOnly and CreateProject methods. If a project is incompatible, UpgradeProject_CheckOnly must return the success code VS_S_INCOMPATIBLEPROJECT, and CreateProject must return the error code VS_E_INCOMPATIBLEPROJECT. For flavored projects, you must implement IVsProjectFlavorUpgradeViaFactory2.UpgradeProjectFlavor_CheckOnly instead of IVsProjectUpgradeViaFactory4.UpgradeProject_CheckOnly.
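
The decision the check has to make reduces to comparing the project’s declared minimum version with the version of Visual Studio doing the checking. The following self-contained sketch shows only that logic, leaving out the interop plumbing; a real UpgradeProject_CheckOnly would map a true result to VS_S_INCOMPATIBLEPROJECT, and CreateProject would map it to VS_E_INCOMPATIBLEPROJECT:

using System;
using System.Xml.Linq;

static class ProjectCompatibility
{
    // Returns true when the project declares a <MinimumVisualStudioVersion>
    // newer than the IDE that is checking it.
    public static bool IsIncompatibleWith(Version ideVersion, string projectFilePath)
    {
        XDocument project = XDocument.Load(projectFilePath);
        XNamespace msb = "http://schemas.microsoft.com/developer/msbuild/2003";

        foreach (var property in project.Descendants(msb + "MinimumVisualStudioVersion"))
        {
            Version minimum;
            if (Version.TryParse(property.Value, out minimum) && minimum > ideVersion)
            {
                return true; // the project requires a newer Visual Studio
            }
        }
        return false; // no minimum declared, or this IDE is new enough
    }
}

// Example: Visual Studio 2010 with SP1 (version 10.0) checking a project
// marked with <MinimumVisualStudioVersion>11.0</MinimumVisualStudioVersion>
// reports it as incompatible.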

A project system is referred to as flavored if it has a web, Office (VSTO), Silverlight, or other project type built on top of it. Older project systems that already implement IVsProjectUpgradeViaFactory.UpgradeProject_CheckOnly and flavored project systems that already implement IVsProjectFlavorUpgradeViaFactory.UpgradeProjectFlavor_CheckOnly continue to be supported. The older version of IVsProjectUpgradeViaFactory.UpgradeProject_CheckOnly has the following implementation signature:

IVsProjectUpgradeViaFactory::UpgradeProject_CheckOnly(
    /* [in] */ BSTR bstrFileName,
    /* [in] */ IVsUpgradeLogger *pLogger,
    /* [out] */ BOOL *pUpgradeRequired,
    /* [out] */ GUID *pguidNewProjectFactory,
    /* [out] */ VSPUVF_FLAGS *pUpgradeProjectCapabilityFlags
)

If this method sets pUpgradeRequired to TRUE and returns S_OK, the result is treated as “Upgrade,” as though the method had set an upgrade flag to VSPUVF_PROJECT_ONEWAYUPGRADE, which is described later in this topic. The following return values are supported by this older method, but only when pUpgradeRequired is set to TRUE:

  1. VS_S_PROJECT_SAFEREPAIRREQUIRED. Together with pUpgradeRequired set to TRUE, this return value is equivalent to VSPUVF_PROJECT_SAFEREPAIR, which is described later in this topic.
  2. VS_S_PROJECT_UNSAFEREPAIRREQUIRED. Together with pUpgradeRequired set to TRUE, this return value is equivalent to VSPUVF_PROJECT_UNSAFEREPAIR, which is described later in this topic.
  3. VS_S_PROJECT_ONEWAYUPGRADEREQUIRED. Together with pUpgradeRequired set to TRUE, this return value is equivalent to VSPUVF_PROJECT_ONEWAYUPGRADE, which is described later in this topic.

The new implementations in IVsProjectUpgradeViaFactory4 and IVsProjectFlavorUpgradeViaFactory2 enable specifying the migration type more precisely.

Note:
You can cache the result of the compatibility check performed by the UpgradeProject_CheckOnly method so that it can also be used by the subsequent call to CreateProject.

For example, if the UpgradeProject_CheckOnly and CreateProject methods that are written for a Visual Studio 2010 with SP1 project system examine a project file and find that the <MinimumVisualStudioVersion> build property is “11.0”, Visual Studio 2010 with SP1 won’t load the project, and Solution Explorer will mark it as incompatible.

By using Visual Studio 2013, you can modify most projects that were created in Visual Studio 2010 with SP1 to work in both that version and Visual Studio 2013.

Before a project is loaded, Visual Studio calls the UpgradeProject_CheckOnly method to determine whether the project can be upgraded. If the project can be upgraded, the UpgradeProject_CheckOnly method sets a flag that causes a later call to the UpgradeProject method to upgrade the project. Because incompatible projects can’t be upgraded, UpgradeProject_CheckOnly must first check for project compatibility, as described in the earlier section.

You, as the author of a project system, implement UpgradeProject_CheckOnly (from the IVsProjectUpgradeViaFactory4 interface) to provide users of your project system with an upgrade check.


When users open a project, this method is called to determine whether a project must be repaired before it is loaded. The possible upgrade requirements are enumerated in VSPUVF_REPAIRFLAGS, and they include the following possibilities:

  1. VSPUVF_PROJECT_NOREPAIR: Requires no repair.
  2. VSPUVF_PROJECT_SAFEREPAIR: Makes the project compatible with Visual Studio 2010 with SP1, without the issues that you might have encountered with previous versions of the product.
  3. VSPUVF_PROJECT_UNSAFEREPAIR: Makes the project compatible with Visual Studio 2010 with SP1, but with some risk of the issues that you might have encountered with previous versions of the product. For example, the project won’t be compatible if it depends on different SDK versions in Visual Studio 2013 and Visual Studio 2010 with SP1.
  4. VSPUVF_PROJECT_ONEWAYUPGRADE: Makes the project incompatible with Visual Studio 2010 with SP1.
  5. VSPUVF_PROJECT_INCOMPATIBLE: Indicates that Visual Studio 2013 doesn’t support this project.
  6. VSPUVF_PROJECT_DEPRECATED: Indicates that this project is no longer supported.
Note:
To avoid confusion, don’t combine upgrade flags when you set them. For example, don’t create an ambiguous upgrade status such as VSPUVF_PROJECT_SAFEREPAIR | VSPUVF_PROJECT_DEPRECATED.

 

 

Project flavors may implement the UpgradeProjectFlavor_CheckOnly function from the IVsProjectFlavorUpgradeViaFactory2 interface. For this function to work, the IVsProjectUpgradeViaFactory4.UpgradeProject_CheckOnly implementation mentioned earlier must call it.

This call is already implemented in the Visual Basic and C# base project system. The function enables project flavors to determine the upgrade requirements of a project in addition to what the base project system has determined. The compatibility dialog box shows the more severe of the two requirements.

When Visual Studio performs an upgrade check, it provides a logger instead of a NULL value as in previous versions of Visual Studio. The logger enables project systems and flavors to provide additional information that can help you understand the nature of the changes that are needed to make your older projects load properly.

For managed implementations, the project upgrade interfaces are available in the Microsoft.VisualStudio.Shell.Interop.11.0.dll interop assembly.

The UpgradeProject method can determine whether the changes it makes would prevent the project from loading in a previous version of Visual Studio. If so, the method marks the project as incompatible.

To understand how a project is marked as incompatible, see Marking a Project as Incompatible earlier in this topic. We recommend that, after this dialog box appears, you mark the project as incompatible by calling the IVsAppCompat.BreakAssetCompatibility method directly, instead of first calling IVsAppCompat.AskForUserConsentToBreakAssetCompat, so that the dialog box doesn’t appear twice.

Here is an example to help summarize the compatibility user experience. If a project was created in Visual Studio 2010 with SP1, that project is opened in Visual Studio 2012, and Visual Studio 2012 determines that an upgrade is required, Visual Studio displays a dialog box to ask the user for permission to make the changes.

If the user agrees, the project is modified and then loaded. If the project was created by using Visual Studio 2005 or Visual Studio 2008, the solution file will also be upgraded. If the solution is then closed and reopened in Visual Studio 2010 with SP1, the one-way-upgraded project will be incompatible and not loaded (but will load in Visual Studio 2012).

 

If the project had only required a repair (instead of an upgrade), the repaired project will still open in Visual Studio 2010 with SP1, Visual Studio 2012, and Visual Studio 2013.

The call to IVsProjectUpgradeViaFactory::UpgradeProject contains an IVsUpgradeLogger logger, which project systems and flavors should use to provide detailed upgrade tracing for troubleshooting. If a warning or an error is logged, Visual Studio shows the upgrade report. When you write to the upgrade logger, consider the following guidelines:

  • Visual Studio will call Flush after all projects have finished upgrading. Don’t call it in your project system.
  • The LogMessage function has the following ErrorLevels (a short usage sketch follows this list):
    • 0 is for any information that you’d like to trace.
    • 1 is for a warning.
    • 2 is for an error.
    • 3 is for the report formatter. When your project is upgraded, log the word “Converted” once, and don’t localize the word.
  • If a project doesn’t require any repair or upgrade, Visual Studio will generate the log file only if the project system logged a warning or an error during the UpgradeProject_CheckOnly or UpgradeProjectFlavor_CheckOnly methods.
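
Here is a short sketch of those guidelines. It assumes the common managed shape of IVsUpgradeLogger.LogMessage(level, project, source, description); verify the exact signature in the interop assembly:

using Microsoft.VisualStudio.Shell.Interop;

static void LogUpgradeOutcome(IVsUpgradeLogger logger, string projectName, string projectFile)
{
    // ErrorLevel 0: informational trace.
    logger.LogMessage(0, projectName, projectFile, "Checking project compatibility.");

    // ErrorLevel 1 (warning) or 2 (error): either causes Visual Studio to
    // show the upgrade report.
    logger.LogMessage(1, projectName, projectFile, "Project references an older SDK.");

    // ErrorLevel 3: report formatter. Log "Converted" exactly once when the
    // project is upgraded, and don't localize the word.
    logger.LogMessage(3, projectName, projectFile, "Converted");

    // Don't call Flush(); Visual Studio flushes after all projects finish.
}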

Microsoft Site Templates Upgraded and Now Available

 

One of the main goals of the application templates is to demonstrate the application-building power in SharePoint and to provide a potential starting point for larger, more robust applications. While these templates are fully functional and usable out of the box, I’ll be happy to reply to your comments and support you as needed.

 

Note: these templates were collected from several resources, and no source code is available for them.

All templates are compatible with SharePoint Server 2010 and SharePoint Foundation 2010.

Case Management

The Case Management application template helps case managers track the status and tasks required to complete their work. When a case is created, standard tasks and documents are created which are modified based on the work each case manager has completed.

Clinical Trial Initiation and Management

For those who work in Academic Medical Centers, the Clinical Trial Initiation and Management application template helps teams manage the process of tracking clinical trial protocols, objective setting, subject selection and budget activities.

Employee Activities Site
The Employee Activities Site application template helps departments, such as HR and Marketing, manage the creation and attendance of events for employees.

Employee Training Scheduling and Materials

The Employee Training Scheduling and Materials application template helps Instructors add new courses and organize course materials. Employees use the site to schedule attendance at a course, track courses they’ve attended and to provide feedback.


Absence Request and Vacation Schedule Management

The Absence Request and Vacation Schedule Management application template helps provider departments manage requests for out-of-office days and provides dashboards showing which users are signed up for a set of responsibilities.

Event Planning

The Event Planning application template helps teams organize events efficiently through the use of online registration, schedules, communication, and feedback.

Discussion Database

The Discussion Database application template provides a location where team members can create and reply to discussion topics.

Team Work Site

The Team Work Site application template provides a place where clinical and business teams can upload background documents, track scheduled calendar events, and submit action items that result from team meetings.

Document Library and Review

The Document Library and Review application template helps people to manage the review cycle common to processes like publication, knowledge management and project plan development.

Knowledgebase

The Knowledgebase application template helps teams manage the information that resides within their organization. The template enables team members to upload or create documents using web-based tools and tag them with relevant identifying information.

Policies and Procedures Solution Accelerator

The Policies and Procedures Solution Accelerator helps healthcare organizations create, maintain, publish, and easily access policy and procedure information. It also provides the ability to upload documents, maintain a version history, and manage tasks.

Board of Directors

The Board of Directors application template provides a single location for an external group of members to store and locate common documents such as quarterly reviews, shareholder meeting notes and annual strategy documents.

Business Performance Reporting

The Business Performance Reporting application template helps health organization managers track the satisfaction of internal customers/patients through a combination of surveys and discussions.

Request for Proposal

The Request for Proposal application template helps manage the process of creating and releasing an initial RFP, collecting submissions of proposals and formally accepting the selected proposal from amongst those submitted.

Compliance Process Support Site

The Compliance Process Support Site application template helps both teams and executive sponsors to manage compliance implementation endeavors, such as HIPAA.

Expense Reimbursement and Approval

The Expense Reimbursement and Approval application template helps manage elements of the expense approval process, including creation and approval. Users can monitor the status of their reimbursement request through a filtered view listing.

Scorecards Solution Accelerator

The Scorecards solution accelerator acts as a template for configuring a management dashboard to track organizational metrics. It contains four example dashboards ranging from a primary care practice to a healthcare organization’s CEO dashboard.

Call Center
The Call Center application template helps departments manage the process of handling customer service requests. The application template helps teams manage service requests from issue identification to cause analysis and resolution.

Help Desk
The Help Desk application template helps departments manage the process of handling service requests. Team members use the application template to identify a service request, manage identification of the root cause and track solution status.

Physical Asset Tracking and Management

The Physical Asset Tracking and Management application template helps departments such as Facilities, BioMedical, and Surgery manage requests and track physical assets.

Inventory Tracking
The Inventory Tracking application template helps organizations track elements associated with inventory, including the creation of inventory. It notifies users when each part reaches its reorder quantity and helps manage customer and supplier information.

Cafeteria Menu Management

The Cafeteria Menu Management application template helps hospital Food & Nutrition staff easily communicate daily menu choices to hospital staff and visitors. It allows staff to develop/schedule menus and provide related nutritional information.

Budgeting and Tracking Multiple Projects

The Budgeting and Tracking Multiple Projects application template helps project teams track and budget multiple, interrelated sets of activities. It provides management tools such as assignment of new tasks, Gantt charts, and common status designators.

Change Request Management
The Change Request Management application template helps users track risks associated with a design change. Team members can submit a change request, notifying stakeholders of the risks involved with the change.

IT Team Workspace

The IT Team Workspace application template helps teams manage the development, deployment and support of software projects. It also includes help desk functionality, allowing team members to guide service requests from initiation to resolution.

Project Tracking Workspace

The Project Tracking Workspace application template helps small team projects manage project information in a single location. The application template provides a place where a team can list and view project issues and tasks.

 

 

 

How To : Set up Git on your dev machine (configure, create, clone, add)

Applies to: Visual Studio 2013

When you start using Visual Studio with Git, choose the way that works best for you and the kind of project you’re working on. For example, you can start an experimental solo effort in a new or an existing local repository and continue developing there as long as you want.

Or you can join a collaborative effort in a remote Git repository, hosted either in Team Foundation Server (TFS) or on another service.


What do you want to do?

Start from a local repository


You can create a local repository on your dev machine—whether or not you have a network connection—and start developing right away: coding, committing, branching, and merging code. When you’re ready to collaborate with your team, you can publish one or more branches from your local repository into a team project.

Create a new solution under local Git version control


You’ve got an idea for a new app, so you want to experiment on your dev machine. In less than a minute you can use Visual Studio with Git to create a new code project under local version control. (And no Internet required!) Create a new code project (Keyboard: Ctrl + Shift + N). We suggest that you put your new project in c:\Users\YourName\Source\Repos\.

New Project Choose Git Source Control

Put an existing solution under local Git version control


You’ve already got an app in progress and you want to start working on it under local Git version control.
Tip:
Before you add the solution to Git version control, we recommend that you first move the solution to the TFS Git default location: c:\Users\YourName\Source\Repos\.
  1. If you have not already done so, open your solution (Keyboard: Ctrl + Shift + O), and then open Solution Explorer (Keyboard: Ctrl + Alt + L).
  2. Add your solution to source control.
  3. On the Choose Source Control dialog box, choose Git.
  4. Now that your repository is created, you are ready to commit your files. Go to the Changes page (Keyboard: Ctrl + 0, G) and commit.

    (If you are prompted to configure your user name and email address, do that now. See Configure Git settings.)

    The commit succeeded

 

You can create an empty local repository and add files later. It’s possible to track your changes to the files whether or not they are part of a solution. Or, if you already have a local repository, just start working with it in Visual Studio. Open the Connect page (Keyboard: Press Ctrl + 0, C).

To create an empty local repository, choose New. To open a local repository that already exists on your dev machine, choose Add.

Specify the local path and then choose Create or Add.

Publish your local repository into TFS


When you are ready to share your code and collaborate with your teammates, publish your local repository into TFS.

  1. Make sure you have committed all your changes in the local repository. See Manage and commit your changes.
  2. If you haven’t already done so, create a new team project (choose Git version control) or create a new Git repository in an existing Git team project.
  3. From the Connect page (Keyboard: Ctrl + 0, C), connect to the empty Git repository and publish the local repository to it.

 

Your friends have invited you to work with them on a new project. Or maybe you are setting up a new project or a new dev machine. You can use Visual Studio and Git to collaborate on TFS (on-premises or in the cloud), on CodePlex, or on a third-party service such as GitHub or Bitbucket.

 

If you haven’t already done so, go ahead and create or get access to a Git team project. From Visual Studio: go to the Team Explorer Connect page (Keyboard: Press Ctrl + 0, C) and then connect to the team project.

(If the team project you want to open is not listed, choose Select Team Projects and then connect to the team project.)

From the web: Open a team project from its home page in your web browser (Keyboard: Ctrl + 0, A).

After you connect to the Git team project, if you have not already done so, you must clone it to your dev machine before you can work in it.

Prompt to clone the remote repository

Just specify the local path and choose Clone.

Clone a remote Git repository from a third-party service


Does your team have some code in GitHub or another service such as CodePlex or Bitbucket? To start working in Visual Studio, clone the code to your dev machine.

Note:
You can use Visual Studio’s Git capabilities with services other than TFS. However, if you use these repositories, you will not be able to use TFS features such as project planning and tracking and Team Foundation Build.

 

To customize your Git settings, you must be connected to a local or remote Git repository. Open the Git Settings page.

  • Apply global settings   Apply global Git settings to control aspects of how Git functions for the current user on the dev machine. For example, you can specify how you identify yourself on the changes you commit.
  • Apply repository settings   Apply settings to control how Git functions in each individual local repository on your dev machine. For example, you can fine-tune how the system blocks clutter from entering your user experience and repository.
  • Apply more settings   Visual Studio respects all Git settings but provides you with control over only a few of them. Use the Git command prompt to customize all Git settings.

 

User Name and Email Address: Git associates each commit you create with your name and email address. When you start using Visual Studio with Git on your dev machine, if you connect to a Git team project first, Visual Studio fills in your name and email address for you.

Default Repository Location: Specify the default root directory where you want to create or clone new local Git repositories.

Author images: Use images to more easily see the author of each commit.

  • If your Git repo remote origin is in a TFS Git team project, team members can specify their images in their TFS profiles. How? See tips below.
  • If your Git repo remote origin is in a non-TFS Git service (such as CodePlex, GitHub, or Bitbucket), select Enable download of author images from 3rd party source, and then ask team members to set up Gravatar accounts for their email addresses.
Note:
Enable download of author images from 3rd party source also works for TFS Git team projects in cases where the author has not supplied a profile image.

An example of how author images enhance the collaborative experience:

Git author image examples: branches and history

 

Adding Git repository settings files: If your repository does not have settings files, you should probably use Visual Studio to add some default files that apply the most typically useful settings. You’ll avoid distraction and potential clutter in your repository from non-source files, such as locally built binaries.

.gitignore file: See Use the Git ignore file to avoid file clutter in your work and in your repository.
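
As a purely illustrative starting point (not the file Visual Studio generates), a .gitignore for a Visual Studio solution might begin with entries like these:

# Build output
bin/
obj/

# Per-user IDE state
*.user
*.suo

# Packages restored at build time
packages/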

.gitattributes file: To specify options such as how the system handles line breaks, specify a .gitattributes file. See Customizing Git – Git Attributes.

 

Commit your repository settings files: In most cases you should commit and push these files so that everyone else on your team uses the same repository settings on their dev machines.

Committing settings file changes

Apply more Git settings


You can specify three kinds of Git settings, listed in order of supersedence:

  • Repository settings apply to the work done in the local repository.
  • Global settings apply to the work done by the current user on the dev machine.
  • System settings apply to all work done on the client dev machine. (Visual Studio respects these settings, but does not expose them.)

If you need to modify system settings, or if you prefer the command prompt, then modify your Git settings from there. See Work from the Git command prompt, Customizing Git – Git Configuration, and git-config command.
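
For example, the three scopes map to the --local, --global, and --system options of the git config command (the values shown are illustrative):

git config --local core.autocrlf true          # repository: this repo only
git config --global user.name "Your Name"      # global: current user on this machine
git config --global user.email you@example.com
git config --system credential.helper wincred  # system: all users on this machine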

Q & A


Q: I’m really new to all this. How can I get more help?


A: Follow a step-by-step walkthrough to get started using Git to work locally on a new project and then to begin collaborating with a team on Visual Studio Online.
Q: What folder path should I use for my repository?

A: In most cases, it’s best to use a short, understandable folder path. For example: C:\Users\YourName\Source\Repos\FabrikamGit\SolutionName\.

Some tips on effective folder names:

  • Keep all folder, sub-folder, and file names short to simplify your work and avoid potential long-path issues that can occur with some types of code projects.
  • Avoid whitespace if you want to make command-line operations a little easier to perform.
Q: How do I add my profile image in TFS?

A: If your Git repo remote origin is in a TFS Git team project, you can specify your image in your TFS profile from your web browser (Keyboard: Ctrl + 0, A). On the Home page, choose Web Access, and then choose My Profile on the Account menu.

Q: Can a contributor author a commit under someone else’s name?

A: Yes, any contributor to your team project can claim any user name and any email address they want when authoring a commit. However, TFS does authenticate who pushes the commit. To see who pushed a commit, open your team project in your web browser (Keyboard: Ctrl + 0, A). Open the commit you want to examine from the Commits section, and then expand the commit details.

Commit “Pushed by” field

A look at : ALM and Lab Environments

What is a lab environment?

A lab environment is a collection of computers that are managed as a single unit, and on which you deploy the system under test along with test software. Here is a typical configuration of machines in a lab environment:


 

Typical lab environment configuration

This one is set up for automated tests of an ice cream vending service. The software product itself consists of a web service that runs on Internet Information Services (IIS) and a database that runs on a separate machine. The tests drive a web browser on a client machine.

With a lab environment, you can run a build-deploy-test workflow in which you can automatically build your system, deploy its components to the appropriate machines in the environment, run the tests, and collect test data. (The fully automated version of this is described in Chapter 5, “Automating System Tests.”)

The workflow is controlled by a test controller with the help of test agents installed on each test machine. The test controller runs on a separate computer.

Now you might ask why you need lab environments, since you could deploy your system and tests to any machines you choose.

Well, you could, but lab environments make several things easier:

  • You can set up automated build-deploy-test workflows. The scripts in the workflow use the lab role names of the machines, such as “Web Client,” so that they are independent of the domain names of the computers.
  • The results of tests can be shown on charts that relate them to requirements.
  • Lab Manager automatically installs test agents on each machine, enabling test data to be collected. Lab Manager manages the test settings of the virtual environment, which define what data to collect.
  • You can view the consoles of the machines through a single viewer, switching easily from one machine to the other.
  • Lab environments manage the allocation of machines to tests for reasons that include preventing two team members from mistakenly assigning the same machine to different tests.

Lab environments come in two varieties. A standard lab environment (roughly equivalent to a physical environment in Visual Studio 2010) can be composed of any computers that you have available, such as physical computers or virtual machines running on third-party frameworks.

An SCVMM environment is made up entirely of virtual machines controlled by System Center Virtual Machine Manager (SCVMM). SCVMM environments provide you with several valuable facilities; they allow you to:

  • Create fresh test environments within minutes. You can store a complete environment in a library and deploy running copies of it. For example, you could store an environment of three machines containing a web client, a web server, and a database. Whenever you want to test a system in that configuration, you deploy and run a new instance of it.
  • Take snapshots of the states of the machines. For example, whenever you start a test, you can revert to a snapshot that you took when everything was freshly installed. Also, when you find a bug, you can take a snapshot of the environment for later investigation.
  • Pause and resume all the virtual machines in the environment at the same time.

Standard environments are useful for tests that have to run on real hardware, such as some kinds of performance tests. You can also use them if you haven’t installed SCVMM or Hyper-V, as would be the case if, for example, you already use another virtualization framework. But as you can see, we think there are great benefits to using SCVMM environments.

Stored SCVMM environments

Because you can store them in a library, SCVMM environments help to make your tests repeatable; when you run them for the next build, or when a new release is planned after a six-month break, you can be sure that the tests are running under the same conditions.


 

A stored SCVMM environment

For example, on Fabrikam’s ice cream sales project, the team often wants to deploy and test a new build of the sales system. It has several components that have to be installed on different machines. Of course, the sales system software is a new build each time, but the platform software, such as the operating system, database, and web browser, doesn’t change.

So at the start of the project, the team creates an environment that has the platform software, but no installation of the ice cream system. In addition, each machine has a test agent. The Fabrikam team stores this environment in the library as a template.

Whenever a new build is to be tested, a team member selects the stored platform environment, and chooses Deploy. Lab Manager takes a few minutes to copy and start the environment. Then they only have to install the latest build of the system under test.

While an environment is running, its machines execute on one or more virtualization hosts that have been set up by the system administrator. The stored version from which new copies can be deployed is stored on an SCVMM library server.

Lab management with third-party virtualization frameworks

Some teams have already invested in other virtualization frameworks such as VMware or Citrix XenServer. If that is your situation, the case for switching to Hyper-V and SCVMM might be less clear. But even if you don’t install SCVMM or Hyper-V, you can still use Lab Manager by using standard environments.

With standard environments, you get many of the benefits of lab management, but without the ability to save and quickly set up fresh environments. Instead, you’d have to use your third-party machine manager to set up new machines.

When you assign a machine to a standard environment, Lab Manager will automatically install a test agent and couple it to your test controller. This makes the machine ready for an automatic build-deploy-test workflow and for test data collection. (In Visual Studio 2010, you have to install the test agent manually, but coupling it to the test controller is automatic.)

How to use lab environments

Prerequisites

To enable your team to use lab environments, you first have to set up:

  • Visual Studio Team Foundation Server, with the Lab Manager feature enabled.
  • A test controller, linked to your team project in Team Foundation Server.
  • (Preferable, but not mandatory) System Center Virtual Machine Manager (SCVMM) and Hyper-V.

You only need to set up these things once for the whole team, so we have put the details in the Appendix. If someone else has kindly set up SCVMM, Lab Manager, and a test controller, just continue on here.

Lab center

You manage environments by using Lab Center, which is part of Microsoft Test Manager (MTM). MTM is installed as part of Visual Studio Ultimate or Test Professional. You’ll find it on the Windows Start menu under Visual Studio. If it’s your first time using it, you’ll be asked for the URL of your team project collection. Switch to the Lab Center view (it’s the alternative to Test Center). On the Environments page, you’ll see a list of environments that are in use by your team. Some of them might be marked “in use” by individual team members:

[Figure: Managing environments in Lab Center]

(Use the Home button if you want to switch to another team project.)

More information is available from the MSDN website topic: Getting Started with Lab Management.

Connecting to a lab environment

If your team has been using lab environments for a while, then when you open Lab Center, you might already see some environments that are available to use. Pick an environment with a status of Ready, without an In Use flag, and that looks as if it has the characteristics you want, which ought to be indicated by its name. Select it and choose Connect.

The Environment View opens. From here you can log into any of the machines in the environment.

[Figure: The environment view]

Typically, a deployed environment will have a recent build of your system already installed. If you’re sure that it’s free for you to use, you could decide to run some tests on it. However, make sure you know your team’s conventions; for example, if the environment’s name contains the name of a team member, ask if it is ok to use.

Using a deployed (running) environment

Log in. Choose the Connect button to open a console view of the environment. From there you can log into any of its machines. More about the Connect button can be found on MSDN in the topic How to: Connect to a Virtual Environment.

Reserve the environment. You can mark it as In Use to discourage other team members from interfering with it. This doesn't prevent access by others, but simply sets a flag in Lab Center.

Revert a virtual environment to a clean snapshot. In the environment viewer, look at the Snapshots tab. If the Snapshots tab isn’t available, then this is a standard environment composed of existing machines. You might need to make sure that the latest version of your system is installed.

In a virtual environment, the team member who created the environment should have made a snapshot immediately after installing the system under test. Select the snapshot and restore the environment to that state. If there isn’t a snapshot available, that’s (hopefully) because the previous user has already restored it to the clean state. Again, you might need to check the conventions of your team.

Explore and test your system. Now you can start testing your system, which is the topic of the next chapter.

Restore the snapshot when you're done with a virtual environment, returning it to the newly installed state. This makes it easier for other team members to use. This option isn't available for standard environments, so you might want to clean up any test materials that you have created.

Clear the "in use" flag when you're done. Typically, a team will keep a number of running environments that contain a recent build, and share them. Reusing the environment and restoring it to its initial snapshot is the quickest way of assigning an environment for a test run.

Deploying an environment

If there is no running environment that is suitable for what you want to do, you can look for one in the library. The library contains a selection of stored virtual environments that have previously been created by your colleagues. You can learn more from the topic: Using a Virtual Lab for Your Application Lifecycle, on MSDN.

[Figure: The environment library in MTM Lab Center]

(If the library isn’t available, that might mean that your team has not set Lab Manager to use SCVMM. But you can still create standard environments, which are made up of computers not controlled by SCVMM. Skip to the section about them near the end of this chapter. Alternatively, you could set up SCVMM as we describe in the Appendix.)

Environments stored in the library are templates; you can’t connect to one because its virtual machines aren’t running. Instead, you must first deploy it. Deploying copies the virtual machines from the library to the virtual machine host, and then starts them.

In MTM, in Lab Center, choose Deploy. Choose an environment from the list. They should have names that help you decide which one you want.

After you have picked an environment, Lab Center takes a few minutes to copy the virtual machines and to make sure that the test agents (which help deploy software and collect data) are running.

Eventually the environment is listed under the Lab tab as Ready (or Running in Visual Studio 2010). Then you’re all set to use it. If it shows up as Not Ready, then try the Repair command. This reinstalls test agents and reconnects them to your test controller. In most cases that fixes it.

Install your system

Typically, stored environments contain installations of the base platform: operating systems, databases, and so on. They don’t usually include an installation of the system under test. Your next step is therefore to install the latest build of your system.

To help choose a good recent build, open the build status report in your web browser. The URL is similar to http://contoso-tfs:8080/tfs/web. Click on Builds. You might have to set the date and other filters. The quality and location of each build are summarized.

In Lab Center, under the Lab tab, select the running environment and choose Connect. Log into the environment’s machines.

Use the installer (typically an .msi file) that is generated by the build process. The location can be obtained from the build status reports. Pick an installer that was generated from the Debug build configuration. You need to put each component on the right machine. Each machine has a role name such as Client, Web Server, or Database, to help you make the right choice.
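If you prefer not to click through each installer, you can run it silently from the build drop instead. Here's a minimal PowerShell sketch; the drop path and MSI name are hypothetical placeholders for whatever your build actually produces:

# Silently install one component of the system under test
# (the drop path and MSI name below are hypothetical examples)
$msi = '\\buildserver\drops\Sales_20120614.2\WebServer\SalesWeb.msi'
Start-Process msiexec.exe -ArgumentList "/i `"$msi`" /qn" -Wait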

Later we’ll discuss how you can write scripts to automate the deployment of the system under test.

Review the name you gave to the environment to make sure it reflects the system and build you installed.

Take a snapshot of the environment

Create a snapshot of the environment. This will enable subsequent users to get the environment back to its nice clean state. Do this immediately after you have installed your system, and before you run any tests, other than perhaps a quick smoke test to make sure the installation is OK.

You can create a snapshot either from the Snapshots tab in Environment Viewer, or from the context menu of the environment in the Lab listing.

Use it

After you’ve taken a snapshot, you can start using it as we described earlier. When you’ve finished testing, you can revert to the snapshot.

Delete it (eventually)

Delete an environment when the build it uses is superseded.

Creating a new virtual environment

What if there are no environments in the stored library, or none have the mix of machines you need? Then you’ll have to create one. And if you’re feeling generous, you could add it to the library for other team members to use.

You can either store an environment directly in the library, or you can create it as a running environment and then store it in the library. Storing directly is preferable if you don’t need to configure the constituent virtual machines in any way.

To add a new environment directly to the library, open MTM; choose Lab Center, Library, Environments, and then the New command.

[Figure: Creating a new environment in the library]

Alternatively, to create a new running environment that you can store later, choose Lab Center, Lab, and then New. In the wizard, choose SCVMM Environment. (In Visual Studio 2010, the New command has a submenu, New Virtual Environment.)

With either method, you continue through the wizard to choose virtual machines from the library. If your team has been working for a while, there should be a good stock of virtual machines. Their names should indicate what software is installed on them.

Choose library machines that have type Template if they are available. Unlike a plain virtual machine, you can deploy more than one copy of a template. This is because when a template VM is deployed, it gets a new ID so that there are no naming conflicts on your network. Using templates to create a stored environment allows more than one copy of it to be deployed at a time.

[Figure: Creating a new virtual environment]

You have to name each machine uniquely within your new lab environment. Notice that the name of the computer in the environment is not automatically the same as its name in the domain or workgroup.

You also have to assign a role name to each machine, such as Desktop Client or Web Server. More than one machine can have the same role name. There is a predefined set to choose from, but you can also invent your own role names. These roles are used to help deploy the correct software to each machine. If you automate the deployment process, you will use these names; if you deploy manually, they will just help you remember which machine you intended for each software component.
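If you do automate deployment later, the role names give you a natural lookup key. Here's a minimal sketch of the idea; the role-to-component mapping is entirely hypothetical:

# Hypothetical mapping from lab role name to the component it should receive
$componentsByRole = @{
    'Web Server'     = 'SalesWeb.msi'
    'Database'       = 'SalesDb.msi'
    'Desktop Client' = 'SalesClient.msi'
}
# A deployment script would look up each machine's role and install the match
$componentsByRole['Web Server']   # returns SalesWeb.msi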

When you complete the wizard, there will be a few minutes’ wait while VMs are copied.

MTM should now show that your environment is in the library, or that it is already deployed as a running environment, depending on what method of creation you chose to begin with. If it’s in the library, you can deploy it as we described before.

After creating an environment, you typically deploy software components and then keep the environment in existence until you want to move to a new build. Different team members might use it, or you might keep it to yourself. You can mark an environment as “In Use” to discourage others from interfering with it while your work is in progress.

Stored and running machines

The lab manager library can store both individual virtual machines and complete environments. There are command buttons for creating new environments, storing them in the library, and for deploying environments from the library. You have to shut down an environment before you can store it.

[Figure: Stored and deployed environments]

Creating and importing virtual machines

You can store individual virtual machines from the test host to the library. Therefore, if your team starts off with a set of virtual machines in the library that include a basic set of platforms—for example, Windows 7 and Windows Server 2008—then you can deploy a machine in an environment, add extra bits, and then store it back in the library.

[Figure: System Center Virtual Machine Manager (SCVMM)]

But how do you create those first virtual machines? For this you need to access SCVMM, on which Lab Manager is based. It’s typically an administrator’s task, so you’ll find those details in the Appendix. Briefly:

  1. You can create a new machine in the SCVMM console and then install an operating system on it, either with your DVD or from your corporate PXE server.
  2. Every test machine needs a copy of the Team Foundation Server Test Agent, which you can get from the Team Foundation Server installation DVD.
  3. Use the SCVMM console to store the VM in the library as a template. This is preferable to storing it as a plain VM.
  4. In Lab Manager, use the Import command on the Library tab in order to make the SCVMM library items visible in the Lab Center library.

[Figure: How environments are managed]

Composed environments

A composed environment is made up of virtual machines that are already running. When you compose an environment from running machines, they are assigned to your environment; when you delete the environment, they are returned to the available pool. You can create a composed environment very quickly because there is no copying of virtual machines.

We recommend composed environments for quick exploratory tests of a recent build. The team should periodically place new machines in the pool on which a recent build is installed. Team members should remember to delete composed environments when they are no longer using them.

[Figure: Composed environments]

In Visual Studio 2012, you make a composed environment the same way you create a virtual environment: by choosing New and then SCVMM environment. In the wizard, you'll see that the list of available machines includes both VM templates and running pool machines. If you want, you can mix pool machines and freshly created VMs in the same environment. For example, you might use new VMs for your system under test, and a pool machine for a database of test data, or a fake of an external system. Because the external system doesn't change, there is no need to keep creating new versions of it.

In Visual Studio 2010, use the New Composed Environment command and choose machines from the list.

Standard environments

Standard environments are made up of existing computers. They can be either physical or virtual machines, or a mixture. They must be domain-joined.

You can create standard environments even if your team hasn’t yet set up SCVMM. For example, if you are already using VMware to run virtual machines and don’t want to switch to Hyper-V and SCVMM, you can use Lab Manager to set up standard environments. You can’t stop, start, or take snapshots of standard environments, but Lab Manager will install test agents on them and you can use them to run a build-deploy-test workflow.

You can also use standard environments when it is important to use a real machine—for example, in performance tests.

To create a standard environment, click New and then choose Standard Environment.

(In Visual Studio 2010, choose New Physical Environment. You must manually install test and lab agents on the computers. These agents can be installed from the Team Foundation Server DVD.)

For an example, see Lab Management walkthrough using Visual Studio 11 Developer Preview Virtual Machine on the Visual Studio Lab Management team blog.

Summary

There’s a lot of pain and overhead in configuring physical boxes to build test environments. The task is made much easier by Visual Studio Lab Manager, particularly if you use virtual environments.

With Lab Manager you can:

  • Manage the allocation of lab machines, grouping them into lab environments.
  • Configure machines for the collection of test data.
  • Rapidly create fresh virtual environments already set up with a base platform of operating system, database, and so on.

Differences between Visual Studio 2010 and Visual Studio 2012

  • System Center Virtual Machine Manager 2012. Lab Management in Visual Studio 2012 works with SCVMM 2012 in addition to SCVMM 2008.
  • Standard environments. Lab Manager in Visual Studio 2012 is easier to use with third-party virtualization frameworks as well as physical computers. It will install test agents if necessary.
  • Test agents. In Visual Studio 2010, you must install test and lab agents on the machines that you want to use in the lab. In Visual Studio 2012, there is only one type of agent, and it is installed automatically by Lab Manager on each of the machines in a lab environment. You can still install the test agent yourself to save time when lab environments are created.
  • Compatibility. Most combinations of 2010 and 2012 RC products work together. For example, you can create environments on Visual Studio Team Foundation Server 2010 using Microsoft Test Manager 2012 RC.


Latest SharePoint 2013 Resources

Introduction


Best practices are, and rightfully so, always a much sought-after topic. There are various kinds of best practices:

 

•Microsoft best practices. In real life, these are the most important ones to know, as most companies implementing SharePoint tend to follow as many of them as they possibly can. Independent consultants doing architecture and code reviews will certainly take a look at these as well. In general, you can safely say that best practices endorsed by Microsoft have an added bonus, and it will be mentioned whenever this is the case.

 
•Best practices. These practices are patterns that have proven themselves over and over again as a way to achieve high-quality solutions, and it's completely irrelevant who proposed them. Often Microsoft best practices will also fall into this category. In real life, these practices should be the most important ones to follow.

 
•Practices. These are just approaches that are reused over and over again, but not necessarily the best ones. Wikis are a great way to discern best practices from practices. It's certainly possible that this page refers to these "practices of the third kind", but hopefully the SharePoint community will eventually filter them out. Therefore, everybody is invited and encouraged to actively participate in the various best practices discussions.
This Wiki page contains an overview of SharePoint 2013 best practices of all kinds, divided into categories.

Performance

This section discusses best practices regarding performance issues.
•http://gallery.technet.microsoft.com/The-SharePoint-Flavored-5b03f323 , the SharePoint Flavored Weblog Reader (SFWR) helps troubleshoot performance problems by analyzing the IIS log files of SharePoint WFEs.
•http://gallery.technet.microsoft.com/office/PressurePoint-Dragon-for-87572ee1 , PressurePoint Dragon for SharePoint 2013 helps execute performance tests.
•http://gallery.technet.microsoft.com/Maxer-for-SharePoint-2013-52208636 , a tool for checking capacity planning limits.
•http://gallery.technet.microsoft.com/Ping-Dragon-for-SharePoint-70fb299e , a command-line tool for pinging SharePoint and getting the response time of a SharePoint page.
•http://gallery.technet.microsoft.com/WinPing-Dragon-for-eefb6dd3 , a WPF client for pinging SharePoint and getting the response time of a SharePoint page.
•http://social.technet.microsoft.com/wiki/contents/articles/16218.sharepoint-2013-best-practices-in-depth-performance-counters.aspx , in-depth info about performance counters relevant to SharePoint 2013 (see the sampling sketch after this list).
•http://technet.microsoft.com/en-us/library/ff758658.aspx , TechNet performance monitoring tips.
•http://www.iis.net/downloads/community/2007/05/wcat-63-(x64) , the Web Capacity Analysis Tool (WCAT) is a lightweight HTTP load generation tool to measure the performance of a web server; used by Microsoft support in various capacity analysis plans.
•http://sharepoint-community.net/profiles/blogs/how-to-improve-speed-on-sharepoint-2013 , improve SharePoint speed by fixing an SSL trust issue.
•http://technet.microsoft.com/en-us/library/cc262813.aspx , Large Lists.
•http://technet.microsoft.com/en-us/library/hh395916.aspx , estimating performance and capacity.
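If you just want to sample a counter quickly without building a full monitoring plan, PowerShell's built-in Get-Counter cmdlet will do it. A minimal sketch; the counter path and sampling intervals are illustrative assumptions, not recommendations from the articles above:

# Sample an ASP.NET counter on a WFE every 5 seconds, 12 times
$samples = Get-Counter -Counter '\ASP.NET Applications(__Total__)\Requests/Sec' -SampleInterval 5 -MaxSamples 12
$samples.CounterSamples | Select-Object Path, CookedValue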

SharePoint Server 2013 Build Numbers

 

Version | Build # | Type | Server Package (KB) | Foundation Package (KB) | Language specific | Notes
Public Beta Preview | 15.0.4128.1014 | Beta | n/a | n/a | yes | Known issues
SPS 2013 RTM | 15.0.4420.1017 | RTM | n/a | n/a | yes | Setup, Install
Dec. 2012 Fix | 15.0.4433.1506 | update | 2752058, 2752001 | n/a | yes | Known Issue
March 2013 | 15.0.4481.1005 | PU | 2767999 | 2768000 | global | New Baseline
April 2013 | 15.0.4505.1002 | CU | – | 2751999 | global | Known Issue
April 2013 | 15.0.4505.1005 | CU | 2726992 | – | global | Known Issue
June 2013 | 15.0.4517.1003 | CU | – | 2817346 | global | Known Issue 1, Known Issue 2
June 2013 | 15.0.4517.1005 | CU | 2817414 | – | global | Known Issue 1, Known Issue 2
August 2013 | 15.0.4535.1000 | CU | 2817616 | 2817517 | global | –
October 2013 | 15.0.4551.1001 | CU | – | 2825674 | global | –
October 2013 | 15.0.4551.1005 | CU | 2825647 | – | global | –
December 2013 | 15.0.4551.1508 | CU | – | 2849961 | global | –
December 2013 | 15.0.4551.1511 | CU | 2850024 | – | global | see KB
Feb. 2014 (skipped) | n/a | – | – | – | – | –
SP1, released Apr. 2014 | 15.0.4569.1000 (15.0.4569.1506) | SP | 2817429 | 2880552 | yes | Re-released SP
SP1, re-released Apr. 2014 | 15.0.4569.1509 (fixed build: 15.0.4571.1502) | SP | 2817439 | 2760625 (fix), 2880551 (current) | yes | Known Issue; Re-released SP
April 2014 | 15.0.4605.1004 | CU | 2878240 | 2863892 | global | Known Issue
MS14-022 | 15.0.4615.1001 | PU | 2952166 | 2952166 | n/a | Security fix
June 2014 | 15.0.4623.1001 | CU | 2881061 | 2881063 | global | n/a

reference: http://blogs.technet.com/b/steve_chen/archive/2013/03/26/3561010.aspx
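To check which build your own farm is running before consulting the table, you can read the farm's build version from PowerShell. A minimal sketch using the standard SharePoint cmdlets:

# Run in the SharePoint 2013 Management Shell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
# Returns the farm build, e.g. 15.0.4569.1000 for SP1
(Get-SPFarm).BuildVersion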

Feature Overview

This section discusses best places to get SharePoint feature overviews.
•http://www.apps4rent.com/sharepoint-2013-features-comparison.html   , nice feature comparison.
•http://technet.microsoft.com/en-us/library/jj819267.aspx   , extensive SharePoint Online overview.
•http://technet.microsoft.com/en-us/library/ff607742(v=office.15).aspx   , deprecated features.
•http://www.andrewconnell.com/blog/archive/2013/01/11/sharepoint-2013-amp-office-365-feature-matrixndashan-easier-way-to.aspx   , matrix overview.
•http://www.rharbridge.com/?page_id=966   , nice overview including SharePoint 2013, 2010, 2007, and Office 365.
•http://www.fpweb.net/sharepoint-hosting/2013/compare-sharepoint-server-standard-enterprise/   , 2013 standard vs enterprise.
•http://www.khamis.net/Blog/Post/275/SharePoint-2013-Standard-vs–Enterprise-vs–Foundation-Feature-Comparison-Matrix  , 2013 standard vs enterprise vs foundation.
•http://blog.blksthl.com/2013/01/14/sharepoint-2013-feature-comparison-chart-all-editions/#SIT   , overview of all 2013 versions.

Capacity Planning
•http://technet.microsoft.com/en-us/library/cc261834.aspx   , excellent planning resource.
•http://technet.microsoft.com/en-us/library/cc263199.aspx   , overview of various technical diagrams.
•http://technet.microsoft.com/en-us/library/jj219628.aspx#HW_Enterprise   , info about scaling search.
•http://technet.microsoft.com/en-us/library/cc262787.aspx   , capacity boundaries.

Installation

This section discusses installation best practices.
•http://social.technet.microsoft.com/wiki/contents/articles/15289.sharepoint-2013-best-practices-creating-a-development-environment.aspx , provides a detailed explanation of how to create a SharePoint 2013 development environment.
•http://technet.microsoft.com/en-us/library/cc262749.aspx   , system requirements overview.
•http://technet.microsoft.com/en-us/library/ee662513.aspx   , provides an overview of the administrative and service accounts you need for a SharePoint 2013 installation.
•http://technet.microsoft.com/en-us/library/cc678863.aspx   , describes SharePoint 2013 administrative and service account permissions for SQL Server, the File System, File Shares, and Registry entries.
•http://social.technet.microsoft.com/wiki/contents/articles/14500.sharepoint-2013-best-practices-service-accounts.aspx , naming conventions and permission overview for service accounts (see the sketch after this list).
•http://www.slideshare.net/michaeltnoel/spcsea-2013-upgrading-to-sharepoint-2013  , a methodical approach to upgrading to SharePoint 2013.
•http://autospinstaller.codeplex.com/   , Automated SharePoint 2010/2013 installation using PowerShell and XML configuration.
•http://autospinstallergui.codeplex.com/   , GUI tool for configuring the AutoSPInstaller configuration XML.
•http://social.technet.microsoft.com/wiki/contents/articles/16343.sharepoint-2013-best-practices-setting-up-a-dev-environment-for-windows-apps-and-sharepoint.aspx , describes how to set up a dev environment needed for creating Windows Apps that leverage SharePoint.
•http://technet.microsoft.com/en-us/library/jj658588.aspx   , installing workflows.
•Install SharePoint 2013 on a single server with SQL Server
•Install SharePoint 2013 on a single server with a built-in database
•Install SharePoint 2013 across multiple servers for a three-tier farm
•Install and configure a virtual environment for SharePoint 2013
•Install or uninstall language packs for SharePoint 2013
•Add web or application servers to farms in SharePoint 2013
•Add a database server to an existing farm in SharePoint 2013
•Remove a server from a farm in SharePoint 2013
•Uninstall SharePoint 2013
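As a small illustration of the service account guidance above, registering a service account as a SharePoint managed account is a one-liner; the account name below is a hypothetical example following the naming convention article:

# Register a service account as a managed account
$cred = Get-Credential -Credential "CONTOSO\sp_services"   # hypothetical account
New-SPManagedAccount -Credential $cred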

Upgrade and Migration

This section discusses how to upgrade to SharePoint 2013 from a previous version.
•http://blogs.msdn.com/b/russmax/archive/2013/04/01/why-sharepoint-2013-cumulative-update-takes-5-hours-to-install.aspx?CommentPosted=true#commentmessage , explains why a SharePoint 2013 cumulative update takes 5 hours to install, and how to cut CU (patch) installation times from 5 hours to 30 minutes.
•http://social.technet.microsoft.com/wiki/contents/articles/15743.sharepoint-2013-best-practices-upgrading-from-sharepoint-2007.aspx discusses best practices for upgrading from SharePoint 2007 to 2013.
•http://social.technet.microsoft.com/wiki/contents/articles/16033.sharepoint-2013-best-practices-migrate-from-sharepoint-foundation-2013-to-sharepoint-server-2013.aspx , upgrade SharePoint Foundation 2013 to SharePoint Server 2013.
•http://technet.microsoft.com/en-us/library/cc262483.aspx   , SharePoint 2010 to 2013.
•http://technet.microsoft.com/en-us/library/cc303436.aspx   , upgrade databases from SharePoint 2010 to 2013 (a database-attach sketch follows this list).
•http://www.google.nl/url?sa=t&rct=j&q=download%20proven%20practices%20for%20upgrading%20or%20migrating%20to%20sharepoint%202013&source=web&cd=1&ved=0CEgQFjAA&url=http%3A%2F%2Feu.avepoint.com%2Fassets%2Fpdf%2Fwhite-papers%2Femea%2FSharePoint-2013-Migration-White-Paper.pdf&ei=L2FRUdPHJoqX1AWy44CgBw&usg=AFQjCNHA6Iuoigex0xyHb-EuPdBDIiLrhw&bvm=bv.44158598,d.d2k   , PDF document containing extensive info about Proven Practices for Upgrading or Migrating to SharePoint 2013.
•http://technet.microsoft.com/en-us/library/ee947141.aspx   , upgrading from SharePoint 2007 or WSS 3.0 to SharePoint 2013.
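Most SharePoint 2010 to 2013 upgrades use the database-attach method covered in the TechNet links above. A minimal sketch of the two core cmdlets; the database, server, and web application names are hypothetical:

# Check a 2010 content database for issues before attaching it to 2013
Test-SPContentDatabase -Name WSS_Content_Intranet -WebApplication http://intranet
# Attach (and thereby upgrade) the content database
Mount-SPContentDatabase -Name WSS_Content_Intranet -DatabaseServer SQL01 -WebApplication http://intranet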

Infrastructure

This section discusses infrastructure best practices.
•http://technet.microsoft.com/en-us/library/cc263199(v=office.15)   , infrastructure diagrams.
•http://social.technet.microsoft.com/wiki/contents/articles/16180.sharepoint-2013-best-practices-dealing-with-geographically-dispersed-locations.aspx , dealing with geographically dispersed locations.

Backup and Recovery
This section deals with best practices for backing up and restoring SharePoint environments.
•http://technet.microsoft.com/en-us/library/ee663490.aspx   , general overview of backup and recovery (see the sketch after this list).
•http://technet.microsoft.com/en-us/library/ee428315.aspx   , back-up solutions for specific parts of SharePoint.
•http://www.slideshare.net/thomasvochten/sharepoint-high-availability-disaster-recovery   , good info about disaster recovery.
•http://technet.microsoft.com/en-us/library/cc748824.aspx   , high availability architectures.
•http://social.technet.microsoft.com/wiki/contents/articles/17195.sharepoint-2013-best-practices-back-up-sharepoint-online.aspx , how to back up SharePoint online?
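For the built-in farm backup covered in the first link, Backup-SPFarm is the PowerShell equivalent of the Central Administration backup page. A minimal sketch; the share path is a hypothetical placeholder and must be writable by the SQL Server and timer service accounts:

# Full farm backup to a UNC share (hypothetical path)
Backup-SPFarm -Directory \\backupserver\spbackups -BackupMethod Full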

Database
•http://technet.microsoft.com/en-us/library/cc678868.aspx   , great resource about SharePoint databases.
•http://technet.microsoft.com/en-us/library/ff851878.aspx   , removing ugly GUIDs from SharePoint database names.
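To see whether the GUID article applies to your farm, you can list the databases whose names carry the telltale GUID suffix. A minimal sketch:

# List SharePoint databases with a GUID in their name
Get-SPDatabase | Where-Object { $_.Name -match '[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}' } | Select-Object Name, Server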

Implementation and Maintenance

This section deals with best practices about implementing SharePoint.
•http://social.technet.microsoft.com/wiki/contents/articles/6575.ten-steps-to-a-successful-sharepoint-implementation-en-us.aspx explains how to implement SharePoint.
•http://technet.microsoft.com/en-us/library/ff851878.aspx   , rename service applications.

Apps

This section deals with best practices regarding SharePoint Apps.
•http://technet.microsoft.com/en-us/library/fp161237(v=office.15).aspx   , great resource for planning Apps.
•http://msdn.microsoft.com/en-us/library/jj163230.aspx  ,  a resource for building apps for SharePoint.
•http://msdn.microsoft.com/en-us/library/jj163264.aspx   , Best practices and design patterns for app license checking.

Every day use
•http://social.technet.microsoft.com/wiki/contents/articles/16166.sharepoint-2013-best-practices-using-folders.aspx , using folders
•http://social.technet.microsoft.com/wiki/contents/articles/17829.sharepoint-2013-going-up-in-the-navigation.aspx , discusses options for navigating up
•http://social.technet.microsoft.com/wiki/contents/articles/17997.sharepoint-2013-best-practice-choosing-between-a-choice-lookup-or-taxonomy-managed-metadata-column.aspx , discusses best practices for choosing between choice, lookup or taxonomy column

Add-ons

This section deals with useful SharePoint add-ons.
•http://www.infragistics.com/products/sharepoint/  , a collection of web parts for an enterprise dashboard.
•http://harmon.ie/Products/Mobile  , an app for iPhone/iPad that enhances mobile access to SharePoint documents.

Development
This section covers best practices targeted towards software developers.
•http://social.technet.microsoft.com/wiki/contents/articles/13373.sharepoint-2013-what-to-do-farm-solution-vs-sandbox-vs-app.aspx , discusses when to use farm solutions, sandbox solutions, or SharePoint Apps.
•http://social.technet.microsoft.com/wiki/contents/articles/13637.sharepoint-2013-best-practices-what-client-api-should-you-choose-when-building-apps.aspx , guidelines to help you pick the correct client API to use with your app.
•http://msdn.microsoft.com/en-us/library/jj164060(v=office.15).aspx   , guidelines to help you pick the correct client API for your SharePoint solution.
•http://social.technet.microsoft.com/wiki/contents/articles/16343.sharepoint-2013-best-practices-setting-up-a-dev-environment-for-windows-apps-and-sharepoint.aspx , describes how to set up a dev environment needed for creating Windows Apps that leverage SharePoint.
•http://social.technet.microsoft.com/wiki/contents/articles/16353.sharepoint-2013-best-practices-working-with-connection-strings-in-auto-hosted-sharepoint-apps.aspx , discusses how to deal with connection strings in auto-hosted apps.

Debugging

This section contains debugging tips for SharePoint.
•Use WireShark to capture traffic on the SharePoint server.
•Use a text differencing tool to compare whether the web.config files on the WFEs are identical (see the sketch after this list).
•Use Fiddler to monitor web traffic using the People Picker. This will provide insight into how to use the People Picker for custom development. Please note: the client People Picker web service interface is located in SP.UI.ApplicationPages.ClientPeoplePickerWebServiceInterface.
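For the web.config tip, you don't strictly need a GUI differencing tool; a few lines of PowerShell over the admin shares will flag differences. The server names and virtual directory path below are hypothetical:

# Compare web.config between two WFEs via their admin shares
$path = 'c$\inetpub\wwwroot\wss\VirtualDirectories\80\web.config'
$a = Get-Content "\\WFE01\$path"
$b = Get-Content "\\WFE02\$path"
Compare-Object $a $b   # no output means the files are identical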

Troubleshooting
•Troubleshooting Office Web Apps
•http://social.technet.microsoft.com/wiki/contents/articles/16640.sharepoint-2013-tips-for-troubleshooting-search-suggestions.aspx , troubleshooting search suggestions.
•http://technet.microsoft.com/en-us/library/jj906556.aspx   , troubleshooting claims authentication.
•http://technet.microsoft.com/en-us/library/dn169566.aspx   , troubleshooting fine grained permissions.
•http://social.technet.microsoft.com/Forums/sharepoint/en-US/02b78299-bc7f-448b-b233-f9cae0da8466/sharepoint-2013-alerts-are-not-firing-any-mails-for-the-normal-alerts-and-search-alerts-can-someone , troubleshooting email alerts.
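Whichever of these scenarios you are chasing, pulling recent high-severity events from the ULS log is often a useful first step. A minimal sketch using the standard Get-SPLogEvent cmdlet:

# Show high-severity ULS events from the last 15 minutes on this server
Get-SPLogEvent -StartTime (Get-Date).AddMinutes(-15) | Where-Object { $_.Level -eq 'High' -or $_.Level -eq 'Unexpected' } | Select-Object Timestamp, Area, Category, Level, Message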

Farms

This section discusses best practices regarding SharePoint 2013 farm topologies.
•Office Web Apps topologies
•How to configure SharePoint Farm
•How to install SharePoint Farm
•Overview of farm virtualization and architectures

Accessibility

This section discusses SharePoint accessibility topics.
•http://office.microsoft.com/en-us/sharepoint-foundation-help/keyboard-shortcuts-for-sharepoint-products-HA102772894.aspx   , shortcuts for SharePoint.
•http://technet.microsoft.com/en-us/library/ff852108.aspx   , conformance statement A-level (WCAG 2.0).
•http://technet.microsoft.com/en-us/library/ff852107.aspx   , conformance statement AA-level (WCAG 2.0).

Top 10 Blogs to Follow
It’s certainly a best practice to keep up to date with the latest SharePoint news. Therefore, a top 10 of blog suggestions to follow is included. 1.Corey Roth at http://www.dotnetmafia.com/blogs/dotnettipoftheday/
2.Jeremy Thake at http://jeremythake.com
3.Nik Patel at http://nikspatel.wordpress.com/
4.Yaroslav Pentsarskyy at http://www.sharemuch.com/
5.Giles Hamson at http://spandps.com/author/ghamson/
6.Danny Jessee at http://www.dannyjessee.com/blog/
7.Marc D Anderson at http://sympmarc.com/
8.Andrew Connell at http://www.andrewconnell.com/blog
9.Geoff Evelyn at http://www.sharepointgeoff.com/
10.http://sharepointdragons.com /  , Nikander & Margriet on SharePoint.

Recommended SharePoint Related Tools

What to put in your bag of tools?
1.http://gallery.technet.microsoft.com/The-SharePoint-Flavored-5b03f323    , the SharePoint Flavored Weblog Reader (SFWR) helps troubleshooting performance problems by analyzing the IIS log files of SharePoint WFEs.
2.http://gallery.technet.microsoft.com/PressurePoint-Dragon-for-87572ee1   , PressurePoint Dragon for SharePoint 2013 helps executing performance tests.
3.http://gallery.technet.microsoft.com/Maxer-for-SharePoint-2013-52208636   , a tool for checking capacity planning limits.
4.http://visualstudiogallery.msdn.microsoft.com/36a6eb45-a7b1-47c3-9e85-09f0aef6e879    , Muse.VSExtensions, a great tool for referencing assemblies located in the GAC.
5.http://www.quest.com/powergui-freeware/   , helps with all your PowerShell development. In a SharePoint environment, there usually will be some.
6.http://powerguivsx.codeplex.com/   , Visual Studio extension based on PowerGUI that adds PowerShell IntelliSense support to Visual Studio.
7.http://visualstudiogallery.msdn.microsoft.com/4784e790-32f4-455f-9228-53f537c03787   , FishBurn Systems provides some sort of CKSDev lite for VS.NET 2012/SharePoint 2013. Very useful.
8.http://visualstudiogallery.msdn.microsoft.com/6ed4c78f-a23e-49ad-b5fd-369af0c2107f   , web extensions make creating CSS in VS.NET a lot easier and supports CSS generation for multiple platforms.
9.http://technet.microsoft.com/en-us/library/cc508851  , the SharePoint 2010 Administration Toolkit (works on 2013).
10.http://clumsyleaf.com/products/cloudxplorer   , a great tool when you’ve installed your SharePoint farm on Azure.

Training

If you want to learn about SharePoint 2013, there are valuable resources out there to get started.
•http://technet.microsoft.com/en-us/sharepoint/fp123606.aspx   , basic training for IT Pros.
•http://www.microsoft.com/en-us/download/details.aspx?id=35396   , free eBook.
•www.MicrosoftVirtualAcademy.com   , great resource with advanced online and interactive sessions.
•http://technet.microsoft.com/en-us/library/gg609831.aspx   , at the end there's a nice overview of training resources.

See Also
•SharePoint 2013 Portal
•SharePoint 2013 – Service Applications
•SharePoint 2013 – Resources for Developers
•SharePoint 2013 – Resources for IT Pros

 

How To : 8 Steps for Successful SharePoint Change Management

As with virtually any other significant IT implementation project, a SharePoint deployment is as dependent on people as it is on technology for its success. If your end users are not using the system, or are not using it correctly or to its full potential, you will never achieve that all-important return on investment.


One hundred percent adoption by users who are proficient with SharePoint and are committed to gaining the greatest value from the software should be a key objective for all SharePoint project leaders. To bring this about, it is crucial to develop and execute an effective change management strategy as a key component of your SharePoint implementation.

At Professional Advantage we have had great success with our SharePoint clients by informally following a change management process developed by Dr John Kotter, an American professor whose 1996 book, Leading Change, is still highly influential in the world of change management theory.

In his book Dr Kotter puts forward an eight-step process that change leaders can follow to avoid failure and adjust successfully to change. These steps, which can be usefully applied to a SharePoint deployment project, are:

 

1. Create a sense of urgency

No SharePoint project will get off the ground, let alone become successful, if there is no buy-in at the executive level. Here it is important to put a strong case forward as to why the move to SharePoint is very much in the organisation’s best interests. Begin by doing lots of research. Examine, for example, the competitive disadvantages that will be suffered if no change is made. Also highlight those business functions and processes within the organisation that could be significantly improved with SharePoint. Tie the benefits of SharePoint to the organisation’s broad business goals and ongoing strategic objectives. Explain as persuasively as possible why the current situation is unsustainable and why, when it comes to moving to SharePoint, it’s a case of ‘the sooner the better.’ The stronger your business case for a SharePoint implementation, the more likely it is that it will get the green light.

 

2. Create a guiding coalition

Once you’ve received the go-ahead for the SharePoint deployment the next step is to put together a coalition of people with the power and commitment to lead the change. This team will ideally be comprised of a wide variety of motivated individuals: department managers, technical experts and those at the coalface who will be using SharePoint on a day-to-day basis should all form part of the coalition. They should also be people who have grasped the urgency of the task ahead, who understand the business goals that will be achieved with a successful implementation and who recognise that 100% user adoption is a central goal of the project.

Crucial to the success of the coalition’s efforts is that its members all work well together. As the project evolves these change-drivers will be sharing ideas, making decisions and identifying and solving problems. Team members must be able to trust each other and collaborate effectively; if this does not occur the project will almost certainly stall.

 

3. Develop a change vision

By developing a clear vision for the project you give those involved a direction to follow and a goal to achieve. Ideally the vision will be easy to comprehend, achievable, flexible and something that all stakeholders can get enthusiastic about.

While the vision will by definition be broad, the strategies that underpin it will be specific. Priorities for the project should be defined and acted upon, starting with the 'low-hanging fruit', i.e., tasks that can be easily achieved and which will deliver visible, measurable and meaningful change within the organisation. This approach will add momentum to the project by enabling stakeholders to gain a real-world perspective on the changes that are in progress and why they're good both for the organisation and for individual SharePoint users.

 

4. Communicate the vision for buy-in

Communicating your vision and promoting the behavioural changes that will drive it are critical for a successful SharePoint deployment. This step requires a top-down communications strategy that is consistent, creative, inspiring and ongoing.

At Professional Advantage our communication strategy forms part of our SharePoint adoption plan and includes a variety of tactics designed to get staff using SharePoint, and using it properly. In the past such tactics have included SharePoint launch parties, lunch sessions, system design competitions amongst staff, social media, blogs and the putting up of posters around the office promoting the use of SharePoint. The objective here is, of course, to get users educated and engaged. The more creative you are, the better. And always keep in mind that user adoption will likely be low unless you can answer the ‘What’s in it for me’ question.

 

5. Empower broad-based action

To achieve the highest possible level of SharePoint user adoption it's best to remove any barriers that might impede that objective. This particularly applies to the laggards, i.e., those who are most resistant to change and least likely to make full use of the system.

Typically this will involve removing software and other technologies that make it easy for workers to continue doing things the old way. Too often organisations include this as an afterthought, resulting in lower and slower user adoption. Here it is important to plan from the beginning, anticipate which systems will be made redundant (or scaled down) and schedule that into the SharePoint implementation plan.

Also important here is encouragement from above. Supported by proper ongoing training, those who will be using SharePoint need to be encouraged to step out of their comfort zone and embrace the new system.

 

6. Celebrate short-term wins

Short-term wins are essential to the success of your SharePoint deployment, as is the active celebration of these wins when they occur. The transition to a SharePoint environment is a long-term process and momentum must be maintained every step of the way. Perhaps, as a result of SharePoint, a new level of intra-office collaboration has been achieved, or the organisation has experienced dramatic time savings with particular processes, or has achieved new standards of compliance. Whatever the win, the broadcasting of it should form part of the SharePoint communications plan. If people can see how and why SharePoint is working, they will be more likely to embrace the system and, in so doing, contribute to the achievement of the organisation's business goals.

 

7. Consolidate gains and generate more change

You’ve scored some wins and people are now comfortable using SharePoint. While that is a wonderful thing, the danger at this stage is complacency. Rather than take your foot off the accelerator it’s important to build on what’s been achieved and pursue larger, more ambitious objectives. To fully ingrain SharePoint into your organisation’s culture (and to avoid regression) ramp things up with new projects and initiatives.

 

8. Make it stick

To fully embed SharePoint into your organisation’s culture and business practices everyone needs to be on board. Just as during a life-threatening cyclone there are always some residents who refuse to heed advice to leave town, with a SharePoint deployment there will always be some who are unwilling to move. Here it is important to reinforce, and continue to celebrate, the victories that have been achieved and communicate how important it is that everyone adopt the system.

As the SharePoint project continues to evolve so too will its vision and purpose. With the right planning and execution, and with the right leadership, people will, over time, forget the old ways of doing things and fully embrace the new.

Great Agile Development Tool – Agile Planner

Great Agile Development project – http://agileplanner.codeplex.com/

Project Description

This project is to develop an iteration planning tool for agile project management.

What’s new?

Release 1.0.0 runs in Visual Studio integrated mode. See "How to use Agile Planner" below for details.

What is Agile Planner?

This tool is for agile project teams who currently use sticky notes on the wall. With this tool, stories, the backlog and iterations are managed in a graphical designer, saved as files within Visual Studio projects, and can be exported to images, reports and more.

The main features are:

  • Stories can be dragged and dropped between the backlog and iterations
  • An iteration's capacity is calculated automatically based on the stories within it
  • Stories are rendered based on a customizable status or priority color scheme
  • The diagram can be exported to a PNG image for printing, documentation and sharing

Here are examples.

[Figure: Stories rendered based on status]

[Figure: Stories rendered based on priority]

 

How to use Agile Planner

1. Install
To install Agile Planner,

  • Download the runtime binary zip file from the latest releases
  • Extract all files from the runtime binary zip file
  • Run the Windows installer AgilePlanner.msi (requires an elevated command prompt on Vista and administrator rights on XP/2003).

2. Get Started
To start using Agile Planner in a Visual Studio 2008 project:

  • Start Visual Studio 2008 and create a new project or load an existing one
  • Right-click the project name and select the menu "Add | New Item …"

[Figure: The Add New Item menu]

  • Select AgilePlanner
  • Dismiss the security warning if it shows up

You will be presented with a design environment like the one below.

[Figure: The Agile Planner designer]

  1. Graphical designer for iterations and stories
  2. Toolbox for iterations and stories
  3. Treeview Explorer for iterations and stories
  4. Property window for iterations and stories

3. Plan your project’s iterations

  • Create iterations by dragging the iteration tool from the toolbox to the graphical designer
  • Create stories by dragging the story tool from the toolbox to the backlog and iterations
  • Edit stories' properties, such as name, capacity, priority and status, in the property window

[Figure: Adding stories]

Notice that an iteration's capacity is updated automatically after you drag stories between iterations or update a story's capacity property, so that you can balance the workload between iterations.

4. Render the diagram
The stories can be colored based on either their status or priority. To switch between these two options, right-click the diagram and select the menu "Color on Status" or "Color on Priority". The color schemes are customizable as properties of the project.

[Figure: The diagram rendered by priority]

5. Export
The rendered diagram can be exported to a PNG file by right-clicking the diagram and selecting the menu "Export to image".

[Figure: The exported diagram]
