All posts by Sharepoint Samurai

Senior C# & SharePoint Developer with 10 years' development experience. BSc degree in Computer Science and Information Systems. 5 years' experience in delivering SharePoint-based solutions using OOB functionality and custom development.

Extensive experience in:

• Microsoft SharePoint platform, App Model (2010 & 2013)
• C# 2.0 - 4.5
• Advanced Workflow (Visual Studio, K2, Nintex)
• Development of custom Web Parts
• Master Page development & branding
• Integration of back-end systems, including 3 SAP projects, MS CRM, K2 BlackPearl, custom LOB systems
• SQL Server (design, development, stored procedures, triggers)
• BCS, BDC - implementing WCF, REST services, web services
• SharePoint Excel Services, PowerPivot, Word Automation Services
• Custom reports (MS SQL Reporting, Crystal Reports)
• Object-oriented programming and patterns
• TFS 2010-2013
• Agile & SCRUM methodologies (ALM / SDLC)
• Microsoft Azure as database and for hosting hybrid solutions
• Office 365 and SharePoint App development
• Architecture design, development, testing, implementation, and documentation of SharePoint 2010 and 2013 sites and applications as well as .NET web applications and services
• Analyze business/functional requirements and translate them into concrete tasks
• Team player, able to coach/train more junior profiles
• Proactive, client-oriented, result-oriented, 'can do' mentality
• Test automation, Test-Driven Development
• Create and interpret written business requirements and technical specification documents
• Understand and assess business requirements
• Translate business requirements into technical requirements
• Design solutions that cover the business requirements by identifying and recommending the technologies to use and integrate in each case, in accordance with best practices for architectural design
• Estimate the cost and time associated with the development and implementation of the solutions designed

https://sharepointsamurai.wordpress.com

SharePoint Samurai Blog - Everything about SharePoint, Office 365, Azure, the Cloud and SharePoint integration with SAP & MS CRM. Private consultant developing various SharePoint and Office 365 Web Parts and Apps for on-premise, Azure and the Cloud.

How To : Create a Re-Usable News Page Layout using Content Type in SharePoint 2013

Introduction

In a recent project I was asked to consult on, the team needed to create one or more sub sites for news or events.

Developing for re-usability in SharePoint is something I find is lacking quite a bit in development teams.

Below I outline the solution I worked out for the project, which is now also a template the team can use in any similar project.

I will explain not only how to do it step by step, but also how to make this page layout the default page layout of a publishing sub site.

After that, we will create a content query in the root site to preview the news articles.

Finally, I will use variations to create a similar publishing sub site in other languages.

  1. Step-by-step creation of the news page layout using a content type in SharePoint 2013.
  2. How to create a publishing sub site for news, use variations to create the same site in other languages, and make the previous page layout the default page layout of the sub site.

Firstly

  1. Open Visual Studio 2013 and create a new project of type SharePoint Solutions…”SharePoint 2013 Empty Project”.
    Create new SharePoint 2013 empty project
  2. We will deploy our solution as a farm solution to the local farm on our machine.
    Note: Make sure that the site is a publishing site to be able to proceed.

    Deploy the SharePoint site as a farm solution
  3. Our solution will look like the picture below; we will add three folders for “SiteColumns”, “ContentTypes” and “PageLayouts”.
    SharePoint solution items
  4. Start by adding a new item to the “SiteColumns” folder.
    Adding new item to SharePoint solution
  5. After adding a new site column item and renaming it, add the columns we need for the news layout: NewsTitle, NewsBody, NewsBrief, NewsDate and NewsImage.
    Adding new item of type site column to the solution

    Then add the fields below; note that I use Resources in the DisplayName and the Group.

     <Field
     ID="{9fd593c1-75d6-4c23-8ce1-4e5de0d97545}"
     Name="NewsTitle"
     DisplayName="$Resources:SPWorld_News,NewsTitle;"
     Type="Text"
     Required="TRUE"
     Group="$Resources:SPWorld_News,NewsGroup;">
     </Field>
     <Field
     ID="{fcd9f32e-e2e0-4d00-8793-cfd2abf8ef4d}"
     Name="NewsBrief"
     DisplayName="$Resources:SPWorld_News,NewsBrief;"
     Type="Note"
     Required="FALSE"
     Group="$Resources:SPWorld_News,NewsGroup;">
     </Field>
     <Field
     ID="{FF268335-35E7-4306-B60F-E3666E5DDC07}"
     Name="NewsBody"
     DisplayName="$Resources:SPWorld_News,NewsBody;"
     Type="HTML"
     Required="TRUE"
     RichText="TRUE"
     RichTextMode="FullHtml"
     Group="$Resources:SPWorld_News,NewsGroup;">
     </Field>
     <Field
     ID="{FCA0BBA0-870C-4D42-A34A-41A69749F963}"
     Name="NewsDate"
     DisplayName="$Resources:SPWorld_News,NewsDate;"
     Type="DateTime"
     Required="TRUE"
     Group="$Resources:SPWorld_News,NewsGroup;">
     </Field>
     <Field
     ID="{8218A8D9-912C-47E7-AAD2-12AA10B42BE3}"
     Name="NewsImage"
     DisplayName="$Resources:SPWorld_News,NewsImage;"
     Required="FALSE"
     Type="Image"
     RichText="TRUE"
     RichTextMode="ThemeHtml"
     Group="$Resources:SPWorld_News,NewsGroup;">
     </Field>

    After that:

  6. Create the content type by adding a new Content Type item to the “ContentTypes” folder.
    Adding new item of type Content Type to SharePoint solution
  7. We must make sure to select “Page” as the base of the content type.
    Specifying the base type of the content type
  8. Open the content type and add our new columns to it.
    Adding columns to the content type
  9. Open the elements file of the content type and make sure it looks like the code below. Note: We use Resources in the Name, Description and the Group of the content type.
    <!-- Parent ContentType:
    Page (0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF39) -->
     <ContentType
     ID="0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF39007A5224C9C2804A46B028C4F78283A2CB"
     Name="$Resources:SPWorld_News,NewsContentType;"
     Group="$Resources:SPWorld_News,NewsGroup;"
     Description="$Resources:SPWorld_News,NewsContentTypeDesc;"
     Inherits="TRUE" Version="0">
     <FieldRefs>
     <FieldRef ID="{9fd593c1-75d6-4c23-8ce1-4e5de0d97545}"
     DisplayName="$Resources:SPWorld_News,NewsTitle;" Required="TRUE" Name="NewsTitle" />
     <FieldRef ID="{fcd9f32e-e2e0-4d00-8793-cfd2abf8ef4d}"
     DisplayName="$Resources:SPWorld_News,NewsBrief;" Required="FALSE" Name="NewsBrief" />
     <FieldRef ID="{FF268335-35E7-4306-B60F-E3666E5DDC07}"
     DisplayName="$Resources:SPWorld_News,NewsBody;" Required="TRUE" Name="NewsBody" />
     <FieldRef ID="{FCA0BBA0-870C-4D42-A34A-41A69749F963}"
     DisplayName="$Resources:SPWorld_News,NewsDate;" Required="TRUE" Name="NewsDate" />
     <FieldRef ID="{8218A8D9-912C-47E7-AAD2-12AA10B42BE3}"
     DisplayName="$Resources:SPWorld_News,NewsImage;" Required="FALSE" Name="NewsImage" />
     </FieldRefs>
     </ContentType>
  10. Add a new Module to the “PageLayouts” folder. Inside it we will find a sample.txt file; rename it to “NewsPageLayout.aspx”.
    Adding new module to SharePoint solution.
  11. Add the code below to this “NewsPageLayout.aspx”.
     <%@ Page language="C#" Inherits="Microsoft.SharePoint.Publishing.PublishingLayoutPage,
    Microsoft.SharePoint.Publishing,Version=15.0.0.0,Culture=neutral,PublicKeyToken=71e9bce111e9429c" %>
     <%@ Register Tagprefix="SharePointWebControls" Namespace="Microsoft.SharePoint.WebControls"
     Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
     <%@ Register Tagprefix="WebPartPages" Namespace="Microsoft.SharePoint.WebPartPages"
     Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
     <%@ Register Tagprefix="PublishingWebControls" Namespace="Microsoft.SharePoint.Publishing.WebControls"
     Assembly="Microsoft.SharePoint.Publishing, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
     <%@ Register Tagprefix="PublishingNavigation" Namespace="Microsoft.SharePoint.Publishing.Navigation"
     Assembly="Microsoft.SharePoint.Publishing, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
    
     <asp:Content ContentPlaceholderID="PlaceHolderPageTitle" runat="server">
     <SharePointWebControls:FieldValue id="FieldValue1" FieldName="Title" runat="server"/>
     </asp:Content>
     <asp:Content ContentPlaceholderID="PlaceHolderMain" runat="server">
    
     <H1><SharePointWebControls:TextField ID="NewsTitle"
     FieldName="9fd593c1-75d6-4c23-8ce1-4e5de0d97545" runat="server">
     </SharePointWebControls:TextField></H1>
     <p><PublishingWebControls:RichHtmlField ID="NewsBody"
     FieldName="FF268335-35E7-4306-B60F-E3666E5DDC07" runat="server">
     </PublishingWebControls:RichHtmlField></p>
     <p><SharePointWebControls:NoteField ID="NewsBrief"
     FieldName="fcd9f32e-e2e0-4d00-8793-cfd2abf8ef4d" runat="server">
     </SharePointWebControls:NoteField></p>
     <p><SharePointWebControls:DateTimeField ID="NewsDate"
     FieldName="FCA0BBA0-870C-4D42-A34A-41A69749F963" runat="server">
     </SharePointWebControls:DateTimeField></p>
     <p><PublishingWebControls:RichImageField ID="NewsImage"
     FieldName="8218A8D9-912C-47E7-AAD2-12AA10B42BE3" runat="server">
     </PublishingWebControls:RichImageField></p>
    
     </asp:Content>
  12. Add the following code to the elements file of the “NewsPageLayout” module.
     <Module Name="NewsPageLayout" 
    Url="_catalogs/masterpage" List="116" >
     <File Path="NewsPageLayout\NewsPageLayout.aspx" Url="NewsPageLayout.aspx"
     Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE" 
     ReplaceContent="TRUE" Level="Published" >
     <Property Name="Title" Value="$Resources:SPWorld_News,NewsPageLayout;" />
     <Property Name="MasterPageDescription" Value="$Resources:SPWorld_News,NewsPageLayout;" />
     <Property Name="ContentType" Value="$Resources:cmscore,contenttype_pagelayout_name;" />
     <Property Name="PublishingPreviewImage"
     Value="~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;
     /Preview Images/WelcomeSplash.png, ~SiteCollection/_catalogs/masterpage/$Resources:
     core,Culture;/Preview Images/WelcomeSplash.png" />
     <Property Name="PublishingAssociatedContentType"
     Value=";#$Resources:SPWorld_News,NewsContentType;;
     #0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF39007A5224C9C2804A46B028C4F78283A2CB;#">
     </Property>
     </File>
     </Module>
  13. Don’t forget to add the Resources folder, then add a resource file named “SPWorld_News.resx” (as used in the previous steps) with the keys below.
    News                     News
    NewsBody                 News Body
    NewsBrief                News Brief
    NewsContentType          News Content Type
    NewsContentTypeDesc      News Content Type Desc.
    NewsDate                 News Date
    NewsGroup                News
    NewsImage                News Image
    NewsPageLayout           News Page Layout
    NewsTitle                News Title
  14. Finally, deploy the solution.
  15. The next steps explain how to add the “news content type” to the pages library through the SharePoint wizard. Note: We will do these steps programmatically in the next article, without the need to do them manually in SharePoint; a code sketch follows this list.

    1. Go to Site Contents, then Pages, Library, Library Settings.
      Opening library settings of the pages library
    2. Add the news content type to the page layout.
      Adding existing content type to the pages library
    3. Then
      Selecting the content type to add it to pages library
    4. Go to Pages Library, Files, New Document, select News Content Type.
      Adding new document of the news content type to pages library
    5. Write the page title.
      Creating new page of news content type to pages library.
    6. Open the page to edit it.
      Pages library contains new page of news content type
    7. Now we can see the page layout after we add the Title, Body, Brief, Date and Image. Finally, click Save to save the news article.
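
To give a taste of the programmatic route, here is a minimal sketch (not the next article's actual code) that sets our layout as the default page layout of a publishing web using the server object model; the site URL is a placeholder and error handling is omitted:

     using System;
     using System.Linq;
     using Microsoft.SharePoint;
     using Microsoft.SharePoint.Publishing;

     // Sketch: make NewsPageLayout.aspx the default page layout of a publishing web.
     // The site URL below is a placeholder for your news sub site.
     using (SPSite site = new SPSite("http://server/sites/news"))
     using (SPWeb web = site.OpenWeb())
     {
         PublishingWeb pubWeb = PublishingWeb.GetPublishingWeb(web);

         // Find our layout among the page layouts available on this web.
         PageLayout newsLayout = pubWeb.GetAvailablePageLayouts()
             .FirstOrDefault(pl => pl.Name.Equals("NewsPageLayout.aspx",
                 StringComparison.OrdinalIgnoreCase));

         if (newsLayout != null)
         {
             // true = reset subsites that inherit this setting as well.
             pubWeb.SetDefaultPageLayout(newsLayout, true);
             pubWeb.Update();
         }
     }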

BCS connector for Exchange private mailbox SharePoint and FAST search

SharePoint 2010 BCS Mailbox connector for Microsoft Exchange empowers you to search private mailboxes via SharePoint and FAST Search.
SharePoint 2010 BCS Mailbox connector for Microsoft Exchange allows you to:

  • Index all mailboxes, emails and attachments
  • Enable super users from an AD group to search against all mailboxes
  • Preview Exchange emails and attachments directly from the search user interface via SharePoint Business Connectivity Services.

    Microsoft provides an out-of-the-box Exchange connector for SharePoint 2010 search and FAST Search for SharePoint.
    Unfortunately, this connector is limited to the Exchange public folder only.

    Please make sure you download the following dependencies:

New Azure To TFS Deploy Tool Available

Deploy Windows Azure project directly from TFS 2010 Build Server

 

Build Definition Template
DeployToAzure allows automating the deployment of a Windows Azure project and making it part of the TFS 2010 build process without using PowerShell and Azure Management CmdLets.

Solution includes:

  • A set of custom workflow actions wrapping Azure Management API operations such as GetDeployment, GetOperationStatus, NewDeployment, RemoveDeployment and SetDeploymentStatus;
  • Helper actions such as FindPackageAndConfigurationFiles, LoadCertificate and WaitForOperationToComplete;
  • Designer activity DeployToAzure implementing the deployment logic;
  • Reusable build definition template.

How it works :

Create build definition

Open the New Build Definition dialog and select the Process tab. Click the New button in the Build process template section. Choose the Select an existing XAML file option and specify the path to DefaultTemplateWithDeploymentToAzure.xaml in your source control.


The Deployment to Azure section will appear in Build process parameters. Click the Refresh button if you don’t see it.


Now define the build properties. First, open the 1. Required / Items to build dialog, select your solution, and specify the configuration to build.

Open the Deployment to Azure section and provide the following parameters:

  • API Certificate store location – store location of your management certificate. Select LocalMachine if the certificate was created by the command above.
  • API Certificate Thumbprint – thumbprint of the management certificate.
  • API Certificate store – store where the management certificate is located. Select Root if the certificate was created by the command above.
  • Cloud Project – cloud project to be packaged and deployed. It will be built with the same configuration as the one specified for solution building.
  • Deployment label – label of the deployment. The label can contain the same set of macros as the Build Label.
  • Hosted Service Name – DNS prefix of the hosted service. You can find it on the Windows Azure portal.
  • Service configuration – service configuration to be used for the deployment, for example Cloud. Keep this field empty to use the default configuration.
  • Slot – select Staging or Production.
  • Storage Service Name – DNS prefix of the storage service which will be used to upload the deployment package.
  • Subscription Id – Azure subscription ID.
  • Wait for roles to start – set to true if the build should wait for all instances to start.
  • Initialization Timeout – if the above is true, specify the timeout for the build to wait before generating a timeout exception.

Contact me at floyd_tomas@yahoo.com for this and other Azure, SharePoint, Office 365, TFS and Agile tools.


How To : Plan the Deployment of Farm Solutions for SharePoint 2013

SharePoint 2013

While everyone is talking about Apps, there are still significant investments in Full Trust Solutions (a.k.a. Farm Solutions) and I am sure that many OnPrem deployments will want to carry these forward when upgrading to SharePoint 2013.  The new SharePoint 2013 upgrade model allows Sites to continue to run in 2010 mode after upgrading and each Site Collection explicitly has to be upgraded individually.

This is not how it worked in 2010 with Visual Upgrade; this time there are actually both a 14 and a 15 Root folder deployed, and all the Features and Layout files from SharePoint 2010 are deployed as part of the 2013 installation.

For those of you new to SharePoint, the root folder is where SharePoint keeps most of its application files and the default location for this is “C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\[SharePoint Internal Version]”, where the versions for the last releases have been 60 (6.0), 12, 14, and now 15. The location is also known as “the xx hive”.

This is great in an upgrade scenario, where you may want to do a platform upgrade first or only want to share the new features of 2013 with a few users while maintaining an unchanged experience for the rest of the organization.  This also gives us the opportunity to have different functionality and features for sites running in 2010 and 2013 mode.  However, this requires some extra thought in the development and deployment process that I will give an introduction to here.

Because you can now have Sites running in both 2010 and 2013 mode, SharePoint 2013 introduces a new concept of a Compatibility Level.  Right now it can only be 14 or 15, but you can imagine that there is room for growth.  This Compatibility Level is available at Site Collection and Site (web) level and can be used in code constructs and PowerShell commands.  I will start by explaining how you use it while building and deploying wsp-files for SharePoint 2013 and then finish off with a few things to watch out for and some code tips.
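
As a quick illustration of the code side, the site collection's level can be read straight off the SPSite object (a small sketch; the URL is a placeholder):

     using (SPSite site = new SPSite("http://server/sites/team"))
     {
         // 14 = the site collection still runs in SharePoint 2010 mode,
         // 15 = it has been upgraded to SharePoint 2013 mode.
         if (site.CompatibilityLevel == 14)
         {
             // Keep serving the 2010-era features and layout files.
         }
     }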

Deployment Considerations

If you take your wsp-files from SharePoint 2010 and just deploy these with Add-SPSolution -> Install-SPSolution as you did in 2010, then SharePoint will assume it is a 2010 solution or a “14” mode solution.  If the level is not specified in the PowerShell command, it determines the level based on the value of the SharePointProductVersion attribute in the Solution manifest file of the wsp-package.  The value can currently be 15.0 or 14.0. If this attribute is missing, it will assume 14.0 (SharePoint 2010) and since this attribute did not exist in 2010, only very well informed people will have this included in existing packages.

For PowerShell cmdlets related to installing solutions and features, there is a new parameter called CompatibilityLevel. This can override the settings of the package itself and can assume the following values: 14, 15, New, Old, All and “14,15” (the latter currently also means All).

The parameter is available for Install-SPSolution, Uninstall-SPSolution, Install-SPFeature and Uninstall-SPFeature.  There is no way to specify “All” versions in the package itself – only the intended target – and therefore these parameters need to be specified if you want to deploy to both targets.

It is important to note that Compatibility Level impacts only files deployed to the Templates folder in the 14/15 Root folder. That is:  Features, Layouts-files, Images, ControlTemplates, etc.

This means that files outside of this folder (e.g. a WCF Service deployed to the ISAPI folder) will be deployed to the 15/ISAPI no matter what level is set in the manifest or PowerShell.  Files such as Assemblies in GAC/Bin and certain resource files will also be deployed to the same location regardless of the Compatibility Level.

It is possible to install the same solution in both 14 and 15 mode, but only if it is done in the same command – specifying Compatibility Level as either “All” or “14,15”.  If it is first deployed with 14 and then with 15, it will throw an exception.  It can be installed with the –Force parameter, but this is not recommended as it could hide other errors and lead to an unknown state for the system.

The following three diagrams illustrate where files go depending on parameters and attributes set (click on the individual images for a larger view). Thanks to the Ignite Team for creating these. I did some small changes from the originals to emphasize a few points.

CompatibilityLevelOld

CompatibilityLevelNew

CompatibilityLevelAll

When retracting the solutions, there is also an option to specify Compatibility Level.  If you do not specify this, it will retract all – both 14 and 15 files if installed.  When deployed to both levels, you can retract one, but the really important thing to understand here is that it will not only retract the files from the version folder, but also all version neutral files – such as Assemblies, ISAPI deployed files, etc. – leaving only the files from the Root folder you did not retract.

To plan for this, my suggestion would be the following during development/deployment:

  • If you want to only run sites in 2013 mode, then deploy the Solutions with CompatibilityLevel 15 or SharePointProductVersion 15.0.
  • If you want to run with both 2010 and 2013 mode, and want to share features and layout files, then deploy to both (All or “14,15”).
  • If you want to differentiate the files and features that are used in 2010 and 2013 mode, then the solutions should be split into two or three solutions:
    • One solution (“Xxx – SP2010”), which contains the files and features to be deployed to the 14 folder for 2010 mode, including code-behind (for things like feature activation and Application pages), but excluding shared assemblies and files.
    • One solution (“Xxx – SP2013”), which contains the files and features to be deployed to the 15 folder for 2013 mode, including code-behind (for things like feature activation and Application pages), but excluding shared assemblies and files.
    • One solution (“Xxx – Common”), which contains shared files (e.g. common assemblies or web services). This solution would also include all WebApplication scoped features such as bin-deployed assemblies and assemblies with SafeControl entries.
  • If you only want to have two solutions for various reasons, the Common solution can be joined with the SP2013 solution as this is likely to be the one you will keep the longest.
  • The assemblies being used as code-files for the artifacts in SP2010 and SP2013 need to have different names or at least different versions to differentiate them. Web Parts need to go in the Common package and should be shared across the versions, however the installed Web Part templates can be unique to the version mode.

Things to watch out for…

There are a few issues that are worth being aware of that may be fixed in future updates, but you’ll need to watch out for these currently.  I’ve come across an issue where installing the same solution in both levels can go wrong.  If you install it with level All and then uninstall it with level 14 two times, the deployment logic will think that it completely removed the solution, but the files in the 15/Templates folder will still be there.

To recover from this, you can install it with –Force in the orphan level and then uninstall it.  Again, it is better to not get in this situation.

Another scenario that can get you in trouble is if you install a solution in one Compatibility Level (either through PowerShell Parameter or manifest file attribute) and then uninstall with the other level.  It will then remove the common files but leave the specific 14 or 15 folder files and display the solution as fully retracted.

Unfortunately there is no public API to query which Compatibility Levels a package is deployed to.  So you need to get it right the first time or as quickly as possible move to native 2013 mode and packages (this is where we all want to be anyway).

Code patterns

An additional tip is to look for hard coded paths in your custom code, such as _layouts and _controltemplates. The SPUtility class has been updated with static methods to help you parse the current location based on the upgrade status of the Site. For example, SPUtility.ContextLayoutsFolder will give you the path to the correct layouts folder. See the reference article on SPUtility properties for more examples.
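
A short sketch of the pattern (the application page name is hypothetical):

     using Microsoft.SharePoint.Utilities;

     // Resolves to the layouts folder matching the current site's mode
     // (e.g. a 15-mode path on upgraded sites) instead of a hard coded "_layouts/15".
     string layoutsFolder = SPUtility.ContextLayoutsFolder;
     string appPageUrl = SPUrlUtility.CombineUrl(layoutsFolder, "MyApplicationPage.aspx");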

Round up

I hope this gave you an insight into some of the things you need to consider when deploying Farm Solutions for SharePoint 2013. There are lots of scenarios that are not covered here. If you find some, please share them or share your concerns, and I will try to address them in comments or an additional post.

How To : Implement Business Data Connectivity in SharePoint 2013

Business Data Connectivity

Business Connectivity Services is a centralized infrastructure in SharePoint 2013 and Office 2013 that supports integrated data solutions. With Business Connectivity Services, you can use SharePoint 2013 and Office 2013 clients as interfaces into data that doesn’t live in SharePoint 2013 itself. For example, this external data may be in a database and it is accessed by using the out-of-the-box Business Connectivity Services connector for that database.


Business Connectivity Services can also connect to data that is available through a web service, or data that is published as an OData source or many other types of external data. Business Connectivity Services does this through out-of-the box or custom connectors.

External Content Types in BCS

External content types are the core of BCS. They enable you to manage and reuse the metadata and behaviors of a business entity, such as Customer or Order, from a central location. They enable users to interact with that external data and process it in a more meaningful way.

For more information about using external content types in BCS, see External content types in SharePoint 2013.

How to Connect With SQL External Data Source

Open SharePoint Designer 2013 and click on the open site icon:

Enter the URL of the site we need to open:

Enter your site credentials here:

Now we need to create the new external content type. Here we have options for changing the name of the content type and creating the connection to the external data source:

Click on the hyperlink text “Click here to discover the external data source operations”; this window will open:

Click on the “Add Connection” button to create a new connection. Here we have different options to select: .NET Type, SQL Server, WCF Service.

We selected SQL Server here; now we need to provide the server credentials:

Now, we can see all the tables and views from the database.

In this screen, we have the options for creating different types of operations against the database:

Click on the next button:

Parameters Configurations:

Options for Filter parameters Configuration:

Here we need to add a new External List; click on “External List”:

Select the site here and click the OK button:

Enter the list name here and click the OK button:

After that, refresh the SharePoint site; we can see the external list. Click on the list:

Here we have the error message “Access denied by Business Connectivity.”

Solution for this Error

In SharePoint Central Administration, click on Manage service applications:

Click on the Business Data Connectivity Service:

Set the permission for this list:

Click ok after setting the permissions:

After that, refresh the site; hopefully this will work… but again, there is a problem. The error message is like: Login failed for user “NT AUTHORITY\ANONYMOUS LOGON”.

Solution for this Error

We need to edit the connection properties and set the Authentication mode to the value ‘BDC Identity’.

Then follow the steps mentioned below.

Open PowerShell and type the following lines:

$bdc = Get-SPServiceApplication | 
where {$_ -match "Business Data Connectivity Service"}
$bdc.RevertToSelfAllowed = $true
$bdc.Update();

Now it’s working fine.

And there is a chance of one more error, like:

Database Connector has throttled the response.
The response from database contains more than '2000' rows. 
The maximum number of rows that can be read through Database Connector is '2000'. 
The limit can be changed via the 'Set-SPBusinessDataCatalogThrottleConfig' cmdlet

This is because it depends on the number of records that exist in the table.

Solution for this Error

Follow the below steps:

Open PowerShell and type the following lines and execute:

$bcs = Get-SPServiceApplicationProxy | where {$_.GetType().FullName 
-eq ('Microsoft.SharePoint.BusinessData.SharedService.' + 'BdcServiceApplicationProxy')}
$BCSThrottle = Get-SPBusinessDataCatalogThrottleConfig -Scope database 
-ThrottleType items -ServiceApplicationProxy $bcs
Set-SPBusinessDataCatalogThrottleConfig -Identity $BCSThrottle -Maximum 1000000 -Default 20000

How To : Access SAP Business Data From Silverlight 4 Clients Using WCF RIA Services And LINQ to SAP

Introduction

The introduction of Microsoft’s WCF RIA Services for Silverlight 4 greatly simplified the development process of N-tier business applications using Silverlight and ASP.NET. By using this new technology, we can also easily access and integrate SAP business data in Silverlight clients.

This article shows how to provide a SAP domain service as web service that will be consumed by a Silverlight client. The sample application will allow the user to query customer data. The service uses LINQ to SAP from Theobald Software to connect to a SAP R/3 system.

Project Setup

The first step in setting up a new Silverlight 4 project with WCF RIA Services is to create a solution using the Visual Studio template Silverlight Navigation Application:

Visual Studio 2010 then asks you to create an additional web application, which hosts the Silverlight application. It’s important to select the checkbox Enable WCF RIA Services (see screenshot below):

After clicking the OK button, Visual Studio generates a solution with two projects, one Silverlight 4 project and one ASP.NET project. In the next section, we will create the SAP data access layer using the LINQ to SAP designer.

LINQ to SAP

The LINQ to SAP provider and its Visual Studio 2010 designer offer a very handy way to design SAP interfaces visually. The designer will generate the code for the SAP data access layer automatically, similar to LINQ to SQL. The LINQ provider is part of the .NET library ERPConnect.net from Theobald Software. The company offers a demo version for download on its homepage.

The next step is to create the needed LINQ to SAP file by opening the Add New Item dialog:

LINQ to SAP is internally called LINQ to ERP.

Clicking the Add button will create a new ERP file and opens the LINQ designer. Now, drag the Function object from the toolbox and drop it onto the designer surface. If you have not entered the SAP connection data so far, you are now asked to do so:

Enter the connection data for your SAP R/3 system and then click the OK button. Next, search for and select the SAP function module named SD_RFC_CUSTOMER_GET. The function module provides a list of customer data.

The RFC Function modules dialog opens and lets you define the necessary parameters:

In the above function dialog, change the method name to GetCustomers and mark the Pass checkbox for the NAME1 parameter on the Exports tab. Also set the variable name to namePattern. On the Tables tab, mark the Return checkbox for the table parameter CUSTOMER_T and set the table and structure names to CustomerTable and CustomerRow:

After clicking the OK button and saving the ERP file, the LINQ designer will generate a SAPContext class which contains a method called GetCustomers with an input parameter named namePattern. This method executes a search for SAP customer data, allowing the user to enter a wildcard pattern. The method returns a table of customer data:

At the LINQ designer level (click on a free part of the LINQ designer surface), the property Create Object Outside Of Context Class must be set to True:

Now we finally add a Customer class which we will use in our SAP domain service later on. This class and its values will be transmitted to the Silverlight client by WCF RIA Services. It’s important to set the Key attribute on the identifier fields for WCF RIA Services, otherwise the project will not compile:

That’s it! We now have our SAP data access layer ready to use and can start adding the domain service in the next section.

SAP Domain Service

The next step is to add the SAP domain service to our web project. A domain service is a specialized WCF service and is one of the core constructs of WCF RIA Services. The service exposes operations that can be called from the client generated code. On the client side, we use the domain context to access the domain service on the server side.

Add a new Domain Service Class and name it SAPService:

In the upcoming dialog, create an empty domain service class by just clicking the OK button:

Next, we add the service operation GetCustomers to the SAP service with a name pattern parameter. The operation returns a list of Customer objects. The Query attribute limits the result set to 200 entries.

The operation uses the visually designed SAP data access logic to retrieve the SAP customer data. First of all, an instance of the SAPContext class will be created using a connection string (see sample in code). For more details regarding the SAP connection string, see the ERPConnect.net manual.

The LINQ to SAP context class contains the GetCustomers method, which we call with the given namePattern parameter. Next, the operation creates an instance of the Customer class for each customer record returned by SAP.

The license code for the ERPConnect.net library is set in the constructor of our domain service class.
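
Since the operation itself only appears here as a screenshot, the following is a rough reconstruction of its shape, not the screenshot’s exact code: the connection string, the license value, the Customer properties, and the SAP field names (KUNNR, NAME1, ORT01) are all assumptions.

     using System.Collections.Generic;
     using System.ServiceModel.DomainServices.Hosting;
     using System.ServiceModel.DomainServices.Server;

     [EnableClientAccess]
     public class SAPService : DomainService
     {
         public SAPService()
         {
             // The ERPConnect.net license code is set here (placeholder value).
             ERPConnect.LIC.SetLic("<your-license-code>");
         }

         // The Query attribute caps the result set at 200 entries, as described above.
         [Query(ResultLimit = 200)]
         public IEnumerable<Customer> GetCustomers(string namePattern)
         {
             // Connection string format: see the ERPConnect.net manual.
             SAPContext context = new SAPContext("<SAP connection string>");

             var customers = new List<Customer>();
             // GetCustomers is the method generated by the LINQ to SAP designer.
             foreach (CustomerRow row in context.GetCustomers(namePattern))
             {
                 customers.Add(new Customer
                 {
                     CustomerId = row.KUNNR, // SAP field names are assumptions
                     Name = row.NAME1,
                     City = row.ORT01
                 });
             }
             return customers;
         }
     }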

That’s all we need on the server side.

In the next section, we will implement the Silverlight client.

Silverlight Client

The implementation of the client side is straightforward. The home view contains a DataGrid control to display the list of customer data, as well as a search area with TextBox and Button controls to allow users to enter a name search pattern.

The click event handler of the load button, called OnLoadButtonClick, will execute the SAP service. The boilerplate code to access the web service was generated by WCF RIA Services in the subfolder Generated_Code in the Silverlight project.

First of all, an instance of the SAPContext is created. Then, we load the query GetCustomersQuery and execute the service operation on the server side using WCF RIA Services. If the domain service returns an error, the anonymous callback method marks the error as handled and displays the error message.

If the execution of the service operation succeeds, the result set is displayed in the DataGrid control.
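
Put together, the handler looks roughly like this (control names such as NamePatternTextBox and CustomersDataGrid are assumptions, since the XAML is not shown):

     // Client-side handler; SAPContext here is the domain context that
     // WCF RIA Services generated from our SAPService domain service.
     private void OnLoadButtonClick(object sender, RoutedEventArgs e)
     {
         SAPContext context = new SAPContext();

         context.Load(context.GetCustomersQuery(NamePatternTextBox.Text), loadOperation =>
         {
             if (loadOperation.HasError)
             {
                 loadOperation.MarkErrorAsHandled();
                 MessageBox.Show(loadOperation.Error.Message);
                 return;
             }
             CustomersDataGrid.ItemsSource = loadOperation.Entities;
         }, null);
     }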

The next screenshot shows the final result:

That’s it.

Summary

This article has shown how easily SAP customer data can be integrated within Silverlight clients using tools like WCF RIA Services and LINQ to SAP. It is quite simple to extend the SAP service to integrate all kinds of operations.

New TFS Power Tools Available!! TFS Extensions for Source Control and Work Item Tracking

The TFS Productivity Tools project was designed to provide TFS administrators with helper tools for Source Control and Work Item Tracking tasks.

For running these extensions, the installation of VSTS 2010 Professional or newer is required.


Visual Studio Extensions

Extension binary/source – Description – Runs from (Visual Studio window)

ChangeLinkTypes.vsix – Modify Work Item link types. (Team Explorer)
DestroyWorkItems.vsix – Completely destroy Work Items. (Query Results)
Export2Word.vsix – Export Work Items to a Microsoft Word document. (Team Explorer)
ExtendedMerge.vsix / ExtendedMerge2012.vsix – Provide a workaround for several merge features not implemented by TFS 2010/2012. (Source Control Explorer, Query Results)

  1. TFS merge leads to a bulk check-in operation that puts files from all previous changesets into one big merge changeset.
  2. TFS allows only consecutive changesets to be cherry-picked by a merge operation.
  3. TFS doesn’t allow choosing changesets for a cherry-pick merge by selecting work items.
  4. The TFS merge dialog doesn’t have “force” and “baseless” options.

GetPreview.vsix – Emulate the command-line task tf.exe get /recursive /preview “itemspec” and write its output to the Output window. (Source Control Explorer)
ModifyCheckinDate.vsix – 1. Update the modification time for checked-out files to their latest check-in time. 2. Directly modify a changeset’s check-in time in the TFS database. (Source Control Explorer, History)

Custom Work Item tracking controls

WITDataGridView.dll [under development] – Envelope for the standard WindowsForms DataGridView control. This control serializes table data to XML format and saves it as a Work Item attachment, with a default file size limit of 2 MB.

Custom build workflow activities

QueueNewBuild.dll [under development] – Contains the activities QueueNewBuild, QueueNewBuildBegin and QueueNewBuildEnd. It works like LoadAndInvokeWorkflow but is intended exclusively for build workflows. QueueNewBuildBegin, when put inside a ParallelSequence, can run builds simultaneously in separate threads based on free build Agents.

Installation

For installing any one of the extensions, double-click on the VSIX file and follow the installer instructions.


ChangeLinkTypes

This extension modifies link types between the Work Items returned from a Work Item query.
To execute it, select any query under the Work Items node in the Team Explorer window and click on “Change Query Link Types”.


After that, in the next dialog, choose the original and the new link types.


DestroyWorkItems

This extension completely destroys (not just closes) Work Items returned from a Work Item query.
To execute it, select a number of Work Items in the Query Results window and click on “Destroy Work Items”.


Export2Word

This extension exports Work Items returned from a Work Item query to a Microsoft Word document in paragraph style.
To execute it, select any query under the Work Items node in the Team Explorer window and click on “Export to Microsoft Word”.


Exported document sample

ExtendedMerge

The Extended Merge extension provides a workaround for several merge features not implemented by TFS:

  1. TFS merge leads to a bulk check-in operation that puts files from all previous changesets into one big merge changeset.
  2. TFS allows only consecutive changesets to be cherry-picked by a merge operation.
  3. TFS doesn’t allow choosing changesets for a cherry-pick merge by selecting work items.
  4. The TFS merge dialog doesn’t have “force” and “baseless” options.

Initializing ExtendedMerge extension

The list of merge candidates can be obtained in two ways:

  1. From the Source Control Explorer window.
    In this case all history changesets for a specific server path are parsed.
  2. From the Query Results window.
    In this case all changesets linked to the selected Work Items are parsed.

After the initialization stage, the extension opens a Visual Studio tool window pane with a grid that shows useful information about the parsed changesets:

  • Checked/unchecked status.
  • Work Item ID.
  • Changeset ID.
  • Changeset check-in date.
  • Changeset creator.
  • The change types this changeset contains.
  • Possible merge options:
    • None – merge is impossible (not to be confused with the “discard” command-line option)
    • Baseless – baseless merge
    • Force – force merge
    • Candidate – regular merge
  • Merge source path (can be modified).
  • Changeset comment field.


The Merge Target Location field contains the path to a folder or a branch in source control.

Merge types are calculated based on shared Merge Target Location path and an individual changeset Source Path.

Running ExtendedMerge extension

The extension runs all actions from toolbar buttons (some of them are also duplicated on the grid context menu).


Menu item name – Menu item description

Link to Work Items – When checking in merge results, link the new changesets to work items the same way as they were linked in the original changesets.
Normal Merge – Do a regular merge based on merge candidates.
Conservative Merge – Use the “Conservative” merge option. It produces more merge conflicts.
Refresh – Refill changeset merge types.
Merge – Do the merge.
Resolve – Show merge conflicts.
Changeset details – Show changeset details.
Work Item details – Show Work Item details.
Navigate to Server Path – Navigate to the server path in Source Control Explorer.
Edit Server Path – Edit server paths (one or multiple).
Copy to Clipboard – Copy selected changeset details to the clipboard.
Mark All Items – Check all grid items.
Unmark All Items – Uncheck all grid items.

Resolving merge conflicts

Merge conflicts can occur during a merge operation.
In this case, click on the OK button and resolve the conflicts with the standard TFS Resolve Conflicts dialog.


GetPreview

This extension emulates the TFS command-line operation: tf.exe get /recursive /preview “itemspec”
To execute it, select a path in the Source Control Explorer window and click on “Emulate Get-Preview”.

The results are printed to the Output window.


ModifyCheckinDate

This extension consists of 2 parts:

  1. Updates the modification time for checked-out files to their latest check-in time.
    It is accessible from the Source Control window & History window.
  2. Directly modifies a changeset’s check-in time in the TFS database.
    It actually runs the SQL statement: UPDATE tbl_Changeset SET CreationDate='?' WHERE ChangeSetId='?'
    It is accessible from the History window.


Release Notes

  • ExtendedMerge uses a number of internal Microsoft classes, which can possibly lead to tool malfunction in future TFS versions:
    • Show browse server folder dialog – Microsoft.TeamFoundation.Build.Controls.VersionControlHelper.ShowServerFolderBrowser
    • Show merge progress dialog – Microsoft.TeamFoundation.VersionControl.Controls.ProgressMerge
    • Show resolve conflicts dialog – Microsoft.TeamFoundation.Client.Arguments, Microsoft.TeamFoundation.VersionControl.CommandLine.CommandResolve
  • GetPreview uses the environment variable %VS100COMNTOOLS%

How To : SAP Integration with .Net 4.0 (SAP Connection Manager) & SharePoint

This is a simple C# class library project to connect .NET applications with SAP.


 

This component internally implements SAP .NET Connector 3.0. The SAP .NET Connector is a development environment that enables communication between the Microsoft .NET platform and SAP systems.

This connector supports RFCs and Web services, and allows you to write different applications such as Web form, Windows form, or console applications in the Microsoft Visual Studio .NET.

With the SAP .NET Connector, you can use all common programming languages, such as Visual Basic .NET, C#, or Managed C++.

Features
Using the SAP .NET Connector you can:

Write .NET Windows and Web form applications that have access to SAP business objects (BAPIs).

Develop client applications for the SAP Server.

Write RFC server applications that run in a .NET environment and can be started from the SAP system.

Following are the steps to configure this utility in your project.

Download and extract the attached file and place it on your machine. This package contains 3 libraries:

SAPConnectionManager.dll
sapnco.dll
sapnco_utils.dll

Now go to your project and add references to all three libraries. sapnco.dll and sapnco_utils.dll are the built-in libraries used by the SAP .NET Connector. SAPConnectionManager.dll is the main component which provides the connection between .NET and SAP.

Once the above steps are complete, you need to make certain entries related to the SAP server in your configuration file. Here are the sample entries that you have to maintain in your own project; you only need to change the values, while the keys remain unchanged.

<appSettings>
  <add key="ServerHost" value="127.0.0.1"/>
  <add key="SystemNumber" value="00"/>
  <add key="User" value="sample"/>
  <add key="Password" value="pass"/>
  <add key="Client" value="50"/>
  <add key="Language" value="EN"/>
  <add key="PoolSize" value="5"/>
  <add key="PeakConnectionsLimit" value="10"/>
  <add key="IdleTimeout" value="600"/>
</appSettings>
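
Internally, SAPConnectionManager.dll presumably maps these settings onto an SAP .NET Connector 3.0 destination configuration. A minimal sketch of what such a class can look like (the SAPSystemConnect name matches the test code below; the rest is a plausible reconstruction, not the shipped source):

     using System.Configuration;
     using SAP.Middleware.Connector;

     public class SAPSystemConnect : IDestinationConfiguration
     {
         // Supplies the connection parameters for a named destination.
         public RfcConfigParameters GetParameters(string destinationName)
         {
             if (destinationName != "Dev") return null; // "Dev" is requested below

             var parameters = new RfcConfigParameters();
             parameters.Add(RfcConfigParameters.AppServerHost, ConfigurationManager.AppSettings["ServerHost"]);
             parameters.Add(RfcConfigParameters.SystemNumber, ConfigurationManager.AppSettings["SystemNumber"]);
             parameters.Add(RfcConfigParameters.User, ConfigurationManager.AppSettings["User"]);
             parameters.Add(RfcConfigParameters.Password, ConfigurationManager.AppSettings["Password"]);
             parameters.Add(RfcConfigParameters.Client, ConfigurationManager.AppSettings["Client"]);
             parameters.Add(RfcConfigParameters.Language, ConfigurationManager.AppSettings["Language"]);
             parameters.Add(RfcConfigParameters.PoolSize, ConfigurationManager.AppSettings["PoolSize"]);
             return parameters;
         }

         public bool ChangeEventsSupported() { return false; }
         public event RfcDestinationManager.ConfigurationChangeHandler ConfigurationChanged;
     }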

To test this component, create one Windows application. Add references to sapnco.dll, sapnco_utils.dll, and SAPConnectionManager.dll in your project.

Paste the code below into your form load event:

SAPSystemConnect sapCfg = new SAPSystemConnect();
RfcDestinationManager.RegisterDestinationConfiguration(sapCfg);
RfcDestination rfcDest = null;
rfcDest = RfcDestinationManager.GetDestination("Dev");

That’s it. Now you are successfully connected to your SAP server. Next, you need to call SAP business objects (BAPIs), extract the data, and store it in a DataSet or list; a rough sketch follows.
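
The sketch below invokes a BAPI through the destination obtained above; BAPI_CUSTOMER_GETLIST and its parameter and table names are illustrative, so substitute the business object your scenario needs:

     // Look up the function metadata from the destination's repository and call it.
     RfcRepository repository = rfcDest.Repository;
     IRfcFunction function = repository.CreateFunction("BAPI_CUSTOMER_GETLIST");
     function.SetValue("MAXROWS", 100); // parameter name is an assumption
     function.Invoke(rfcDest);

     // Read the result table row by row.
     IRfcTable addresses = function.GetTable("ADDRESSDATA");
     foreach (IRfcStructure row in addresses)
     {
         Console.WriteLine("{0} - {1}", row.GetString("CUSTOMER"), row.GetString("NAME"));
     }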

Demo Code available on request!!

NEW MS TEST MANAGER TOOL AVAILABLE!!


Copying and Cloning Test Suites and Tests Across Team Collections

MTM Copy can copy and clone Test Suites and Tests across Team Projects and Team Collections.

Microsoft introduced a Clone feature in Microsoft Test Manager 2012, but you can only clone within the SAME Team Project: http://msdn.microsoft.com/en-us/library/hh543843.aspx

  1. Start with an empty target Team Project.
  2. Open the MTM Copy Tool and specify the Source and Target Team Projects; they can be in different Collections.
  3. On the mapping panel, select the Test Plans, Test Suites and Test Cases you wish to clone.
  4. Create a Test Plan on the Target Project, or select an existing Test Plan.
  5. Click “Start Migration”, and you’re done!
  6. Each Test Case that was copied is saved in the Completed Items Mapping; this prevents copying the same Test Case twice and avoids duplicates.

 

How to: Create a provider-hosted app for SharePoint to access SAP data via SAP Gateway for Microsoft

You can create an app for SharePoint that reads and writes SAP data, and optionally reads and writes SharePoint data, by using SAP Gateway for Microsoft and the Azure AD Authentication Library for .NET. This article provides the procedures for how you can design the app for SharePoint to get authorized access to SAP.



The following are prerequisites to the procedures in this article:


Code sample: SharePoint 2013: Using the SAP Gateway to Microsoft in an app for SharePoint

OAuth 2.0 in Azure AD enables applications to access multiple resources hosted by Microsoft Azure, and SAP Gateway for Microsoft is one of them. With OAuth 2.0, applications, in addition to users, are security principals. Application principals require authentication and authorization to protected resources in addition to (and sometimes instead of) users. The process involves an OAuth “flow” in which the application, which can be an app for SharePoint, obtains an access token (and refresh token) that is accepted by all of the Microsoft Azure-hosted services and applications that are configured to use Azure AD as an OAuth 2.0 authorization server. The process is very similar to the way that the remote components of a provider-hosted app for SharePoint get authorization to SharePoint, as described in Creating apps for SharePoint that use low-trust authorization and its child articles. However, the ACS authorization system uses Microsoft Azure Access Control Service (ACS) as the trusted token issuer rather than Azure AD.

Tip
If your app for SharePoint accesses SharePoint in addition to accessing SAP Gateway for Microsoft, then it will need to use both systems: Azure AD to get an access token to SAP Gateway for Microsoft and the ACS authorization system to get an access token to SharePoint. The tokens from the two sources are not interchangeable. For more information, see Optionally, add SharePoint access to the ASP.NET application.

For a detailed description and diagram of the OAuth flow used by OAuth 2.0 in Azure AD, see Authorization Code Grant Flow. (For a similar description, and a diagram, of the flow for accessing SharePoint, see the steps in the Context Token flow.)
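
In code, the heart of that flow is exchanging the authorization code that Azure AD posts back to the redirect URI for an access token. A condensed sketch with ADAL (the placeholder values correspond to the configuration steps later in this article):

     using Microsoft.IdentityModel.Clients.ActiveDirectory;

     // Exchange the authorization code for tokens; the resource is the
     // APP ID URI of SAP Gateway for Microsoft.
     var authContext = new AuthenticationContext(
         "https://login.windows.net/common/<O365_domain>.onmicrosoft.com");
     var clientCredential = new ClientCredential("<client ID>", "<client key>");

     AuthenticationResult result = authContext.AcquireTokenByAuthorizationCode(
         authorizationCode, // returned by Azure AD to the AppRedirectUrl page
         new Uri("https://localhost:44322/Pages/Default.aspx"),
         clientCredential,
         "http://<SAP_gateway_domain>.cloudapp.net/");

     string accessToken = result.AccessToken; // sent as a Bearer token to SAP Gateway for Microsoft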

Create the Visual Studio solution

  1. Create an App for SharePoint project in Visual Studio with the following steps. (The continuing example in this article assumes you are using C#; but you can start an app for SharePoint project in the Visual Basic section of the new project templates as well.)
    1. In the New app for SharePoint wizard, name the project and click OK. For the continuing example, use SAP2SharePoint.
    2. Specify the domain URL of your Office 365 Developer Site (including a final forward slash) as the debugging site; for example, https://<O365_domain>.sharepoint.com/. Specify Provider-hosted as the app type. Click Next.
    3. Choose a project type. For the continuing example, choose ASP.NET Web Forms Application. (You can also make ASP.NET MVC applications that access SAP Gateway for Microsoft.) Click Next.
    4. Choose Azure ACS as the authentication system. (Your app for SharePoint will use this system if it accesses SharePoint. It does not use this system when it accesses SAP Gateway for Microsoft.) Click Finish.
  2. After the project is created, you are prompted to login to the Office 365 account. Use the credentials of an account administrator; for example Bob@<O365_domain>.onmicrosoft.com.
  3. There are two projects in the Visual Studio solution; the app for SharePoint proper project and an ASP.NET web forms project. Add the Active Directory Authentication Library (ADAL) package to the ASP.NET project with these steps:
    1. Right-click the References folder in the ASP.NET project (named SAP2SharePointWeb in the continuing example) and select Manage NuGet Packages.
    2. In the dialog that opens, select Online on the left. Enter Microsoft.IdentityModel.Clients.ActiveDirectory in the search box.
    3. When the ADAL library appears in the search results, click the Install button beside it, and accept the license when prompted.
  4. Add the Json.net package to the ASP.NET project with these steps:
    1. Enter Json.net in the search box. If this produces too many hits, try searching on Newtonsoft.json.
    2. When Json.net appears in the search results, click the Install button beside it.
  5. Click Close.

Register your web application with Azure AD

  1. Login into the Azure Management portal with your Azure administrator account.
    Note
    For security purposes, we recommend against using an administrator account when developing apps.
  2. Choose Active Directory on the left side.
  3. Click on your directory.
  4. Choose APPLICATIONS (on the top navigation bar).
  5. Choose Add on the toolbar at the bottom of the screen.
  6. On the dialog that opens, choose Add an application my organization is developing.
  7. On the ADD APPLICATION dialog, give the application a name. For the continuing example, use ContosoAutomobileCollection.
  8. Choose Web Application And/Or Web API as the application type, and then click the right arrow button.
  9. On the second page of the dialog, use the SSL debugging URL from the ASP.NET project in the Visual Studio solution as the SIGN-ON URL. You can find the URL using the following steps. (You need to register the app initially with the debugging URL so that you can run the Visual Studio debugger (F5). When your app is ready for staging, you will re-register it with its staging Azure Web Site URL. Modify the app and stage it to Azure and Office 365.)
    1. Highlight the ASP.NET project in Solution Explorer.
    2. In the Properties window, copy the value of the SSL URL property. An example is https://localhost:44300/.
    3. Paste it into the SIGN-ON URL on the ADD APPLICATION dialog.
  10. For the APP ID URI, give the application a unique URI, such as the application name appended to the end of the SSL URL; for example https://localhost:44300/ContosoAutomobileCollection.
  11. Click the checkmark button. The Azure dashboard for the application opens with a success message.
  12. Choose CONFIGURE on the top of the page.
  13. Scroll to the CLIENT ID and make a copy of it. You will need it for a later procedure.
  14. In the keys section, create a key. It won’t appear initially. Click SAVE at the bottom of the page and the key will be visible. Make a copy of it. You will need it for a later procedure.
  15. Scroll to permissions to other applications and select your SAP Gateway for Microsoft service application.
  16. Open the Delegated Permissions drop down list and enable the boxes for the permissions to the SAP Gateway for Microsoft service that your app for SharePoint will need.
  17. Click SAVE at the bottom of the screen.

Configure the application to communicate with Azure AD

  1. In Visual Studio, open the web.config file in the ASP.NET project.
  2. In the <appSettings> section, the Office Developer Tools for Visual Studio have added elements for the ClientID and ClientSecret of the app for SharePoint. (These are used in the Azure ACS authorization system if the ASP.NET application accesses SharePoint. You can ignore them for the continuing example, but do not delete them. They are required in provider-hosted apps for SharePoint even if the app is not accessing SharePoint data. Their values will change each time you press F5 in Visual Studio.) Add the following two elements to the section. These are used by the application to authenticate to Azure AD. (Remember that applications, as well as users, are security principals in OAuth-based authentication and authorization systems.)
    <add key="ida:ClientID" value="" />
    <add key="ida:ClientKey" value="" />
    
  3. Insert the client ID that you saved from your Azure AD directory in the earlier procedure as the value of the ida:ClientID key. Leave the casing and punctuation exactly as you copied it and be careful not to include a space character at the beginning or end of the value. For the ida:ClientKey key use the key that you saved from the directory. Again, be careful not to introduce any space characters or change the value in any way. The <appSettings> section should now look something like the following. (The ClientId value may have a GUID or an empty string.)
    <appSettings>
      <add key="ClientId" value="" />
      <add key="ClientSecret" value="LypZu2yVajlHfPLRn5J2hBrwCk5aBOHxE4PtKCjIQkk=" />
      <add key="ida:ClientID" value="4da99afe-08b5-4bce-bc66-5356482ec2df" />
      <add key="ida:ClientKey" value="URwh/oiPay/b5jJWYHgkVdoE/x7gq3zZdtcl/cG14ss=" />
    </appSettings>
    
    Note
    Your application is known to Azure AD by the “localhost” URL you used to register it. The client ID and client key are associated with that identity. When you are ready to stage your application to an Azure Web Site, you will re-register it with a new URL.
  4. Still in the appSettings section, add an Authority key and set its value to the Office 365 domain (some_domain.onmicrosoft.com) of your organizational account. In the continuing example, the organizational account is Bob@<O365_domain>.onmicrosoft.com, so the authority is <O365_domain>.onmicrosoft.com.
    <add key="Authority" value="<O365_domain>.onmicrosoft.com" />
    
  5. Still in the appSettings section, add an AppRedirectUrl key and set its value to the page that the user’s browser should be redirected to after the ASP.NET app has obtained an authorization code from Azure AD. Usually, this is the same page that the user was on when the call to Azure AD was made. In the continuing example, use the SSL URL value with “/Pages/Default.aspx” appended to it as shown below. (This is another value that you will change for staging.)
    <add key="AppRedirectUrl" value="https://localhost:44322/Pages/Default.aspx" />
    
  6. Still in the appSettings section, add a ResourceUrl key and set its value to the APP ID URI of SAP Gateway for Microsoft (not the APP ID URI of your ASP.NET application). Obtain this value from the SAP Gateway for Microsoft administrator. The following is an example.
    <add key="ResourceUrl" value="http://<SAP_gateway_domain>.cloudapp.net/" />
    

    The <appSettings> section should now look something like this:

    <appSettings>
      <add key="ClientId" value="06af1059-8916-4851-a271-2705e8cf53c6" />
      <add key="ClientSecret" value="LypZu2yVajlHfPLRn5J2hBrwCk5aBOHxE4PtKCjIQkk=" />
      <add key="ida:ClientID" value="4da99afe-08b5-4bce-bc66-5356482ec2df" />
      <add key="ida:ClientKey" value="URwh/oiPay/b5jJWYHgkVdoE/x7gq3zZdtcl/cG14ss=" />
      <add key="Authority" value="<O365_domain>.onmicrosoft.com" />
      <add key="AppRedirectUrl" value="https://localhost:44322/Pages/Default.aspx" />
      <add key="ResourceUrl" value="http://<SAP_gateway_domain>.cloudapp.net/" />
    </appSettings>
    
  7. Save and close the web.config file.
    Tip
    Do not leave the web.config file open when you run the Visual Studio debugger (F5). The Office Developer Tools for Visual Studio change the ClientId value (not the ida:ClientID) every time you press F5. This requires you to respond to a prompt to reload the web.config file, if it is open, before debugging can execute.

Add a helper class to authenticate to Azure AD

  1. Right-click the ASP.NET project and use the Visual Studio item adding process to add a new class file to the project named AADAuthHelper.cs.
  2. Add the following using statements to the file.
    using Microsoft.IdentityModel.Clients.ActiveDirectory;
    using System.Configuration;
    using System.Web.UI;
    
    
  3. Change the access keyword from public to internal and add the static keyword to the class declaration.
    internal static class AADAuthHelper
    
  4. Add the following fields to the class. These fields store information that your ASP.NET application uses to get access tokens from AAD.
    private static readonly string _authority = ConfigurationManager.AppSettings["Authority"];
    private static readonly string _appRedirectUrl = ConfigurationManager.AppSettings["AppRedirectUrl"];
    private static readonly string _resourceUrl = ConfigurationManager.AppSettings["ResourceUrl"];     
            
    private static readonly ClientCredential _clientCredential = new ClientCredential(
                               ConfigurationManager.AppSettings["ida:ClientID"],
                               ConfigurationManager.AppSettings["ida:ClientKey"]);
    
    private static readonly AuthenticationContext _authenticationContext = 
                new AuthenticationContext("https://login.windows.net/common/" + 
                                          ConfigurationManager.AppSettings["Authority"]);
    
  5. Add the following property to the class. This property holds the URL to the Azure AD login screen.
    private static string AuthorizeUrl
    {
        get
        {
            return string.Format("https://login.windows.net/{0}/oauth2/authorize?response_type=code&redirect_uri={1}&client_id={2}&state={3}",
                _authority,
                _appRedirectUrl,
                _clientCredential.OwnerId,
                Guid.NewGuid().ToString());
        }
    }
    
    
  6. Add the following properties to the class. These cache the access and refresh tokens and check their validity.
    public static Tuple<string, DateTimeOffset> AccessToken
    {
        get
        {
            return HttpContext.Current.Session["AccessTokenWithExpireTime-" + _resourceUrl]
                   as Tuple<string, DateTimeOffset>;
        }

        set { HttpContext.Current.Session["AccessTokenWithExpireTime-" + _resourceUrl] = value; }
    }
    
    private static bool IsAccessTokenValid
    {
       get 
       { 
           return AccessToken != null &&
           !string.IsNullOrEmpty(AccessToken.Item1) &&
           AccessToken.Item2 > DateTimeOffset.UtcNow;
       }
    }
    
    private static string RefreshToken
    {
        // Note: the getter and setter must use the same session key.
        get { return HttpContext.Current.Session["RefreshToken-" + _resourceUrl] as string; }
        set { HttpContext.Current.Session["RefreshToken-" + _resourceUrl] = value; }
    }
    
    private static bool IsRefreshTokenValid
    {
        get { return !string.IsNullOrEmpty(RefreshToken); }
    }
    
    
  7. Add the following methods to the class. These are used to check the validity of the authorization code and to obtain an access token from Azure AD by using either an authentication code or a refresh token.
    private static bool IsAuthorizationCodeNotNull(string authCode)
    {
        return !string.IsNullOrEmpty(authCode);
    }
    
    private static Tuple<Tuple<string,DateTimeOffset>,string> AcquireTokensUsingAuthCode(string authCode)
    {
        var authResult = _authenticationContext.AcquireTokenByAuthorizationCode(
                    authCode,
                    new Uri(_appRedirectUrl),
                    _clientCredential,
                    _resourceUrl);
    
        return new Tuple<Tuple<string, DateTimeOffset>, string>(
                    new Tuple<string, DateTimeOffset>(authResult.AccessToken, authResult.ExpiresOn), 
                    authResult.RefreshToken);
    }
    
    private static Tuple<string, DateTimeOffset> RenewAccessTokenUsingRefreshToken()
    {
        var authResult = _authenticationContext.AcquireTokenByRefreshToken(
                             RefreshToken,
                             _clientCredential.OwnerId,
                             _clientCredential,
                             _resourceUrl);
    
        return new Tuple<string, DateTimeOffset>(authResult.AccessToken, authResult.ExpiresOn);
    }
    
    
  8. Add the following method to the class. It is called from the ASP.NET code behind to obtain a valid access token before a call is made to get SAP data via SAP Gateway for Microsoft.
    internal static void EnsureValidAccessToken(Page page)
    {
        if (IsAccessTokenValid) 
        {
            return;
        }
        else if (IsRefreshTokenValid) 
        {
            AccessToken = RenewAccessTokenUsingRefreshToken();
            return;
        }
        else if (IsAuthorizationCodeNotNull(page.Request.QueryString["code"]))
        {
            Tuple<Tuple<string, DateTimeOffset>, string> tokens = null;
            try
            {
                tokens = AcquireTokensUsingAuthCode(page.Request.QueryString["code"]);
            }
            catch 
            {
                page.Response.Redirect(AuthorizeUrl);
            }
            AccessToken = tokens.Item1;
            RefreshToken = tokens.Item2;
            return;
        }
        else
        {
            page.Response.Redirect(AuthorizeUrl);
        }
    }
    
Tip
The AADAuthHelper class has only minimal error handling. For a robust, production-quality app for SharePoint, add more error handling as described in the MSDN topic Error Handling in OAuth 2.0.
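
As a small illustration of one check you might add (a sketch only, not the full guidance from that topic): when an OAuth 2.0 authorization request fails, the authorization server redirects back with error and error_description query parameters, which you can surface instead of silently redirecting again. The TryGetOAuthError helper name below is hypothetical.

    // Hypothetical addition to AADAuthHelper: detect the OAuth 2.0 error
    // parameters that the authorization server appends to the redirect
    // when an authorization request fails.
    private static bool TryGetOAuthError(Page page, out string errorMessage)
    {
        string error = page.Request.QueryString["error"];
        string description = page.Request.QueryString["error_description"];

        errorMessage = string.IsNullOrEmpty(error)
            ? null
            : error + ": " + (description ?? "(no description)");
        return errorMessage != null;
    }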

Create data model classes

  1. Create one or more classes to model the data that your app gets from SAP. In the continuing example, there is just one data model class. Right-click the ASP.NET project and use the Visual Studio item adding process to add a new class file to the project named Automobile.cs.
  2. Add the following code to the body of the class:
    public string Price;
    public string Brand;
    public string Model;
    public int Year;
    public string Engine;
    public int MaxPower;
    public string BodyStyle;
    public string Transmission;
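
Assembled, the file looks like the following sketch (the field list is exactly the one above; the class wrapper is shown only for completeness). Json.NET later binds the OData “results” entries to these public fields by matching names when ToObject<List<Automobile>>() is called.

    // Sketch of the complete Automobile.cs data model class. Json.NET maps
    // each JSON property in the SAP OData "results" array to the public
    // field with the same name.
    public class Automobile
    {
        public string Price;
        public string Brand;
        public string Model;
        public int Year;
        public string Engine;
        public int MaxPower;
        public string BodyStyle;
        public string Transmission;
    }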
    

Add code behind to get data from SAP via the SAP Gateway for Microsoft

  1. Open the Default.aspx.cs file and add the following using statements.
    using System.Net;
    using Newtonsoft.Json.Linq;
    
  2. Add a const declaration to the Default class whose value is the base URL of the SAP OData endpoint that the app will be accessing. The following is an example:
    private const string SAP_ODATA_URL = @"https://<SAP_gateway_domain>.cloudapp.net:8081/perf/sap/opu/odata/sap/ZCAR_POC_SRV/";
    
  3. The Office Developer Tools for Visual Studio have added a Page_PreInit method and a Page_Load method. Comment out the code inside the Page_Load method and comment out the whole Page_PreInit method. This code is not used in this sample. (If your app for SharePoint is going to access SharePoint, you will restore this code later. See Optionally, add SharePoint access to the ASP.NET application.)
  4. Add the following line to the top of the Page_Load method. This eases debugging: your ASP.NET application communicates with SAP Gateway for Microsoft over SSL (HTTPS), but your “localhost:port” server is not configured to trust the certificate of SAP Gateway for Microsoft. Without this line of code, you would get an invalid certificate warning before Default.aspx opens. Some browsers allow you to click past this error, but some will not let you open Default.aspx at all.
    ServicePointManager.ServerCertificateValidationCallback = (s, cert, chain, errors) => true;
    
    Important
    Delete this line when you are ready to deploy the ASP.NET application to staging. See Modify the app and stage it to Azure and Office 365.
  5. Add the following code to the Page_Load method. The string you pass to the GetSAPData method is an OData query.
    if (!IsPostBack)
    {
        GetSAPData("DataCollection?$top=3");
    }
    
    
  6. Add the following method to the Default class. This method first ensures that the cache for the access token has a valid access token in it that has been obtained from Azure AD. It then creates an HTTP GET Request that includes the access token and sends it to the SAP OData endpoint. The result is returned as a JSON object that is converted to a .NET List object. Three properties of the items are used in an array that is bound to the DataListView.
    private void GetSAPData(string oDataQuery)
    {
        AADAuthHelper.EnsureValidAccessToken(this);
    
        using (WebClient client = new WebClient())
        {
            client.Headers[HttpRequestHeader.Accept] = "application/json";
            client.Headers[HttpRequestHeader.Authorization] = "Bearer " + AADAuthHelper.AccessToken.Item1;
            var jsonString = client.DownloadString(SAP_ODATA_URL + oDataQuery);
            var jsonValue = JObject.Parse(jsonString)["d"]["results"];
            var dataCol = jsonValue.ToObject<List<Automobile>>();
    
            var dataList = dataCol.Select((item) => {
                return item.Brand + " " + item.Model + " " + item.Price;
                }).ToArray();
    
            DataListView.DataSource = dataList;
            DataListView.DataBind();
        }
    }
    
    

Create the user interface

  1. Open the Default.aspx file and add the following markup to the form of the page:
    <div>
      <h3>Data from SAP via SAP Gateway for Microsoft</h3>
    
      <asp:ListView runat="server" ID="DataListView">
        <ItemTemplate>
          <tr runat="server">
            <td runat="server">
              <asp:Label ID="DataLabel" runat="server"
                Text="<%# Container.DataItem.ToString()%>" /><br />
            </td>
          </tr>
        </ItemTemplate>
      </asp:ListView>
    </div>
    
  2. Optionally, give the web page the “look ‘n’ feel” of a SharePoint page with the SharePoint Chrome Control and the host SharePoint website’s style sheet.

Test the app with F5 in Visual Studio

  1. Press F5 in Visual Studio.
  2. The first time that you use F5, you may be prompted to login to the Developer Site that you are using. Use the site administrator credentials. In the continuing example, it is Bob@<O365_domain>.onmicrosoft.com.
  3. The first time that you use F5, you are prompted to grant permissions to the app. Click Trust It.
  4. After a brief delay while the access token is being obtained, the Default.aspx page opens. Verify that the SAP data appears.

Optionally, add SharePoint access to the ASP.NET application


Of course, your app for SharePoint doesn’t have to expose only SAP data in a web page launched from SharePoint. It can also create, read, update, and delete (CRUD) SharePoint data. Your code behind can do this using either the SharePoint client object model (CSOM) or the REST APIs of SharePoint. The CSOM is deployed as a pair of assemblies that the Office Developer Tools for Visual Studio automatically included in the ASP.NET project and set to Copy Local in Visual Studio so that they are included in the ASP.NET application package.

For information about using CSOM, start with How to: Complete basic operations using SharePoint 2013 client library code. For information about using the REST APIs, start with Understanding and Using the SharePoint 2013 REST Interface.

Regardless of whether you use CSOM or the REST APIs to access SharePoint, your ASP.NET application must get an access token to SharePoint, just as it does to SAP Gateway for Microsoft. See Understand authentication and authorization to SAP Gateway for Microsoft and SharePoint above. The procedure below provides some basic guidance about how to do this, but we recommend that you first read the OAuth tokens articles referenced in the steps below.

  1. Open the Default.aspx.cs file and uncomment the Page_PreInit method. Also uncomment the code that the Office Developer Tools for Visual Studio added to the Page_Load method.
  2. If your app for SharePoint is going to access SharePoint data, then you have to cache the SharePoint context token that is POSTed to the Default.aspx page when the app is launched in SharePoint. This is to ensure that the SharePoint context token is not lost when the browser is redirected following the Azure AD authentication. (You have several options for how to cache this context. See OAuth tokens.) The Office Developer Tools for Visual Studio add a SharePointContext.cs file to the ASP.NET project that does most of the work. To use the session cache, you simply add the following code inside the “if (!IsPostBack)” block before the code that calls out to SAP Gateway for Microsoft:
    if (HttpContext.Current.Session["SharePointContext"] == null) 
    {
         HttpContext.Current.Session["SharePointContext"]
            = SharePointContextProvider.Current.GetSharePointContext(Context);
    }
    
  3. The SharePointContext.cs file makes calls to another file that the Office Developer Tools for Visual Studio added to the project: TokenHelper.cs. This file provides most of the code needed to obtain and use access tokens for SharePoint. However, it does not provide any code for renewing an expired access token or an expired refresh token, nor does it contain any token caching code. For a production-quality app for SharePoint, you need to add such code. The caching logic in the preceding step is an example. Your code should also cache the access token and reuse it until it expires, and use the refresh token to get a new access token when it does. We recommend that you read OAuth tokens. (A rough sketch of one possible approach appears after this list.)
  4. Add the data calls to SharePoint using either CSOM or REST. The following example is a modification of CSOM code that Office Developer Tools for Visual Studio adds to the Page_Load method. In this example, the code has been moved to a separate method and it begins by retrieving the cached context token.
    private void GetSharePointTitle()
    {
        var spContext = HttpContext.Current.Session["SharePointContext"] as SharePointContext;
        using (var clientContext = spContext.CreateUserClientContextForSPHost())
        {
            clientContext.Load(clientContext.Web, web => web.Title);
            clientContext.ExecuteQuery();
            SharePointTitle.Text = "SharePoint web site title is: " + clientContext.Web.Title;
        }
    }
    
  5. Add UI elements to render the SharePoint data. The following shows the HTML control that is referenced in the preceding method:
    <h3>SharePoint title</h3><asp:Label ID="SharePointTitle" runat="server"></asp:Label><br />
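
The following is a rough sketch of the caching and renewal logic referred to in step 3 above, assuming the stock TokenHelper.cs that the Office Developer Tools generate. The session key, the five-minute expiry buffer, and the GetSharePointAccessToken helper name are all hypothetical choices, not part of the generated files.

    // Hypothetical helper: cache the SharePoint access token in session state
    // and use the refresh token carried by the context token to renew it when
    // it is close to expiry.
    private string GetSharePointAccessToken(SharePointContextToken contextToken, Uri spHostUrl)
    {
        var cached = Session["SPAccessToken"] as Tuple<string, DateTime>;
        if (cached != null && DateTime.UtcNow < cached.Item2.AddMinutes(-5))
        {
            return cached.Item1; // The cached token is still comfortably valid.
        }

        // TokenHelper exchanges the refresh token with ACS for a new access token.
        OAuth2AccessTokenResponse response = TokenHelper.GetAccessToken(
            contextToken.RefreshToken,
            TokenHelper.SharePointPrincipal,
            spHostUrl.Authority,
            TokenHelper.GetRealmFromTargetUrl(spHostUrl));

        Session["SPAccessToken"] = new Tuple<string, DateTime>(
            response.AccessToken, response.ExpiresOn);
        return response.AccessToken;
    }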
    
Note
While you are debugging the app for SharePoint, the Office Developer Tools for Visual Studio re-register it with Azure ACS each time you press F5 in Visual Studio. When you stage the app for SharePoint, you have to give it a long-term registration. See the section Modify the app and stage it to Azure and Office 365.

Modify the app and stage it to Azure and Office 365


When you have finished debugging the app for SharePoint using F5 in Visual Studio, you need to deploy the ASP.NET application to an actual Azure Web Site.

Create the Azure Web Site

  1. In the Microsoft Azure portal, open WEB SITES on the left navigation bar.
  2. Click NEW at the bottom of the page and on the NEW dialog select WEB SITE | QUICK CREATE.
  3. Enter a domain name and click CREATE WEB SITE. Make a copy of the new site’s URL. It will have the form my_domain.azurewebsites.net.

Modify the code and markup in the application

  1. In Visual Studio, remove the line ServicePointManager.ServerCertificateValidationCallback = (s, cert, chain, errors) => true; from the Default.aspx.cs file.
  2. Open the web.config file of the ASP.NET project and change the domain part of the value of the AppRedirectUrl key in the appSettings section to the domain of the Azure Web Site. For example, change <add key="AppRedirectUrl" value="https://localhost:44322/Pages/Default.aspx" /> to <add key="AppRedirectUrl" value="https://my_domain.azurewebsites.net/Pages/Default.aspx" />.
  3. Right-click the AppManifest.xml file in the app for SharePoint project and select View Code.
  4. In the StartPage value, replace the string ~remoteAppUrl with the full domain of the Azure Web Site including the protocol; for example https://my_domain.azurewebsites.net. The entire StartPage value should now be: https://my_domain.azurewebsites.net/Pages/Default.aspx. (Usually, the StartPage value is exactly the same as the value of the AppRedirectUrl key in the web.config file.)

Modify the AAD registration and register the app with ACS

  1. Log in to the Azure Management portal with your Azure administrator account.
  2. Choose Active Directory on the left side.
  3. Click on your directory.
  4. Choose APPLICATIONS (on the top navigation bar).
  5. Open the application you created. In the continuing example, it is ContosoAutomobileCollection.
  6. For each of the following values, change the “localhost:port” part of the value to the domain of your new Azure Web Site:
    • SIGN-ON URL
    • APP ID URI
    • REPLY URL

    For example, if the APP ID URI is https://localhost:44304/ContosoAutomobileCollection, change it to https://<my_domain>.azurewebsites.net/ContosoAutomobileCollection.

  7. Click SAVE at the bottom of the screen.
  8. Register the app with Azure ACS. This must be done even if the app does not access SharePoint and will not use tokens from ACS, because the same process also registers the app with the App Management Service of the Office 365 subscription, which is a requirement. You perform the registration on the AppRegNew.aspx page of any SharePoint website in the Office 365 subscription. For details, see Guidelines for registering apps for SharePoint 2013. As part of this process you will obtain a new client ID and client secret. Insert these values in the web.config for the ClientId (not ida:ClientID) and ClientSecret keys.
    Caution
    If for any reason you press F5 after making this change, the Office Developer Tools for Visual Studio will overwrite one or both of these values. For that reason, you should keep a record of the values obtained with AppRegNew.aspx and always verify that the values in the web.config are correct just before you publish the ASP.NET application.

Publish the ASP.NET application to Azure and install the app to SharePoint

  1. There are several ways to publish an ASP.NET application to an Azure Web Site. For more information, see How to Deploy an Azure Web Site.
  2. In Visual Studio, right-click the SharePoint app project and select Package. On the Publish your app page that opens, click Package the app. File Explorer opens to the folder with the app package.
  3. Log in to Office 365 as a global administrator, and navigate to the organization app catalog site collection. (If there isn’t one, create it. See Use the App Catalog to make custom business apps available for your SharePoint Online environment.)
  4. Upload the app package to the app catalog.
  5. Navigate to the Site Contents page of any website in the subscription and click add an app.
  6. On the Your Apps page, scroll to the Apps you can add section and click the icon for your app.
  7. After the app has installed, click its icon on the Site Contents page to launch the app.

For more information about installing apps for SharePoint, see Deploying and installing apps for SharePoint: methods and options.

Deploying the app to production


When you have finished all testing you can deploy the app in production. This may require some changes.

  1. If the production domain of the ASP.NET application is different from the staging domain, you will have to change the AppRedirectUrl value in the web.config and the StartPage value in the AppManifest.xml file, and repackage the app for SharePoint. See the procedure Modify the code and markup in the application above.
  2. The change in domain also requires that you edit the app’s registration with AAD. See the procedure Modify the AAD registration and register the app with ACS above.
  3. The change in domain also requires that you re-register the app with ACS (and the subscription’s App Management Service) as described in the same procedure. (There is no way to edit an app’s registration with ACS.) However, it is not necessary to generate a new client ID or client secret on the AppRegNew.aspx page. You can copy the original values from the ClientId (not ida:ClientID) and ClientSecret keys of the web.config into the AppRegNew form. If you do generate new ones, be sure to copy the new values to the keys in web.config.

The Hardest thing I ever had to do……..

I am sure everybody who ever saw the Will Smith movie The Pursuit of Happyness will remember its heart-breaking scenes.

For Chris Gardner, the real-life inspiration Will Smith portrays, the scene where he and his son have nowhere to sleep except a bathroom was so hard that he couldn’t be on the movie set when it was shot.

Now I didn’t say this was going to be the hardest thing I ever wrote for nothing……  Remember that bathroom scene? Now replace Will Smith with me and my laptop – this is being written from within a McDonald’s bathroom stall, where I can get Wifi access.

What I am going to reveal here will maybe shock some, give satisfaction to those who think I did them harm, and will mostly be read with pity before you move on with your life – but maybe, just maybe, there will be one person who reads this and extends his/her hand of help and assistance.

A few months ago I moved back to my mother’s in order to take care of her – she became addicted, and still is, to codeine and benzodiazepines (painkillers and anxiety tablets).

I have a sister who is 10 years younger than me who I haven’t seen in more than a year – I don’t even know how or where to contact her, as she got a restraining order against my mother after she and her boyfriend were both attacked by my mother in a drug-induced psychosis.

My brother, 3 years younger than me, is in the process of applying for a restraining order against my mother after the same happened to him and he was attacked (he has been forced into a life where he has no contact with any of us anymore).

We grew up tough. I would say tougher than most…. My father mentally, physically and emotionally abused my mother, and emotionally and mentally abused us 3 children.

This went on for as far back as I can remember, so from at least the age of 10 I witnessed this abuse day in and day out, the same as my brother and, to a degree, my sister (thank God they are divorced, but even he has a restraining order against my mother).

I hope this starts to paint for you the picture of where I came from……..

When I moved back home, it was to try and help my mother get off the drugs and get her back on her feet.

In this process I loaned her 50K for a new car.

This week things came to a point where I felt no longer safe to live with my mother anymore, when she attacked me with a knife (my cell phone broke in defending myself, but the scars are here for anyone to see).

I now literally have a few items of clothing and my laptop, moving from wifi hotspot to hotspot as I am trying to keep my dream alive.

I have slept next to the train tracks, in storage facilities, in empty buildings, hospitals, toilets……..

BUT this is not a plea for pity, please don’t get that wrong!

I am a Senior C# & SharePoint Developer with 10 years experience and a BSC degree I completed with distinction.

My mother is, however, refusing to pay back the money she owes me (although her old car is standing at her flat and just needs to be sold – a red Fiat Panda).

She is also refusing me access to my cell phone and to my car – which makes it impossible for me to find a stable job.

Those of you wondering – Well, why not take what is yours? My mother had me locked up by the SAPS when I did it in the past and is threatening to do it again (she has the Sinoville police completely wrapped around her pinkie finger).

ALL I am asking of you is – Please go take a look at my web site : https://sharepointsamurai.wordpress.com/ – I am even giving away a free SharePoint Corporate Calendar roll-up App for those who are just willing to take a look at my CV and skills available to see on my web site.

ALL I am asking is – Please give me a chance. Please offer me a job. I will work for shelter and food – Development has always been my passion, not money.

I am thus not even asking for a salary, though I could easily earn 40K a month – ALL I am asking for is empathy and compassion, and a chance to get back on my feet again……

Please see my twitter feed as well – this is not some fairy tale story, this is real life and this is survival for me.

It was indeed the hardest thing I ever had to do. I have literally nothing to my name, but I still have my dream and passion – the ONLY thing I ask for is a chance, please……

(I can be contacted through my web site https://sharepointsamurai.wordpress.com/ or at tomas.floyd@outlook.com)

Thank you very much for taking the time to read this – Please head over to my web page and download a great App with all the source code and please take the time to see the talent and passion in me as you go through my code………

This is me at my most vulnerable. It isn’t easy for a man who prides himself on everything he does to ask for help like this, but I still have a dream and fire inside, and a hope that that one person is out there.

 

FREE Event Aggregated Corporate Calendar available for download – http://1drv.ms/1uN8omc

How To : Understand and Use the Search logic for Silverlight controls in Coded UI Test

Understanding the Search logic for Silverlight controls in Coded UI Test

 

One of the primary objectives during recording in Coded UI Test is to generate a robust search condition for a UI control to be uniquely identifiable during playback. In this post I’ll mention some of the search logic specific to the Silverlight UI Automation support within Coded UI Test introduced in the VS 2010 Feature Pack 2.

Search condition generation during Recording

 

For Silverlight controls, Coded UI Test relies primarily on the Automation properties of the control. The search properties are considered in the following descending order of priority:

AutomationId,

Name,

LabeledBy,

HelpText,

AccessKey,

AcceleratorKey

Specific controls support additional searchable properties. For instance, Button supports “DisplayText”, Image supports “Source”, a DataGrid Cell supports the “ColumnIndex” searchable property, and so on. The various search configurations mentioned here are applicable to Silverlight control search too (except for the SearchConfiguration.VisibleOnly configuration).
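
For example (a sketch, assuming the SilverlightControls classes from the Feature Pack; the "silverlightRoot" parent control is a hypothetical stand-in for your own UI-map object), a button can be located by its DisplayText:

    using Microsoft.VisualStudio.TestTools.UITesting.SilverlightControls;

    // "silverlightRoot" stands in for your own control that wraps the
    // Silverlight root visual element.
    SilverlightButton submitButton = new SilverlightButton(silverlightRoot);
    submitButton.SearchProperties.Add(SilverlightButton.PropertyNames.DisplayText, "Submit");
    submitButton.Find(); // Throws UITestControlNotFoundException if the search fails.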


For a Silverlight object hosted in IE, the search hierarchy will consist of an IE search part and Silverlight search part –

 

Top Level Window → {IE Search Hierarchy} → Silverlight Root Visual Element → Parent of Target Element → Target Element.

 

Additional hierarchy can be generated in between the Parent and Silverlight Root Visual element based on the specific control requirement. For example, certain controls such as Datagrid, Tree, TreeItem, Tab, List Item are, at almost all times, included in the search hierarchy if they are found in the ancestor hierarchy of the target element. As an example, the extended search hierarchy of a DataGrid Cell will show up as something like

 

TopLevelWindow → {IE Search Hierarchy} → Root Visual Element (Silverlight) → DataGridTable (Silverlight) → DataGridRow (Silverlight) → DataGridCell (Silverlight)

 

 

Search path during Playback

 

The overall search logic remains identical to what is followed in other UI technologies. It is a breadth first search wherein the top level window is first searched and used as a container for searching the next control in the search condition hierarchy. This is done recursively until the leaf control in the search hierarchy is found.

 

As an example, for a simple button inside a Silverlight page, the search hierarchy will be typically of the format –

 

TopLevelWindow → Document (IE BODY Tag) → Pane (IE DIV Tag) → Custom (IE OBJECT Tag) → Root Visual Element (Silverlight) → Button (Silverlight).

 

For the top-down search until the IE Object Tag, the existing search features and settings in Coded UI Test are applicable. Once the search switches to the Silverlight technology (i.e., the Root Visual element of the Silverlight page), there are a few limitations to the search.


 

What is missing currently in Silverlight control search?

 

  1. PlaybackSettings.ShouldSearchFailFast
  • This setting is not honored currently. However, there is some level of customization that can be done using the PlaybackSettings.SearchTimeout and the Playback.PlaybackSettings.WaitForReadyTimeout settings to tweak the timeout at which the search should abort. The latter may not seem obvious, and is hence explained in more detail in a section below.

 

  2. PlaybackSettings.MatchExactHierarchy

Silverlight control search does not currently honor MatchExactHierarchy = false. So the search condition specified for the entire Silverlight hierarchy needs to be accurate for the search to succeed. In the above example, that means the Root Visual Element and the Button control.

 

  3. Playback.PlaybackSettings.SmartMatchOptions
  • Control-level smart match is not currently supported (i.e., the SmartMatchOptions.Control flag is not honored for Silverlight controls).

Note that Regex match is not yet supported in Coded UI Test. So the only option available is to specify the PropertyExpressionOperator.Contains condition operator in the search properties.

For example, if the button’s name is of format “Submit<SomeDynamicId>”, the search property can be defined as –

uISubmitButton.SearchProperties.Add("Name", "Submit", PropertyExpressionOperator.Contains);

 

  4. There is no concept of FilterProperties as supported in Web Technology in Coded UI Test.

 

 

How to handle search failures because of slow page loading?

 

If the XAP takes a long time to download, the search during playback will fail because the Silverlight controls will not yet have been rendered in the visual tree. The internal search algorithm uses a wait-and-retry logic to search for a control while checking the visual tree rendering status at each wait interval (this interval increases exponentially on each iteration).

I will not be explaining the details here, but the important thing to note is that if there is no rendering happening within a polling interval, the search will return with failure status.

This polling interval is currently set to half of the Playback.PlaybackSettings.WaitForReadyTimeout which has a default value of 60 seconds (i.e. the default polling interval is 30 seconds). So to tackle slow page loading time, you can configure this Playback.PlaybackSettings.WaitForReadyTimeout to a desired value.
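
For example, a sketch of raising the timeout in a test's initialization method (values are in milliseconds; the 120-second figure is illustrative, not a recommendation):

    [TestInitialize]
    public void SetPlaybackTimeouts()
    {
        // Give slow XAP downloads a larger ready-state budget (default is 60000 ms).
        Playback.PlaybackSettings.WaitForReadyTimeout = 120000;
        Playback.PlaybackSettings.SearchTimeout = 120000;
    }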

Note: Playback.PlaybackSettings.WaitForReadyTimeout does affect the normal search failure time in scenarios where there is some visual rendering happening in the Silverlight page. So you would need to strike an appropriate balance based on the type of application you are testing.

How To : Add a Promoted Links Web Part to SharePoint 2013 App Default page

This article shows you how to add a Promoted Links web part to your app’s default page.

 

To do this, follow these steps:

  1. Open the shortcut menu for the project, and then choose Add, New Item.
  2. In the Templates pane, choose the List template, and then choose the Add button.
  3. Enter a list name and choose the Create a non-customizable list based on an existing list type option button; then, in its list, choose Promoted links, and choose the Finish button.

In Solution Explorer, under the list instance node, open the Elements.xml file.
Add the promoted links items as the following:
<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <ListInstance Title="MyPromotedLinks"
                OnQuickLaunch="TRUE"
                TemplateType="170"
                FeatureId="192efa95-e50c-475e-87ab-361cede5dd7f"
                Url="Lists/MyPromotedLinks"
                Description="My List Instance">
    <Data>
      <Rows>
        <Row>
          <Field Name="Title">Twitter</Field>
          <Field Name="BackgroundImageLocation">/PromotedLinksApp/Images/twitter.png</Field>
          <Field Name="Description">Muawiyah Shannak Twitter</Field>
          <Field Name="LinkLocation">https://twitter.com/MuShannak</Field>
          <Field Name="Order">1</Field>
        </Row>
        <Row>
          <Field Name="Title">Blogger</Field>
          <Field Name="BackgroundImageLocation">/PromotedLinksApp/Images/blogger.png</Field>
          <Field Name="Description">Muawiyah Shannak Blog</Field>
          <Field Name="LinkLocation">http://mushannak.blogspot.com</Field>
          <Field Name="Order">2</Field>
        </Row>
        <Row>
          <Field Name="Title">Linkedin</Field>
          <Field Name="BackgroundImageLocation">/PromotedLinksApp/Images/linkedin.png</Field>
          <Field Name="Description">Muawiyah Shannak Linkedin</Field>
          <Field Name="LinkLocation">http://ae.linkedin.com/in/shannak</Field>
          <Field Name="Order">3</Field>
        </Row>
      </Rows>
    </Data>
  </ListInstance>
</Elements>
In Solution Explorer, under the Pages node, open the Default.aspx file. Add the following tags inside the PlaceHolderMain placeholder:
<WebPartPages:WebPartZone ID="WebPartZone" runat="server" FrameType="None">
  <WebPartPages:XsltListViewWebPart ID="XsltListViewAppPromotedList"
    runat="server" ListUrl="Lists/MyPromotedLinks" IsIncluded="True"
    NoDefaultStyle="TRUE" Title="Images used in switcher"
    PageType="PAGE_NORMALVIEW" Default="False"
    ViewContentTypeId="0x">
  </WebPartPages:XsltListViewWebPart>
</WebPartPages:WebPartZone>

Deploy the solution and you will find a nice Promoted Links web part on the app default page!

How To : Manually add common consent to your Office 365 APIs Preview app


 

Learn how to manually add Microsoft Azure Active Directory common consent to your ASP.NET application so that it can access secured services.

Prerelease content
The features and APIs documented in this article are in preview and are subject to change. Do not use them in production.

In this article, you’ll learn how to build a web application hosted on an Azure website that uses the OneDrive for Business API to access secured folders and files.

You can easily set up access to OneDrive for Business by using the Office 365 API Preview Tools for Visual Studio 2013. If you’re not using the tools, you’ll need to manually set up your app in your development environment, register your app with Microsoft Azure Active Directory, write code to handle tokens, and write the code to work with the OneDrive for Business resources. All these steps are described in this article.

Note
This article covers OneDrive for Business apps, but the same steps apply to apps that access any other secured resource.

Before you manually add common consent to your app, make sure that you have the following:

  • An Office 365 account. If you don’t have one, you can sign up for an Office 365 developer site.
  • Visual Studio 2012 or Visual Studio 2013.
    Note
    The Office 365 API Preview Tools for Visual Studio 2013, which simplify development, are available for Visual Studio 2013 only.
  • A test account to use in your application.

We also recommend that you familiarize yourself with the Authorization Code Grant Flow. This will help you understand the authentication process that takes place in the background between your application, Azure AD, and the Office 365 resource so that you can better troubleshoot as you develop.

If you’ve already created an account within your Azure tenancy, you can use that account. Otherwise, you will have to create a new organizational user account to use in this sample.

To create an organizational user account

  1. Go to https://manage.windowsazure.com/.
  2. Choose the Active Directory icon on the left side in the Azure portal.
  3. Choose Add a user.
  4. Fill in the user name.
  5. Move to the next screen.
  6. Create a user profile. To do this:
    1. Enter a first and last name.
    2. Enter a display name.
    3. Set the Role to Global Administrator.
    4. After you set the role you will be asked for an alternate email address. You can enter the email address that you used to create the subscription, or a different one.
  7. Move to the next screen.
  8. Choose create.
  9. A temporary password is generated. You will use this to sign in later. You will have to change it at that time.
  10. Choose the check mark to finish creating the organizational user account.

The next step is to create the actual app that contains the UI and code needed to work with the OneDrive for Business REST APIs to list the folders and files in the user’s OneDrive.

To create the Visual Studio project

  1. Open Visual Studio 2013 and create a new ASP.NET Web Application project. Name the application Get_Stats. Choose OK.
  2. Choose the MVC template and choose the Change Authentication button. Select the Organization Accounts option. This will display additional options for authentication.
  3. Choose Cloud – Single Organization.
  4. Specify the domain of your Azure AD tenancy.
  5. Set the Access Level to Single Sign On, Read directory data.

    Under More Options, you will see the App ID URI is set automatically.

  6. Choose OK to continue. This brings up a dialog box to authenticate.
    Note
    If you receive an invalid domain name error, you might need to implement a workaround by substituting a real domain name, such as *.onmicrosoft.com, from another Azure subscription that you have. When you complete the Visual Studio new project dialog box, Visual Studio creates a temporary app registration entry on the domain that you specify. You can delete that entry later.

    As part of the workaround, you need to adjust the web.config settings and manually register the web app in the correct Azure AD domain.

  7. Enter the credentials of the user you created earlier.
  8. Choose OK to finish creating the new project. Visual Studio will automatically register the new web app in the Azure AD tenant you specified.
  9. Run the Visual Studio project, and sign on using the test account you created earlier. After the project is running, you can verify that single-sign on is working because the test account user name is displayed in the upper right corner of the web app.

To register the web application in Azure AD

  1. Log on with your Azure account.
  2. In the left navigation, choose Active Directory. Your directory will be listed.
  3. Choose your directory.
  4. In the top navigation, choose Applications.
  5. On the Active Directory tab, choose Applications.
  6. Add a new application in your Office 365 domain (created at Office 365 sign up) by choosing the “ADD” icon at the bottom of the portal screen. This will bring up a dialog box to tell Azure about your application.
  7. Choose Add an application my organization is developing.
  8. For the name of the application, enter Get Stats. For the Type, leave Web application and/or Web API. Then choose the arrow to move to step 2.
  9. For the Sign-On URL, enter the localhost URL from your Get_Stats Visual Studio project. To find the URL:
    1. Open your project in Visual Studio.
    2. In Solution Explorer, choose the Get_Stats project.
    3. From the Properties window, copy the SSL URL value.
  10. Enter an App ID URI. Because the ID must be unique, it’s a good idea to choose a name that is similar to the app name. For example, you can use your Sign-on URL with your app name, such as https://localhost:44044/Get_Stats.
  11. Choose the checkmark to finish adding the application. You will be notified that the application was added successfully.

To update the web.config file with the registration values

  1. Copy the APP ID URI to the clipboard.
  2. In your Get_Stats Visual Studio project, open the web.config file.
  3. Locate the ida:Realm key and paste the APP ID URI for the value.
  4. Locate the ida:AudienceUri key and paste the same APP ID URI for the value.
  5. Locate the audienceUris element and paste the same APP ID URI for the add element’s value.
  6. Locate the wsFederation element, and paste the same APP ID URI for the realm.
  7. In the Azure Portal, copy the federation metadata document URL to the clipboard.
  8. In the web.config file, locate the ida:FederationMetadataLocation key, and paste the URL for the value.
  9. In the Azure Portal, choose the View Endpoints icon at the bottom.
  10. Copy the WS-Federation Sign-On Endpoint to the clipboard.
  11. In the web.config file, locate the wsFederation element and paste the endpoint value for the issuer.
  12. Save your changes and run the project. You will be prompted to sign on. Sign on by using the test account you created earlier. You should see your account user name displayed in the upper right corner of the web app.

Get an application key


Next, you need to generate a key that you can use to identify your application for access tokens.

To get an application key for your app

  1. In the Azure Portal, select the Get_Stats application in the directory.
  2. Choose the Configure command and then locate the keys section.
  3. In the Select duration drop-down box, choose 1 year.
  4. Choose Save.

    The key value is displayed.

    Note
    This is the only time that the key is displayed.
  5. In Visual Studio, open the Get_Stats project, and open the web.config file.
  6. Locate the ida:Password key, and paste the key value you copied as its value. Now your project will always send the correct password when it is requested.
  7. Save all files.

Configure API permissions


You need to specify which web APIs your web app needs access to, and what level of access it needs. This determines what scopes and permissions are requested on the consent form for your web app that is displayed for users and admins.

To configure API permissions

  1. In the Azure Portal, select the Get Status application in the directory.
  2. From the top navigation, choose Configure. This displays all the configuration properties.
  3. At the bottom is a web APIs section. Notice that your web app has already been granted access to Azure AD.
  4. Choose Office365 SharePoint Online API.
  5. Choose Delegated Permissions and select Read items in all site collections.
    Note
    The options activate when you move over them.
  6. Choose Save to save these changes. Your web app will now request these permissions.

    You can also manage permissions by using a manifest. You can download your manifest file by choosing Manage Manifest.

Add the GraphHelper project to your solution


The easiest way to call graph APIs in Azure AD is to use the Graph API Helper Library. The following instructions show how to include the GraphHelper project into your Get_Stats solution.

To configure the Graph API Helper Library

  1. Download the Azure AD Graph API Helper Library.
  2. Copy the C# folder from the Graph API Helper Library to your project folder (i.e. \Projects\Get_Stats\C#.)
  3. Open the Get_Stats solution in Visual Studio.
  4. In Solution Explorer, choose the Get_Stats solution and choose Add, Existing Project.
  5. Go to the C# folder you copied, and open the WindowsAzure.AD.Graph.2013_04_05 folder.
  6. Select the Microsoft.WindowsAzure.ActiveDirectory.GraphHelper project and choose Open.
  7. If you are prompted with a security warning about adding the project, choose OK to indicate that you trust the project.
  8. Choose the Get_Stats project References folder and then choose Add Reference.
  9. In the Reference Manager dialog box, select Extensions and then select the Microsoft.Data.OData version 5.6.0.0 assembly and the Microsoft.Data.Services.Client version 5.6.0.0 assembly.
  10. In the same Reference Manager dialog box, expand the Solution menu on the left, and then select the checkbox for the Microsoft.WindowsAzure.ActiveDirectory.GraphHelper.
  11. Choose OK to add the references to your project.
  12. Add the following using directives to the top of HomeController.cs.
    using Microsoft.WindowsAzure.ActiveDirectory;
    using Microsoft.WindowsAzure.ActiveDirectory.GraphHelper;
    using System.Data.Services.Client;
    
    
  13. Save all files.

Add code to manage tokens and requests


Because your web app accesses multiple workloads, you need to write some code to obtain tokens. It’s best to place this code in some helper methods that can be called when needed. Note that the Office 365 API Preview tools handle all of this coding for you.

Your custom code handles the following scenarios:

  • Obtaining an authorization code
  • Using the authorization code to obtain an access token and a multiple resource refresh token
  • Using the multiple resource refresh token to obtain a new access token for a new workload

To create code to manage tokens and requests

  1. Open your Visual Studio project for Get_Stats.
  2. Open the HomeController.cs file.
  3. Create a new method named Stats by using the following code.
    public ActionResult Stats()
    {
        var authorizationEndpoint = "https://login.windows.net/"; // The oauth2 endpoint.
        var resource = "https://graph.windows.net"; // Request access to the AD graph resource.
        var redirectURI = ""; // The URL where the authorization code is sent on redirect.

        // Create a request for an authorization code.
        string authorizationUrl = string.Format("{0}{1}/oauth2/authorize?response_type=code&client_id={2}&resource={3}&redirect_uri={4}",
            authorizationEndpoint,
            ClaimsPrincipal.Current.FindFirst(TenantIdClaimType).Value,
            AppPrincipalId,
            resource,
            redirectURI);

        // Send the user's browser to the authorization endpoint.
        return Redirect(authorizationUrl);
    }
    
    
  4. The Stats method constructs a request for an authorization code and sends the request to the OAuth2 endpoint. If successful, the redirect returns to the specified CatchCode URL. Next, create a method to handle the redirect to CatchCode.
    public ActionResult CatchCode(string code)
    {}
    
    
  5. Acquire the access token by using the app credentials and the authorization code. Use your project’s correct port number in the following code.
    //  Replace the following port with the correct port number from your own project.
        var appRedirect = "https://localhost:44307/Home/CatchCode";
    
    //  Create an authentication context.
        AuthenticationContext ac = new AuthenticationContext(string.Format("https://login.windows.net/{0}",
        ClaimsPrincipal.Current.FindFirst(TenantIdClaimType).Value));
    
    //  Create a client credential based on the application ID and secret.
    ClientCredential clcred = new ClientCredential(AppPrincipalId, AppKey);
    
    //  Use the authorization code to acquire an access token.
        var arAD = ac.AcquireTokenByAuthorizationCode(code, new Uri(appRedirect), clcred);
    
    
  6. Next, use the access token to call the Graph API and get the list of users for the Office 365 tenant, by adding the following code.
    //  Convert token to the ADToken so you can use it in the graphhelper project.
    
        AADJWTToken token = new AADJWTToken();
        token.AccessToken = arAD.AccessToken; 
    
    //  Initialize a graphService instance by using the token acquired in the previous step.
    
        Microsoft.WindowsAzure.ActiveDirectory.DirectoryDataService graphService = new DirectoryDataService("09f9ea02-9be8-4597-86b9-32935a17723e", token);
        graphService.BaseUri = new Uri("https://graph.windows.net/09f9ea02-9be8-4597-86b9-32935a17723e");
    
    //  Get the list of all users.
    
        var users = graphService.users;
        QueryOperationResponse<Microsoft.WindowsAzure.ActiveDirectory.User> response;
        response = users.Execute() as QueryOperationResponse<Microsoft.WindowsAzure.ActiveDirectory.User>;
        List<Microsoft.WindowsAzure.ActiveDirectory.User> userList = response.ToList();
        ViewBag.userList = userList; 
    
    
  7. Now you need to call Microsoft OneDrive for Business, and this requires a new access token. Verify that the current token is a multiple resource refresh token, and then use it to obtain a new token, as in the following code.
    //  You need a new access token for the new workload. Check to determine whether you have the MRRT.

        // Declare arSP outside the if block so it is visible to the next step.
        AuthenticationResult arSP = null;
        if (arAD.IsMultipleResourceRefreshToken)
        {
            // This is an MRRT so use it to request an access token for SharePoint.
            arSP = ac.AcquireTokenByRefreshToken(arAD.RefreshToken, AppPrincipalId, clcred, "https://imgeeky.spo.com/");
        }
    
    
  8. Finally, call Microsoft OneDrive for Business to get a list of files in the Shared with Everyone folder, by adding the following code. Replace the placeholders in the URL string with the correct values for your domain and user name.
    //  Now make a call to get a list of all files in a folder. 
    //  Replace placeholders in the following string with correct values for your domain and user name. 
        var skyGetAllFilesCommand = "https://YourO365Domain-my.spo.com/personal/YourUserName_YourO365domain_spo_com/_api/web/GetFolderByServerRelativeUrl('/personal/YourUserName_YourO365domain_spo_com/Documents/Shared%20with%20Everyone')/Files";

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(skyGetAllFilesCommand);
        request.Method = "GET";
        // Pass the SharePoint access token acquired in the previous step.
        request.Headers.Add("Authorization", "Bearer " + arSP.AccessToken);

        using (WebResponse wr = request.GetResponse())
        using (var reader = new System.IO.StreamReader(wr.GetResponseStream()))
        {
            // The CatchCode view renders this raw XML response.
            ViewBag.skyResponse = reader.ReadToEnd();
        }

        return View(); 
    
    
  9. Create a view for the CatchCode method. In Solution Explorer, expand the Views folder and choose Home, and then choose Add View.
  10. Enter CatchCode as the name of the new view, and choose Add.
  11. Paste the following HTML to render the users and Microsoft OneDrive for Business response from the CatchCode method.
    @{
        ViewBag.Title = "CatchCode";
    }
    <h2>Users</h2>
    <ul id="users">
    
        @foreach (var user in ViewBag.userList)
        {
            <li>@user.displayName</li>
        }
    </ul>
    <h2>OneDrive for Business Response</h2>
    <p>@ViewBag.skyResponse</p>
    
    
  12. Build and run the solution. Verify that you get a list of users, and that you get an XML response from the OneDrive for Business method call. To change the XML response, add files to the OneDrive for Business Share with Everyone folder.

In Depth : Top 5 DevOps Best Practices for Achieving Security, Scalability and Performance

 

 


As companies and consultants like me continue to look into how best to invest DevOps-related time and money, the focus now shifts to the best practices that industry experts suggest will help create scalable, secure and high-performance deployments.

Here are five tips and best practices, drawn from the hands-on experience of the industry’s foremost experts, that I use on every DevOps consulting project:

 

1. Be vigilant of overall security risk

Reuven Harrison, CTO and Co-Founder of Tufin emphasizes the growing complexities of networks. He says that increased adoption of virtualization, cloud, BYOD and emerging technologies like software defined networks (SDNs) means that networks are becoming more complex and heterogeneous, and so do the security risks.

“As SDN and network virtualization continue to mature, the only way to manage these networks with any degree of efficiency and security is to automate key management functions,” he says. “That is the premise of DevOps.

 

But DevOps must include security as a key component because without it, the volume and pace of network change that technologies such as SDN and virtualization introduce will skyrocket the level of IT risk in the environment.”


The big challenge is that to date, security has been considered an afterthought, and security organizations are considered to be business inhibitors, telling organizations what can’t be done, instead of how to do things securely. 

It is a cultural issue that requires security, developers, and operations teams to foster a level of trust and collaboration that does not yet exist. The only way to do this is incrementally, and with vigilance.

 

2. Watch changes in security risk

Torsten Volk, VP Product Management, Cloud for ASG Software Solutions says that it is important to think of DevOps as a collaborative mindset and process that leads developers and IT operations to a faster and more efficient way of deploying, operating and upgrading applications.

“Each new release comes with the same set of security considerations as it did the time before DevOps,” he says. “However, when new releases are delivered at a much higher cadence, security has to also be an ongoing point of focus.”


DevOps tools help in this regard by proactively ensuring consistent configuration of infrastructure and software components. Even more, these tools can be used to automatically remediate security concerns by constantly validating the proper application of security best practices and taking automatic countermeasures.

While this latter scenario might sound advanced, it is the endpoint that every DevOps team should aspire to reach.

 

3. Pay attention to scalability

According to Aaron M. Lee, Managing Principal of DevOps at Pythian, there are two kinds of scalability that DevOps engineers tend to address: application and organization.

“An app’s scalability is really a question of how long it takes and how much it costs to build and operate a system that successfully delivers a certain level of concurrency; one that matches or exceeds user demand over some time period,” said Lee.

“Estimating answers to these questions is a critical success factor for many companies, and the ability to do so often goes unrecognized until it’s too late.”

Lee says that scalability is everyone’s problem. Business and technology folks have to agree on the right balance of functionality, time to market, cost, and risk tolerance.

You need the right measurable objectives, including how many users, and how many concurrent requests over those endpoints for a demand pattern.

 

4. Strive for ease of use 

DevOps is about automation and repeatability. Dr. Andy Piper, CTO of London-based Push Technology, says this requires configurable virtual environments, and lots of them. “To scale, you need to automate,” he says.


“So, make sure you are using tools such as Puppet and Chef to automate the building and configuration of VMs. Similarly, make sure you have the horsepower to back this up either in-house, which is more tricky to dynamically scale, or in the cloud if your product is amenable to that.”

At the end of the day, making a product easy to install, configure and run will make the whole DevOps process much easier.

5. Manage your gateways

Susan Sparks, Director, Program Management for InfoZen’s Cloud Practice says that while the new goal is to build the best culture between development and operations teams, it is still good to keep some gates between the functions to ensure the production environments remain stable.

“Our teams are structured such that we have operations personnel included in development discussions and daily scrums so the operations teams understand what will be changing in the various future releases,” she says. “The operations team maintains responsibility for the stability of the production operation. We found that this approach has worked well for us.

We recommend using automation in both testing and operations. Our integration testing has allowed us to find issues prior to them reaching production, and our operations automation allows for cost efficiencies and better quality operations.


With automation, fewer people touch the production environment, which significantly reduces human errors. This also helps with security posture, as fewer people need to touch the production environments.”

DevOps isn’t hard. What is hard is tackling the challenges that arise when an organization is not taking a DevOps approach to integration, development and deployment, and I think it’s very difficult to argue this point with me, especially in the SA ICT industry.

By adopting a DevOps approach, and heeding these five tips, a successful DevOps environment is just an implementation or two away.

 

 

How To : From the Trenches – Use SharePoint to Implement an ALM in Your Organisation

After my successful creation and implementation of an ALM for Business Connexion using the SharePoint platform, I thought I’d share the lessons I learned and show you, step by step, how you can implement your own ALM by leveraging the power of the SharePoint platform.


In this article, you will:

  • Get an overview of SharePoint Application Lifecycle Management (ALM).
  • Learn how to plan and manage ALM in Microsoft SharePoint 2010 projects by using Microsoft Visual Studio 2010 and Microsoft SharePoint Designer 2010.
  • Learn what to consider when setting up team development environments.
  • Learn how to establish upgrade management processes.
  • Learn how to create a standard SharePoint development model.
  • Learn how to extend your SharePoint ALM to include other departments, such as Java, Mobile, .NET and even SAP development.

Introduction to Application Lifecycle Management in SharePoint 2010

The Microsoft SharePoint 2010 development platform, which includes Microsoft SharePoint Foundation 2010 and Microsoft SharePoint Server 2010, contains many capabilities to help you develop, deploy, and update customizations and custom functionalities for your SharePoint sites. The activities that take advantage of these capabilities all fall under the category of Application Lifecycle Management (ALM).

Key considerations when establishing ALM processes include not only the development and testing practices that you use before the initial deployment of a single customization, but also the processes that you must implement to manage updates and integrate customizations and custom functionality on an existing farm.

This article discusses the capabilities and tools that you can use when implementing an ALM process on a SharePoint farm, and also specific concerns and things to consider when you create and hone your ALM process for SharePoint development.

This article assumes that each development team will develop a unique ALM process that fits its specific size and needs, so its guidance is necessarily broad. However, it also assumes that regardless of the size of your team and the specific nature of your custom solutions, you will need to address similar sets of concerns and use capabilities and tools that are common to all SharePoint developers.

The guidance in this article will help you as you create a development model that exploits all the advantages of the SharePoint 2010 platform and addresses the needs of your organization.

SharePoint Application Lifecycle Management: An Overview

Although the specific details of your SharePoint 2010 ALM process will differ according to the requirements of your organization, most development teams will follow the same general set of steps. Figure 1 depicts an example ALM process for a midsize or large SharePoint 2010 deployment. Obviously, the process and required tasks depend on the project size.

Figure 1. Example ALM process

The following are the specific steps in the process illustrated in Figure 1 (see corresponding callouts 1 through 10):

  1. Someone (for example, a project manager or lead developer) collects initial requirements and turns them into tasks.
  2. Developers use Microsoft Visual Studio Team Foundation Server 2010 or other tools to track the development progress and store custom source code.
  3. Because source code is stored in a centralized location, you can create automated builds for integration and unit testing purposes. You can also automate testing activities to increase the overall quality of the customizations.
  4. After custom solutions have successfully gone through acceptance testing, your development team can continue to the pre-production or quality assurance environment.
  5. The pre-production environment should resemble the production environment as much as possible. This often means that the pre-production environment has the same patch level and configurations as the production environment. The purpose of this environment is to ensure that your custom solutions will work in production.
  6. Occasionally, copy the production database to the pre-production environment, so that you can imitate the upgrade actions that will be performed in the production environment.
  7. After the customizations are verified in the pre-production environment, they are deployed either directly to production or to a production staging environment and then to production.
  8. After the customizations are deployed to production, they run against the production database.
  9. End users work in the production environment, and give feedback and ideas concerning the different functionalities. Issues and bugs are reported and tracked through established reporting and tracking processes.
  10. Feedback, bugs, and other issues in the production environment are turned into requirements, which are prioritized and turned into developer tasks. Figure 2 shows how multiple developer teams can work with and process bug reports and change requests that are received from end users of the production environment. The model in Figure 2 also shows how development teams might coordinate their solution packages. For example, the framework team and the functionality development team might follow separate versioning models that must be coordinated as they track bugs and changes.
    Figure 2. Change management involving multiple developer teams

Integrating Testing and Build Verification Environments into a SharePoint 2010 ALM Process

In larger projects, quality assurance (QA) personnel might use an additional build verification or user acceptance testing (UAT) farm to test and verify the builds in an environment that more closely resembles the production environment.

Typically, a build verification farm has multiple servers to ensure that custom solutions are deployed correctly. Figure 3 shows a potential model for relating development integration and testing environments, build verification farms, and production environments. In this particular model, the pre-production or QA farm and the production farm switch places after each release. This model minimizes any downtime that is related to maintaining the environments.

Figure 3. Model for relating development integration and testing environments

Integrating SharePoint Designer 2010 into a SharePoint 2010 ALM Process

Another significant consideration in your ALM model is Microsoft SharePoint Designer 2010. SharePoint 2010 is an excellent platform for no-code solutions, which can be created and then deployed directly to the production environment by using SharePoint Designer 2010. These customizations are stored in the content database and are not stored in your source code repository.

General designer activities and how they interact with development activities are another consideration. Will you be creating page layouts directly within your production environment, or will you deploy them as part of your packaged solutions? There are advantages and disadvantages to both options.

Your specific ALM model depends completely on the custom solutions and the customizations that you plan to make, and on your own policies. Your ALM process does not have to be as complex as the one described in this section. However, you must establish a firm ALM model early in the process as you plan and create your development environment and before you start creating your custom solutions.

Next, we discuss specific tools and capabilities that are related to SharePoint 2010 development that you can use when considering how to create a model for SharePoint ALM that will work best for your development team.

Solution Packages and SharePoint Development Tools

One major advantage of the SharePoint 2010 development platform is that it provides the ability to save sites as solution packages. A solution package is a deployable, reusable package stored in a CAB file with a .wsp extension. You can create a solution package by using the SharePoint 2010 user interface (UI) in the browser, by using SharePoint Designer 2010, or by using Microsoft Visual Studio 2010. In the browser and SharePoint Designer 2010 UIs, solution packages are also called templates. This flexibility enables you to create and design site structures in a browser or in SharePoint Designer 2010, and then import these customizations into Visual Studio 2010 for further development. Figure 4 shows this process.

Figure 4. Flow through the SharePoint development tools

When the customizations are completed, you can deploy your solution package to SharePoint for use. After modifying the existing site structure by using a browser, you can start the cycle again by saving the updated site as a solution package.

This interaction among the tools also enables you to use other tools. For example, you can design a workflow process in Microsoft Visio 2010 and then import it to SharePoint Designer 2010 and from there to Visual Studio 2010. For instructions on how to design and import a workflow process, see Create, Import, and Export SharePoint Workflows in Visio.

For more information about creating solution packages in SharePoint Designer 2010, see Save a SharePoint Site as a Template. For more information about creating solution packages in Visual Studio 2010, see Creating SharePoint Solution Packages.

Using SharePoint Designer 2010 as a Development Tool

SharePoint Designer 2010 differs from Microsoft Office SharePoint Designer 2007 in that its orientation has shifted from the page to features and functionality. The improved UI provides greater flexibility for creating and designing different functionalities. It provides rich tooling for building complete, reusable, and process-centric applications. For more information about the new capabilities and features of SharePoint Designer 2010, see Getting Started with SharePoint Designer.

You can also use SharePoint Designer 2010 to modify modular components developed with Visual Studio 2010. For example, you can create Web Parts and other controls in Visual Studio 2010, deploy them to a SharePoint farm, and then edit them in SharePoint Designer 2010.

The primary target users for SharePoint Designer 2010 are IT personnel and information workers who can use this application to create customizations in a production environment. For this reason, you must decide on an ALM model for your particular environment that defines which kinds of customizations will follow the complete ALM development process and which customizations can be done by using SharePoint Designer 2010. Developers are secondary target users. They can use SharePoint Designer 2010 as a part of their development activities, especially during initial creation of customization packages and also for rapid development and prototyping. Your ALM process must also define where and how to fit SharePoint Designer 2010 into the broader development model.

A key challenge of using SharePoint Designer 2010 is that when you use it to modify files, all of your changes are stored in the content database instead of in the file system. For example, if you customize a master page for a specific site by using SharePoint Designer 2010 and then design and deploy new branding elements inside a solution package, the changes are not available for the site that has the customized master page, because that site is using the version of the master page that is stored in the content database.

To minimize challenges such as these, SharePoint Designer 2010 contains new features that enable you to control usage of SharePoint Designer 2010 in a specific environment. You can apply these control settings at the web application level or site collection level. If you disable some action at the web application level, that setting cannot be changed at the site collection level.

SharePoint Designer 2010 makes the following settings available:

  • Allow site to be opened in SharePoint Designer 2010.
  • Allow customization of files.
  • Allow customization of master pages and layout pages.
  • Allow site collection administrators to see the site URL structure.

Because the primary purpose of SharePoint Designer 2010 is to customize content on an existing site, it does not support source code control. By default, pages that you customize by using SharePoint Designer 2010 are stored inside a versioned SharePoint library. This provides you with simple support for versioning, but not for full-featured source code control.

Importing Solution Packages into Visual Studio 2010

When you save a site as a solution package in the browser (from the Save as Template page in Site Settings), SharePoint 2010 stores the site as a solution package (.wsp) file and places it in the Solution Gallery of that site collection. You can then download the solution package from the Solution Gallery and import it into Visual Studio 2010 by using the Import SharePoint Solution Package template, as shown in Figure 5.

Figure 5. Import SharePoint Solution Package template

SharePoint 2010 solution packages contain many improvements that take advantage of new capabilities that are available in its feature framework. The following list contains some of the new feature elements that can help you manage your development projects and upgrades.

  • SourceVersion for WebFeature and SiteFeature
  • WebTemplate feature element
  • PropertyBag feature element
  • $ListId:Lists
  • WorkflowAssociation feature element
  • CustomSchema attribute on ListInstance
  • Solution dependencies

After you import your project, you can start customizing it any way you like.

Note: Because this capability is based on the WebTemplate feature element, which is based on a corresponding site definition, the resulting solution package will contain definitions for everything within the site. For more information about creating and using web templates, see Web Templates.

Visual Studio 2010 supports source code control (as shown in Figure 6), so that you can store the source code for your customizations in a safer and more secure central location, and enable easy sharing of customizations among developers.

Figure 6. Visual Studio 2010 source code control

The specific way in which your developers access this source code and interact with each other depends on the structure of your team development environment. The next section of this article discusses the key concerns to keep in mind when you build a team development environment for SharePoint 2010.

Team Development Environment for SharePoint 2010: An Overview

As in any ALM planning process, your SharePoint 2010 planning should include the following steps:

  1. Identify and create a process for initiating projects.
  2. Identify and implement a versioning system for your source code and other deployed resources.
  3. Plan and implement version control policies.
  4. Identify and create a process for work item and defect tracking and reporting.
  5. Write documentation for your requirements and plans.
  6. Identify and create a process for automated builds and continuous integration.
  7. Standardize your development model for repeatability.

Microsoft Visual Studio Team Foundation Server 2010 (shown in Figure 7) provides a good potential platform for many of these elements of your ALM model.

Figure 7. Visual Studio 2010 Team Foundation Server

When you have established your model for team development, you must choose either a collection of tools or Microsoft Visual Studio Team Foundation Server 2010 to manage your development. Microsoft Visual Studio Team Foundation Server 2010 provides direct integration into Visual Studio 2010, and it can be used to manage your development process efficiently. It provides many capabilities, but how you use it will depend on your projects.

You can use the Microsoft Visual Studio Team Foundation Server 2010 for the following activities:

  • Tracking work items and reporting the progress of your development. Microsoft Visual Studio Team Foundation Server 2010 provides tools to create and modify work items that are delivered not only from Visual Studio 2010, but also from the Visual Studio 2010 web client.
  • Storing all source code for your custom solutions.
  • Logging bugs and defects.
  • Creating, executing, and managing your testing with comprehensive testing capabilities.
  • Enabling continuous integration of your code by using the automated build capabilities.

Microsoft Visual Studio Team Foundation Server 2010 also provides a basic installation option that installs all required functionalities for source control and automated builds. These are typically the most used capabilities of Microsoft Visual Studio Team Foundation Server 2010, and this option helps you set up your development environment more easily.

Setting Up a Team Development Environment for SharePoint 2010

SharePoint 2010 must be installed on a development computer to take full advantage of its development capabilities. If you are developing only remote applications, such as solutions that use SharePoint web services, the client object model, or REST, you could potentially develop solutions on a computer where SharePoint 2010 is not installed. However, even in this case, your developers’ productivity would suffer, because they would not be able to take advantage of the full debugging experience that comes with having SharePoint 2010 installed directly on the development computer.

The design of your development environment depends on the size and needs of your development team. Your choice of operating system also has a significant impact on the overall design of your team development process. You have three main options for creating your development environments, as follows:

  1. You can run SharePoint 2010 directly on your computer’s client operating system. This option is available only when you use the 64-bit version of Windows 7, Windows Vista Service Pack 1, or Windows Vista Service Pack 2.
  2. You can use the boot to Virtual Hard Drive (VHD) option, which means that you start your laptop by using the operating system in VHD. This option is available only when you use Windows 7 as your primary operating system.
  3. You can use virtualization capabilities. If you choose this option, you have a choice of many options. But from an operational viewpoint, the option that is most likely the easiest to implement is a centralized virtualized environment that hosts each developer’s individual development environment.

The following sections take a closer look at these three options.

SharePoint 2010 on a Client Operating System

If you are using the 64-bit version of Windows 7, Windows Vista Service Pack 1, or Windows Vista Service Pack 2, you can install SharePoint Foundation 2010 or SharePoint Server 2010. For more information about installing SharePoint 2010 on supported operating systems, see Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008.

Figure 8 shows how a computer that is running a client operating system would operate within a team development environment.

Figure 8. Computer running a client operating system in a team development environment

A benefit of this approach is that you can take full advantage of any of your existing hardware that is running one of the targeted client operating systems. You can also take advantage of pre-existing configurations, domains, and enterprise resources that your enterprise supports. This could mean that you would require little or no additional IT support. Your developers would also face no delays (such as booting up a virtual machine or connecting to an environment remotely) in accessing their development environments.

However, if you take this approach, you must ensure that your developers have access to sufficient hardware resources. In any development environment, you should use a computer that has an x64-capable CPU, and at least 2 gigabytes (GB) of RAM to install and run SharePoint Foundation 2010; 4 GB of RAM is preferable for good performance. You should use a computer that has 6 GB to 8 GB of RAM to install and run SharePoint Server 2010.

A disadvantage of this approach is that your environments will not be centrally managed, and it will be difficult to keep all of your project-dependent environmental requirements in sync. It might also be advisable to write batch files that start and stop some of the SharePoint-related services so that when your developers are not working with SharePoint 2010, these services will not consume resources and degrade the performance of their computers.

The lack of centralized maintenance could hurt developer productivity in other ways. For example, this might be an unwieldy approach if your team is working on a large Microsoft SharePoint Online project that is developing custom solutions for multiple services (for example, the equivalents of http://intranet, http://mysite, http://teams, http://secure, http://search, http://partners, and http://www.internet.com) and deploying these solutions in multiple countries or regions.

If you are developing on a computer that is running a client operating system in a corporate domain, each development computer would have its own name (and each local domain name would be different, such as http://dev1 or http://dev2). If each developer is implementing custom functionalities for multiple services, you must use different port numbers to differentiate each service (for example, http://dev1 for http://intranet and http://dev1:81 for http://mysite). If all of your developers are using the same Visual Studio 2010 projects, the project debugging URL must be changed manually whenever a developer takes the latest version of a project from your source code repository.

This would create a manual step that could hurt developer productivity, and it would also diminish the efficiency of any scripts that you have written for setting up development environments, because the individual environments are not standardized. Some form of centralization with virtualization is preferable for large enterprise development projects.

SharePoint 2010 on Windows 7 and Booting to Virtual Hard Drive

If you are using Windows 7, you can also create a VHD out of an existing Windows Server 2008 image on which SharePoint 2010 is installed in Windows Hyper-V, and then configure Windows 7 with BCDEdit.exe so that it boots directly to the operating system on the VHD. To learn more about this kind of configuration, see Deploy Windows on a Virtual Hard Disk with Native Boot and Boot from VHD in Win 7.

Figure 9 shows how a computer that is running Windows 7 and booting to VHD would operate within a team development environment.

Figure 9. Windows 7 and booting to VHD in a team environment

An advantage of this approach is the flexibility of having multiple dedicated environments for an individual project, enabling you to isolate each development environment. Your developers will not accidentally cross-reference any artifacts within their projects, and they can create project-dependent environments.

However, this option has considerable hardware requirements, because you are using the available hardware and resources directly on your computers.

SharePoint 2010 in Centralized Virtualized Environments

In a centralized virtualized environment, you host your development environments in one centralized location, and developers access these environments through remote connections. This means that you use Windows Hyper-V in the centralized location and copy a VHD for every developer as needed. Each VHD is configured to be available from the corporate network, so that when it starts, it can be accessed by using remote connections.

Figure 10 shows how a centralized virtualized team development environment would operate.

Figure 10. Centralized virtualized team development environment

An advantage of this approach is that the hardware requirements for individual developer computers are relatively few because the actual work happens in a centralized environment. Developers could even use computers with 1 GB of RAM as their clients and then connect remotely to the centralized location. You can also manage environments easily from one centralized location, making adjustments to them whenever necessary.

Your centralized host will have significantly high hardware requirements, but developers can easily start and stop these environments. This enables you to use the hardware that you have allocated for your development environments more efficiently. Additionally, this approach provides a ready platform for more extensive testing environments for your custom code (such as multi-server farms).

After you set up your team development environment, you can start taking advantage of the deployment and upgrade capabilities that are included with the new solution packaging model in SharePoint 2010. The following sections describe how to take advantage of these new capabilities in your ALM model.

Models for Solution Lifecycle Management in SharePoint 2010

The SharePoint 2010 solution packaging model provides many useful features that will help you plan for deploying custom solutions and managing the upgrade process. You can implement assembly versioning by applying binding redirects in your web application configuration file. You can also apply versioning to your feature upgrades, and feature upgrade actions enable you to manage changes that will be necessary on your existing sites to accommodate feature upgrades. These upgrade actions can be handled declaratively or programmatically.

The feature upgrade query object model enables you to create queries in your code that look for features on your existing sites that can be upgraded. You can use this object model to obtain relevant information about all of the features and feature versions that are deployed on your SharePoint 2010 sites. In your solution manifest file, you can also configure the type of Internet Information Services (IIS) recycling to perform during a solution upgrade.

The following sections go into greater details about these capabilities and how you can use them.

Using Assembly BindingRedirect with SharePoint 2010 Assemblies

The BindingRedirect feature element can be added to your web applications configuration file. It enables you to redirect from earlier versions of installed assemblies to newer versions. Figure 11 shows how the XML configuration from the solution manifest file instructs SharePoint to add binding redirection rules to the web application configuration file. These rules forward any reference to version 1.0 of the assembly to version 2.0. This is required in your solution manifest file if you are upgrading a custom solution that uses assembly versioning and if there are existing instances of the solution and the assembly on your sites.

Figure 11. Binding redirection rules in a solution manifest file

It is a best practice to use assembly versioning, because it gives you an easy way to track the versions of a solution that are deployed to your production environments.
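
As a minimal sketch of what this looks like in practice (the version numbers are illustrative), the AssemblyInfo.cs of the new release simply bumps the version that the binding redirection rules then point to:

using System.Reflection;

// Incremented from 1.0.0.0 for the new release; the binding redirection rules
// in the solution manifest forward references from the old version to this one.
[assembly: AssemblyVersion("2.0.0.0")]
[assembly: AssemblyFileVersion("2.0.0.0")]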

SharePoint 2010 Feature Versioning

The support for feature versioning in SharePoint 2010 provides many capabilities that you can use when you are upgrading features. For example, you can use the SPFeature.Version property to determine which versions of a feature are deployed on your farm, and therefore which features must be upgraded. For a code sample that demonstrates how to do this, see Version.

Feature versioning in SharePoint 2010 also enables you to define a value for the SPFeatureDependency.MinimumVersion property to handle feature dependencies. For example, you can use the MinimumVersion property to ensure that a particular version of a dependent feature is activated. Feature dependencies can be added or removed in each new version of a feature.

The SharePoint 2010 feature framework has also enhanced the object model level to support feature versioning more easily. You can use the QueryFeatures method to retrieve a list of features, and you can specify both the feature version and whether a feature requires an upgrade. The QueryFeatures method returns an instance of SPFeatureQueryResultCollection, which you can use to access all of the features that must be updated. This method is available from multiple scopes, because it is exposed, with several overloads, by the SPWebService, SPWebApplication, SPContentDatabase, and SPSite classes. For an overview of the feature upgrade object model, see Feature Upgrade Object Model.
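
As a rough sketch of this object model in action (the web application URL is illustrative, and this is only one way to structure such a sweep), the following code finds all web-scoped feature instances in a web application that need an upgrade, and upgrades them one by one:

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

class FeatureUpgradeSweep
{
    static void Main()
    {
        SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://intranet"));

        // Passing true asks only for feature instances that need an upgrade.
        SPFeatureQueryResultCollection featuresToUpgrade =
            webApp.QueryFeatures(SPFeatureScope.Web, true);

        foreach (SPFeature feature in featuresToUpgrade)
        {
            // Runs the upgrade actions defined in the feature's Feature.xml,
            // including any FeatureUpgrading receiver code. Passing false
            // means already-current instances are not forcibly upgraded.
            feature.Upgrade(false);
        }
    }
}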

The following section summarizes many of the new upgrade actions that you can apply when you are upgrading from one version of a feature to another.

SharePoint 2010 Feature Upgrade Actions

Upgrade actions are defined in the Feature.xml file. The SPFeatureReceiver class contains a FeatureUpgrading method, which you can use to define actions to perform during an upgrade. This method is called during feature upgrade when the feature’s Feature.xml file contains one or more <CustomUpgradeAction> tags, as shown in the following example.

<UpgradeActions>
  <CustomUpgradeAction Name="text">
    ...
  </CustomUpgradeAction>
</UpgradeActions>

Each custom upgrade action has a name, which can be used to differentiate the code that must be executed in the feature receiver. As shown in the following example, you can parameterize custom action instances.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion ="2.0.0.0">
      <!-- First action-->
      <CustomUpgradeAction Name="example">
        <Parameters>
          <Parameter Name="parameter1">Whatever</Parameter>
          <Parameter Name="anotherparameter">Something meaningful</Parameter>
          <Parameter Name="thirdparameter">additional configurations</Parameter>
        </Parameters>
      </CustomUpgradeAction>
      <!-- Second action-->
      <CustomUpgradeAction Name="SecondAction">
        <Parameters>
          <Parameter Name="SomeParameter1">Value</Parameter>
          <Parameter Name="SomeParameter2">Value2</Parameter>
          <Parameter Name="SomeParameter3">Value3</Parameter>
        </Parameters>
      </CustomUpgradeAction>
    </VersionRange>
  </UpgradeActions>
</Feature>

This example contains two CustomUpgradeAction elements, one named example and the other named SecondAction. Both elements have different parameters, which are dependent on the code that you wrote for the FeatureUpgrading event receiver. The following example shows how you can use these upgrade actions and their parameters in your code.

/// <summary>
/// Called when a feature instance is upgraded, once for each of the custom
/// upgrade actions in the Feature.xml file.
/// </summary>
/// <param name="properties">Feature receiver properties</param>
/// <param name="upgradeActionName">Upgrade action name</param>
/// <param name="parameters">Custom upgrade action parameters</param>
public override void FeatureUpgrading(SPFeatureReceiverProperties properties,
                                      string upgradeActionName,
                                      System.Collections.Generic.IDictionary<string, string> parameters)
{

    // Do not do anything if the feature scope is not correct.
    if (!(properties.Feature.Parent is SPWeb))
    {

        // Log that the feature scope is incorrect.
        return;
    }

    switch (upgradeActionName)
    {
        case "example":
            FeatureUpgradeManager.UpgradeAction1(parameters["parameter1"], parameters["anotherparameter"],
                                                 parameters["thirdparameter"]);
            break;
        case "SecondAction":
            FeatureUpgradeManager.UpgradeAction2(parameters["SomeParameter1"], parameters["SomeParameter2"],
                                                 parameters["SomeParameter3"]);
            break;
        default:

            // Log that no code exists for this action.
            break;
    }
}

You can have as many upgrade actions as you want, and you can apply them to version ranges. The following example shows how you can apply upgrade actions to version ranges of a feature.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange BeginVersion="1.0.0.0" EndVersion ="2.0.0.0">
      ...
    </VersionRange>
    <VersionRange BeginVersion="2.0.0.1" EndVersion="3.0.0.0">
      ...
    </VersionRange>
    <VersionRange BeginVersion="3.0.0.1" EndVersion="4.0.0.0">
      ...
    </VersionRange>
  </UpgradeActions>
</Feature>

The AddContentTypeField upgrade action can be used to define additional fields for an existing content type. It also provides the option of pushing these changes down to child instances, which is often the preferred behavior. When you initially deploy a content type to a site collection, a definition for it is created at the site collection level. If that content type is used in any subsite or list, a child instance of the content type is created. To ensure that every instance of the specific content type is updated, you must set the PushDown attribute to True, as shown in the following example.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion ="2.0.0.0">
      <AddContentTypeField ContentTypeId="0x0101002b0e208ace0a4b7e83e706b19f32cab9"
                           FieldId="{ccbcd479-94c9-4f3a-95c4-58897da434fe}"
                           PushDown="True"/>
    </VersionRange>
  </UpgradeActions>
</Feature>

For more information about working with content types programmatically, see Introduction to Content Types.
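
For comparison, a rough object-model equivalent of the declarative example above might look like the following sketch (the site URL and the City site column are illustrative; the content type ID matches the example above):

using System;
using Microsoft.SharePoint;

class AddFieldToContentType
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://intranet"))
        {
            SPWeb rootWeb = site.RootWeb; // disposed together with the SPSite

            SPField cityField = rootWeb.Fields["City"];
            SPContentType contentType = rootWeb.ContentTypes[
                new SPContentTypeId("0x0101002b0e208ace0a4b7e83e706b19f32cab9")];

            contentType.FieldLinks.Add(new SPFieldLink(cityField));

            // Passing true pushes the change down to all child content types
            // and list instances - the object-model analogue of PushDown="True".
            contentType.Update(true);
        }
    }
}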

The ApplyElementManifests upgrade action can be used to apply new artifacts to a SharePoint 2010 site without reactivating features. Just as you can add new elements to any new SharePoint elements.xml file, you can instruct SharePoint to apply content from a specific elements file to sites where a given feature is activated.

You can use this upgrade action if you are upgrading an existing feature whose FeatureActivating event receiver performs actions that you do not want to execute again on sites where the feature is deployed. The following example demonstrates how to include this upgrade action in a Feature.xml file.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion ="2.0.0.0">
      <ApplyElementManifests>
        <ElementManifest Location="AdditionalV2Fields\Elements.xml"/>
      </ApplyElementManifests>
    </VersionRange>
  </UpgradeActions>
</Feature>

An example of a use case for this upgrade action involves adding new .webpart files to a feature in a site collection. You can use the ApplyElementManifests upgrade action to add those files without reactivating the feature. Another example involves page layouts that contain initial Web Part instances defined in the file element structure of the feature element file. If you reactivate this feature, you will get duplicates of these Web Parts on each of the page layouts. In this case, you can use the ElementManifest element of the ApplyElementManifests upgrade action to add new page layouts to a site collection that uses the feature without reactivating the feature.

The MapFile element enables you to map a URL request to an alternative URL. The following example demonstrates how to include this upgrade action in a Feature.xml file.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <MapFile FromPath="Features\MapPathDemo_MapPathDemo\PageDeployment\MyExamplePage.aspx"
             ToPath="Features\MapPathDemo_MapPathDemo\PageDeployment\MyExamplePage2.aspx" />
  </UpgradeActions>
</Feature>

Mapping URLs in this way would be useful to you in a case where you have to deploy a new version of a page that was customized by using SharePoint Designer 2010. The resulting customized page would be served from the content database. When you deploy the new version of the page, the new version will not appear because content for that page is coming from the database and not from the file system. You could work around this problem by using the MapFile element to redirect requests for the old version of the page to the newer version.

It is important to understand that the FeatureUpgrading method is called once for each feature instance that will be updated. If you have 10 sites in your site collection and you update a web-scoped feature, the feature receiver will be called 10 times, once for each site context. For more information about how to use these new declarative feature elements, see Feature.xml Changes.

Upgrading SharePoint 2010 Features: A High-Level Walkthrough

This section describes at a high level how you can put these feature-versioning and upgrading capabilities to work. When you create a new version of a feature that is already deployed on a large SharePoint 2010 farm, you must consider two different scenarios: what happens when the feature is activated on a new site and what happens on sites where the feature already exists. When you add new content to the feature, you must first update all of the existing definitions and include instructions for upgrading the feature where it is already deployed.

For example, perhaps you have developed a content type to which you must add a custom site column named City. You do this in the following way:

  1. Add a new element file to the feature. This element file defines the new site column and modifies the Feature.xml file to include the element file.
  2. Update the existing definition of the content type in the existing feature element file. This update will apply to all sites where the feature is newly deployed and activated.
  3. Define the required upgrade actions for the existing sites. In this case, you must ensure that the newly added element file for the additional site column is deployed and that the new site column is associated with the existing content types. To achieve these two objectives, you add the ApplyElementManifests and the AddContentTypeField upgrade actions to your Feature.xml file.

When you deploy the new version of the feature to existing sites and upgrade it, the upgrade actions are applied to sites one by one. If you have defined custom upgrade actions, the FeatureUpgrading method will be called as many times as there are instances of the feature activated in your site collection or farm.

Figure 12 shows how the different components of this scenario work together when you perform the upgrade.

Figure 12. Components of a feature upgrade that adds a new element to an existing feature

Different sites might have different versions of a feature deployed on them. In this case, you can create version ranges, which define specific actions to perform when you are upgrading from one version to another. If a version range is not defined, all upgrade actions will be applied during each upgrade.

Figure 13 shows how different upgrade actions can be applied to version ranges.

Figure 13. Applying different upgrade actions to version ranges.

In this example, if a given site is upgrading directly from version 1.0 to version 3.0, all configurations will be applied because you have defined specific actions for upgrading from version 1.0 to version 2.0 and from 2.0 to version 3.0. You have also defined actions that will be applied regardless of feature version.

Code Design Guidelines for Upgrading SharePoint 2010 Features

To provide more flexibility for your code, you should not place your upgrade code directly inside the FeatureUpgrading event receiver. Instead, put the code in some centralized location and refer to it inside the event receiver, as shown in Figure 14.

Figure 14. Centralized feature upgrade manager

By placing your upgrade code inside a centralized utility class, you increase both the reusability and the testability of your code, because you can perform the same actions in multiple locations. You should also try to design your custom upgrade actions as generically as possible, using parameters to make them applicable to specific upgrade scenarios.
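
A minimal sketch of such a centralized utility class is shown below. The class and method names follow the FeatureUpgradeManager calls in the earlier event receiver example, but the method bodies are illustrative placeholders:

using System;

public static class FeatureUpgradeManager
{
    public static void UpgradeAction1(string parameter1, string parameter2, string parameter3)
    {
        // Perform the first upgrade action here, driven entirely by its
        // parameters so the same method can serve several upgrade scenarios.
    }

    public static void UpgradeAction2(string parameter1, string parameter2, string parameter3)
    {
        // Perform the second upgrade action here.
    }
}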

Solution Lifecycles: Upgrading SharePoint 2010 Solutions

If you are upgrading a farm (full-trust) solution, you must first deploy the new version of your solution package to a farm.

Execute either of the following scripts from a command prompt to deploy updates to a SharePoint farm. The first example uses the Stsadm.exe command-line tool.

stsadm -o upgradesolution -name solution.wsp -filename solution.wsp

The second example uses the Update-SPSolution Windows PowerShell cmdlet.

Update-SPSolution -Identity contoso_solution.wsp -LiteralPath c:\contoso_solution_v2.wsp -GACDeployment

After the new version is deployed, you can perform the actual upgrade, which executes the upgrade actions that you defined in your Feature.xml files.

A farm solution upgrade can be performed either farm-wide or at a more granular level by using the object model. A farm-wide upgrade is performed by using the Psconfig command-line tool, as shown in the following example.

psconfig -cmd upgrade -inplace b2b
Note: This tool causes a service break on the existing sites. During the upgrade, all feature instances throughout the farm for which newer versions are available will be upgraded.

You can also perform upgrades for individual features at the site level by using the Upgrade method of the SPFeature class. This method causes no service break on your farm, but you are responsible for managing the version upgrade from your code. For a code example that demonstrates how to use this method, see SPFeature.Upgrade.

Upgrading a sandboxed solution at the site collection level is much more straightforward. Just upload the SharePoint solution package (.wsp file) that contains the upgraded features. If you have a previous version of a sandboxed solution in your solution gallery and you upload a newer version, an Upgrade option appears in the UI, as shown in Figure 15.

Figure 15. Upgrading a sandboxed solution

After you select the Upgrade option and the upgrade starts, all features in the sandboxed solution are upgraded.

Conclusion

This article has discussed some considerations and examples of Application Lifecycle Management (ALM) design that are specific to SharePoint 2010, and it has also enumerated and described the most important capabilities and tools that you can integrate into the ALM processes that you choose to establish in your enterprise. The SharePoint 2010 feature framework and solution packaging model provide flexibility and power that you can put to work in your ALM processes.

How To : Automate a Test Case in Microsoft Test Manager & Setup Automated Build-Deploy-Test Workflows

To automate a test case, link it to a coded test method. You can link any unit test, coded UI test, or generic test to a test case. You’ll want to link a test method that performs the test described by the test case. Typically these are integration tests.

The results of automated and manual tests appear together. If the test cases are linked to backlog items, stories, or other requirements, you can review the test results by requirement.

You can make links one at a time, or you can generate test cases from an assembly of test classes.

  1. Using Visual Studio, create or choose a test method. It can be an ordinary test method, a coded UI test, ordered test, or a generic test method.

    Check the method into Team Foundation Server.

    Keep the solution open in Visual Studio.

  2. Open the test case in Visual Studio.
  3. Associate the test method with your test case.

    If you want to change or delete the association later, choose Remove Association.

We don’t recommend linking load tests or web tests to test cases.

  1. Open a Developer Command Prompt, and change directory to the output directory of your Visual Studio solution.

    cd MySolution\MyProject\bin\Debug

  2. To import all the test methods from the solution:

    tcm testcase /collection:CollectionUrl /teamproject:MyProject /import /storage:MyAssembly.dll /category:"MyIntegrationTestCategory"

    The category parameter is optional but recommended. You only want to create test cases from integration or system tests, which you can mark by using the [TestCategory("category")] attribute, as shown in the sketch after this list.

  3. In the Test hub in Team Web Access or in Microsoft Test Manager, use Add Existing to add the test cases to a test suite.
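
For reference, here is a minimal sketch of a test method carrying the category attribute from step 2 (the class name, method name, and assertion are illustrative):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderServiceIntegrationTests
{
    [TestMethod]
    [TestCategory("MyIntegrationTestCategory")]
    public void SubmitOrder_PersistsOrderToDatabase()
    {
        // Arrange, act, and assert against the deployed system under test.
        Assert.IsTrue(true); // placeholder assertion
    }
}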

Provide the build location so that the test method can be found.

  1. In Microsoft Test Manager, choose Testing Center, Plan, Properties.
  2. Under Builds, set Filter for builds. You can set the build definition and quality attribute of the builds you want to choose from.
  3. Choose Modify to assign a build to the test plan. You can compare your current build with a build you plan to take. The associated items list shows the changes to work items between the builds. You can then assign the latest build to take and use for testing with this plan. For more information, see What development has been done since a previous build?.
Q: I’m not using Team Foundation Build to build my application and tests. How can I run automated lab tests?
A: Create a build definition that contains just the location where your assemblies are shared. Then create a fake instance of this build from the developer command prompt:

TfsCreateBuild.exe /collection:http://tfsservername:8080/tfs/collectionname /project:projectname /builddefinition:"MyBuildDefinition" /buildnumber:"FakeBuild_1.0"

Specify the build definition in your test plan.

To run your automated tests using Microsoft Test Manager, you must use a lab environment. It must have roles for each of the client and server machines used in your tests. (If you’ve used lab environments for manual tests, notice that automated tests must have a machine for the client role.)

  1. Create or choose either a standard lab environment or an SCVMM lab environment.

    If you create a new environment, choose a machine for each role.


    If you’re planning to run coded UI tests, configure the test agent on the Advanced page of the wizard. This sets the test agent to run as a user. You have to supply a user name under which the agent will run.

    We recommend that you use a different user account than the lab service account used by the test controller.

  2. Set the test plan to use your environment for automated tests.
  3. If you want to collect more than the basic diagnostic data from the test machines, create a test settings file.

    In the test settings wizard, choose the data you want to collect for each machine.


Start automated tests the same way you do manual tests.

In Microsoft Test Manager, choose Testing Center, Test. Then select a test suite or an individual test and choose Run.

If you want to run a test in a different environment or with different test settings, choose Run with Options.

If you want to run an automated test manually, choose Run with Options.

If you have multiple build configurations, the test assemblies to run the automated tests are searched for recursively from the root directory of the build drop folder. If it is important which assemblies are selected when you run your automated tests, you should use Run with options to specify the build configuration.

  1. In Microsoft Test Manager, choose Testing Center, Test, Analyze Test Runs.
  2. Double-click a test run to open it and view the details. You can:
    • Update the title of the test run to reflect the outcome.
    • Choose Resolution to indicate a reason, if the test failed.
    • Add comments.
    • View the details of an individual test.
    • Create a bug.
Q: Can I generate the test method from a manual run of the test case?
A: Yes. See Verifying Code by Using UI Automation (various posts about Coded UI test automation can be found on my blog).

Q: I want my automated test to repeat with different data. Do I use the same test parameters that the manual version of the test case uses?
A: To make the automated test iterate over different data, write that into the code of the test method.

Test parameters are only used with the manual version of the test. They aren’t visible to the automated test code.
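
One simple way to iterate over your own data inside the test method is shown in the sketch below (the data values and the DiscountCalculator class are hypothetical, stubbed in so the example is self-contained):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DiscountCalculationTests
{
    [TestMethod]
    [TestCategory("MyIntegrationTestCategory")]
    public void CalculateDiscount_ReturnsExpectedValues()
    {
        // The automated test iterates over its own data set rather than
        // reading the manual test case's parameters.
        var cases = new[]
        {
            new { Quantity = 1,   ExpectedDiscount = 0.00m },
            new { Quantity = 10,  ExpectedDiscount = 0.05m },
            new { Quantity = 100, ExpectedDiscount = 0.10m }
        };

        foreach (var c in cases)
        {
            decimal actual = DiscountCalculator.Calculate(c.Quantity);
            Assert.AreEqual(c.ExpectedDiscount, actual,
                "Unexpected discount for quantity {0}", c.Quantity);
        }
    }
}

static class DiscountCalculator
{
    // Hypothetical system under test, stubbed here to keep the sketch runnable.
    public static decimal Calculate(int quantity)
    {
        if (quantity >= 100) return 0.10m;
        if (quantity >= 10) return 0.05m;
        return 0.00m;
    }
}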

Automated build-deploy-test workflows

You can use a build-deploy-test workflow on Team Foundation Server to deploy and test your application when you run a build. This lets you schedule and run the build, deployment, and testing of your application with one build process. Build-deploy-test workflows work with Lab Management to deploy your applications to a lab environment and run tests on them as part of the build process.

If your lab environment is an SCVMM environment, you can also use workflows to create and restore snapshots that automatically create a clean environment before you run tests, and to save the state of your environment when a test fails. This ensures that each test isn’t influenced by changes to the lab environment from previous test runs. In addition, it ensures that testers can accurately reproduce the state of a lab environment when they reproduce bugs.

Requirements

  • Visual Studio Ultimate, Visual Studio Premium, Visual Studio Test Professional

You can use a build-deploy-test workflow in the following scenarios:

Tip: Build, or Build and Test: If you are building your application in a drop folder without deploying it to a lab environment, then you can use the default build process template. For more information, see Use the Default Template for your build process. If you also want to test your application without deploying it, see Run tests in your build process.
  • Build, Deploy, and Test − Build your application, then deploy it and run tests on it in a lab environment. This workflow enables you to run a series of tests from a test plan, on a deployed application, as part of your build process. This scenario is common when running build verification tests.
  • Deploy and Test − This scenario is similar to the “build, deploy, and test” scenario, except a new build isn’t created during the workflow. Instead, the workflow uses an existing build from a drop folder.
  • Deploy Only – Deploy an existing build from a drop folder to a lab environment without running automated tests during the workflow. Once a build has passed your build verification tests, and is ready to be sent to a test team, you might want to send that build to the test team so they can run additional tests that aren’t part of your workflow. This scenario is common when running manual tests.
  • Build and Deploy – This scenario is similar to the “deploy only” scenario, except a new build is created during the workflow.

A build-deploy-test workflow is a Windows Workflow file that defines how a build definition will run a build, deploy an application, and run tests. A build-deploy-test workflow is created in a build definition by choosing a build process template called the lab default template (LabDefaultTemplate.11.xaml), and configuring the settings.

You can also create a customized build process template for your workflow depending on your requirements. You configure your build definition after you set up your build machine, test machines, and lab environments.

The deployment settings in a build-deploy-test workflow define how an application is deployed by specifying the deployment scripts to run in your lab environment. You can specify a lab management role to run each deployment script on, or you can specify a machine in your lab environment.

Creating deployment scripts is a major part of setting up build-deploy-test workflows. Deployment scripts copy files from your build to your lab environment, and then run your installation packages.

The following diagram describes how a build is deployed by a build-deploy-test workflow:

Dataflow for deployment scripts.

The diagram above illustrates the following steps:

  1. The build-deploy-test workflow starts a build, and then gets the deployment scripts.
  2. The build definition copies the build files to the drop location.
  3. The workflow runs each deployment script in the working directory of the specific machine or machine role that the script is assigned to.
  4. Each deployment script retrieves build files from the drop location.
  5. Each deployment script copies or installs the specified build files onto machines in the lab environment.

A Look At : DevOps and DNS – What Every Developer Should Know

Over the years I have had the opportunity to work alongside many really smart, switched-on people in the development community, and I’ve learned many intermediate and advanced programming skills from them. Yet when it comes to understanding the very basics of how the internet functions using DNS, most of these very same experienced developers haven’t got a clue.

I wrote this post to hopefully help pay back some of the awesome karma they have earned helping me over the years, by teaching them something in return. Let’s learn about DNS.

DNS is a huge part of the inner workings of the internet. Development teams spend a considerable number of man-hours a year ensuring the sites they build are fast and respond well to user interaction – setting up expensive CDNs, recompressing images, minifying script files and much more – but what a lot of us don’t understand is that DNS server configuration can make a big difference to the speed of your site. Hopefully, by the end of this post you’ll feel empowered to get the most out of this part of your website’s configuration.

What I will cover in this post:

Why does DNS matter to you?

Well it’s simple – if you are a developer it matters to you because:

  • You own a domain name, and up until now your webhost has taken care of your DNS for you – but you need to know what’s going on in case something bad happens…
  • Maybe the webhost you have allows you to manage your own DNS using a web interface, but you haven’t a clue what you are doing.
  • The DNS that your webhost or ISP offers you is probably not the fastest – if your website grows over time, you probably want to setup your own DNS or manage it through a dedicated service such as DNSMadeEasy, ZoneEdit or DynDNS.

First up: How the internet works (DNS)

If you already know how this works, feel free to skip ahead.


In really simple terms, when you enter a URL and hit enter, apart from magical unicorns rendering the requested page in your browser window, the interwebs works kinda like this:

  • You want to visit a domain name, so your PC first checks its internal DNS cache to see if it’s looked it up recently – if so, it uses this record.
  • Your PC then asks your DNS server (probably configured by your router or ISP when you first started your PC) for the IP address of the server hosting the domain name you want to visit.
  • Your ISP’s DNS server looks up the root DNS servers for the world to find out who takes care of the DNS configuration for the domain you want to visit.
  • Your ISP’s DNS server then asks this authoritative DNS server for the IP information of the domain name you want, fetches it, caches it, and then returns it to your PC.
  • Your browser connects to this IP address and asks for a web page.

There are a number of different scenarios that play a role in special circumstances with the above but I’m not really going to cover everything in this post.
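Incidentally, if you ever want to perform that hostname-to-IP step from code, the .NET framework exposes the operating system’s resolver through System.Net.Dns. A minimal sketch (www.example.com is just a placeholder hostname):

using System;
using System.Net;

class DnsLookupDemo
{
    static void Main()
    {
        // Asks the OS resolver (local cache first, then the configured
        // DNS server) for the addresses of a hostname - the same steps
        // listed above, just driven from code.
        IPAddress[] addresses = Dns.GetHostAddresses("www.example.com");

        foreach (IPAddress address in addresses)
        {
            Console.WriteLine(address);
        }
    }
}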

What DNS does do:

  • Converts hostnames to IP addresses.
  • Stores mail delivery information for a domain.
  • Stores miscellaneous information against a domain name (TXT records).

What DNS doesn’t/cannot do

  • Redirect users to a different server/site.
  • Configure which port the client connects to (not entirely true; SRV records are used for protocol/port mappings for some services).

Tools for the Job

One of the coolest things about the tools you’ll need for this blog post is that, independent of which operating system you are using, you almost certainly have everything you need to query and test the DNS configuration of your website installed right now without even knowing it.

The Swiss Army Knife of DNS inspection is the command line tool NSLOOKUP. This is installed by default in nearly every OS you’ll ever need it on.

NSLOOKUP on Windows

NSLOOKUP on Unix/Linux/Mac OSX

Another cool thing is that the usage is the same on most platforms as well.

To run NSLOOKUP, simply open a terminal/command prompt and type:

nslookup


The first thing NSLOOKUP tells you upon launch is the current DNS server that it will use for its lookups.

By default NSLOOKUP will use your current machine’s DNS settings for its DNS lookups. This can sometimes give you different results from the rest of the world as your internal DNS at your place of work/ISP may be returning different results so they can route, say your office mail, to the internal mail server IP rather than the external internet/DMZ IP address.

Let’s change this to use Google’s global DNS server, to get a better global view (what others see when they surf the web outside my network) on our DNS queries, by typing:

server 8.8.8.8

Now if I query this blog’s domain name “diaryofaninja.com.” (ensure you place the additional period on the end of this query to avoid any internal DNS suffixes being added), I should get back the A record for my domain. (A records are the default query type used by NSLOOKUP – I discuss DNS record types further down in this post.)


An overview of common DNS record types

Below is a simple overview of all the common types of DNS records and some example scenarios.

All records usually share the following common properties:

Value – this is usually the contents of the record. If it is an A record, this is the IP address for that hostname.

TTL – this is the “Time To Live” in seconds for a DNS record, and basically means that DNS clients or servers accessing the requested record should not cache the record any longer than this value. If this value is set to 3600, the returned record’s value may be cached for an hour. (These values are the reason that IT people talk about DNS changes taking “24–48 hours” to propagate – TTLs are usually set quite high on hostnames that are fairly static, so that they offer the best performance by being kept in cache.)
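To make that caching rule concrete, here is a tiny illustrative sketch in C# of what every DNS resolver does with a TTL. None of these class names come from a real DNS library – they exist purely to show the idea:

using System;
using System.Collections.Generic;

// Illustrative only: shows how a DNS client honours a record's TTL.
class TtlCache
{
    private class Entry
    {
        public string Value;
        public DateTime ExpiresAt;
    }

    private readonly Dictionary<string, Entry> _cache = new Dictionary<string, Entry>();

    public void Store(string hostname, string value, int ttlSeconds)
    {
        // The record may be served from cache until its TTL expires.
        _cache[hostname] = new Entry
        {
            Value = value,
            ExpiresAt = DateTime.UtcNow.AddSeconds(ttlSeconds)
        };
    }

    public bool TryGet(string hostname, out string value)
    {
        Entry entry;
        if (_cache.TryGetValue(hostname, out entry) && DateTime.UtcNow < entry.ExpiresAt)
        {
            value = entry.Value;   // still fresh: no network query needed
            return true;
        }
        value = null;              // expired or missing: re-query the DNS server
        return false;
    }
}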

SOA (Start of Authority) records

SOA records (start of authority records) are the root of your domain’s registration. SOA records are created by your domain name registrar in the parent domain’s DNS servers (in the case of a .com domain, the SOA record is created in the DNS servers for the .com root domain). An SOA record stores the hostnames or IP addresses of your domain’s DNS servers. These tell the internet’s root DNS servers (the mother-ship DNS servers) where to ask for the rest of your domain’s DNS configuration (such as A, MX and TXT records). When a client (a web browser, a mail server, an FTP client etc.) wants to connect to part of your website, it asks its locally configured DNS server for the record – that server in turn looks up the SOA record for your domain so it knows which DNS server to ask about it.

Consider these records as the source of “which DNS server stores all the information about the website I want to look up”.

Hostname (A and CNAME) records

A records store information about a hostname record for your domain name. These list the IP address that a client should talk to when using a certain hostname.

If you typed an address such as http://mywebsite.examplecompany.com into your web browser, this would refer to the A record “mywebsite” on the domain “examplecompany.com”.

If you have multiple A records with the same hostname, clients will receive a list of all the records. The order of this list rotates each time you query the DNS server – this is called round-robin DNS, and it is a simple way to spread load across multiple servers.

AAAA records are the same as A records, only they store the 128-bit IPv6 address of a server instead of the IPv4 address – as the world shifts to using IPv6 these records will gain more relevance, and if your webhost supports IPv6 it’s worth setting these records up now, so that any visitors using IPv6 can access your website.

A CNAME record (canonical name) is basically an alias for an A record. This tells whoever is asking that the DNS information for the requested hostname is stored in another record somewhere else on the internet. This other record might not even be on the same domain name or on the same DNS server. CNAMEs are very powerful as they allow you to simplify your domain’s DNS records by centralising the information somewhere else. ISPs and webhosts commonly use CNAMEs to centralise the DNS configuration storage for things like mail or web servers, by allowing them to keep all the configuration details on a parent domain name.

It is important to note that the root record for a domain name (i.e. the empty A record for mydomain.com) cannot be a CNAME. The simple hard-and-fast reason why is that a CNAME cannot live on the same node in a DNS forest as any other type of record – the very nature of a CNAME record defines that all configuration for that node is stored somewhere else, and given that you store other information at the root of your domain besides your A record (MX records for mail etc.), this would break every other record’s functionality. This is mentioned specifically in the RFC for DNS, section 3.6.2.

An example of CNAME usage is that most webhosting companies’ web servers have a hostname such as web0234.mywebhost.com.

When setting up your website, your webhost might for instance make the http://www.yourwebsite.com record for your website a CNAME with the value web0234.mywebhost.com, so that when trying to access http://www.yourwebsite.com, DNS clients look up the IP address for web0234.mywebhost.com. This makes the webhost’s life easier if the IP address for this web server changes, as they only have to update a single DNS record, instead of updating all their clients’ DNS records.

To reiterate this to make it crystal clear:
CNAMEs are not a redirection. They are a reference pointer for a hostname. All they tell DNS clients is that the configuration information for the hostname being queried is the same as can be found by querying the other hostname.

Illustration – Visiting a website

In the case that you want to visit http://www.google.com your computer does the following:

  1. Using the local machine’s DNS client your operating system talks to the locally configured DNS server for your local network/ISP’s network.
  2. This DNS server in turn looks up the DNS server for google.com, by first looking up the SOA record for google.com and then connecting to the DNS server listed.
  3. Your local DNS server then asks the DNS server for google.com for the A record listed for www – the google.com DNS server will return an IP address for http://www.google.com. Your ISP or local network’s DNS server, along with returning it to you, will then cache this record for as long as the TTL (time to live) property of the record says.
  4. Your browser then connects to this returned IP address listed on port 80 and asks for the web page.

All of the above happens in milliseconds – but you can understand that if the google.com DNS server is slow in responding this negatively affects your browsing experience.

A records and CNAME records have a TTL (Time To Live) property to indicate how long they can be cached for.

Mail (MX)

MX records are the internet’s way of telling mail where to be delivered. They list the hostname or IP address of the mail server that handles mail for a given domain name. If a mail server is looking to deliver mail to “examplecompany.com”, it will look up the MX record for this domain.

MX records have both a TTL (Time To Live) and a Priority (a weighting to give the order in which they should be looked up).

Illustration – Sending an e-mail to a friend

In the case that you send an email to your friend at myfriend@otherexamplecompany.com your local SMTP mail server (usually at your ISP) does the following:

  1. Your mail server connects to its local network/ISP’s DNS server and asks for the MX record for otherexamplecompany.com.
  2. Your local DNS server or ISP’s DNS server looks up the SOA record for otherexamplecompany.com and then connects to the DNS server listed.
  3. It asks for the MX records for this domain and is returned a list of hostnames.
  4. It grabs the first hostname from the list (ordered ascending by priority), runs a second query for the IP address of this mail server, and returns that IP address to your mail server.
  5. Your mail server then connects to this IP address on the SMTP TCP port 25 and delivers your mail.
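The .NET base class library has no built-in MX query, so if you wanted to replicate steps 1–4 in code you would need a resolver library. The sketch below assumes the open-source DnsClient NuGet package; treat the exact API as an assumption and check the package’s documentation before relying on it:

using System;
using DnsClient;            // NuGet package: DnsClient (assumed)
using DnsClient.Protocol;   // for the MxRecords() helper (assumed)

class MxLookupDemo
{
    static void Main()
    {
        // Uses the machine's configured DNS servers, like an SMTP server would.
        var lookup = new LookupClient();
        var result = lookup.Query("otherexamplecompany.com", QueryType.MX);

        // The lowest preference value is the highest-priority mail server.
        foreach (MxRecord mx in result.Answers.MxRecords())
        {
            Console.WriteLine("{0} (preference {1})", mx.Exchange, mx.Preference);
        }
    }
}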

Text Records (TXT)

TXT records are a powerful addition to the DNS standard that allow the storage of miscellaneous information for a hostname. Many web developers, system admins and the like use TXT records for the storage of information such as SPF records and DKIM public keys.

TXT records have a TTL (Time To Live) property to indicate how long they can be cached for.

Name Server Records (NS)

Name Server Records are placed in your domain’s DNS when you wish to store the configuration of part of your domain’s DNS on a separate DNS server. This can be very handy if you want to give control of a subdomain to another person/entity.

For example: my site is http://www.widgetsareus.com and I manage all of the DNS for this domain, but I would like support.widgetsareus.com and any child subdomains to be managed by the company we outsource all of our customer support to – therefore I set up an NS record for support.widgetsareus.com that points at our support partner’s DNS servers.

Setting up a domain from scratch

If you are setting up a domain you’ve just purchased from scratch you’ll need to do the following:

Setup your website (A records)

  1. Setup a DNS server to store the configuration for yourdomain.com
    This might be at your webhost, or might be a third party service such as DNSMadeEasy, ZoneEdit or DynDNS.
  2. Set the Nameserver SOA records for your domain name to the above DNS server’s IP address or hostname (at your domain registrar)
  3. Create a new root record to point at your webserver’s IP address (this is simply an A record with an empty hostname) in your domain name’s DNS forest.
  4. Create a new www A record that points at your webserver’s IP address in your domain’s DNS server
  5. Setup your webserver’s website to listen for the host-header of your domain name (IIS calls this a “binding”).
  6. Test your DNS as below.
  7. Try and access your site in a web browser.

Testing your website’s A record

In a command prompt/terminal type NSLOOKUP

Enter “yourdomainname.com.” (including the extra period on the end) and hit enter

Check that the returned record value/IP address is that of your web server.


Remember to do the same for “www.yourdomainname.com.” if you also use www. in your domain name.

Setup your website’s mail (MX record)

  1. Setup a DNS server to store the configuration for yourdomain.com  (Follow steps 1 and 2 above from your website if you haven’t already).
  2. Create a new MX record that points at your mail servers IP address or hostname.
  3. Setup your mail server to listen to receive mail for yourdomain.com
  4. Test that all the above is setup correctly using nslookup as per below.
  5. Try and send and receive email to and from your domain name.
  6. Setup SPF records, to verify your mail server’s ability to send mail on behalf of your domain name

Testing your website’s MX record

In a command prompt/terminal type NSLOOKUP

Enter “set type=mx” and hit enter. This sets the query type to MX records.

Enter “yourdomainname.com.” (including the extra period on the end) and hit enter

Check that the returned record value/IP address(es) is that of your mail server.


Investigating Common Problems

How do I check which DNS server is authoritative for my domain name?

You’ve set up your website’s DNS, everything is fine; then one day, everyone visiting your site is directed to a site that isn’t yours!

To check which DNS server is authoritative for your domain name, first open a command prompt or terminal.

Type “NSLOOKUP” and hit enter

Type “set type=ns” and hit enter. This sets the query type to NS (nameserver) records.

Type “yourdomainname.com.” and hit enter (make sure you put the extra dot on the end.)

Confirm that the nameservers returned are yours.


How do I check what IP address my site is currently pointing at?

In a command prompt/terminal launch NSLOOKUP

Enter “yourdomainname.com.” (including the extra period on the end) and hit enter

Check that the returned record value/IP address is that of your web server.


Remember to do the same for “www.yourdomainname.com.” if you also use www. in your domain name.

What is split DNS?

Split DNS is when you run a separate DNS forest for a domain name both on your external DNS servers (for everyone else to see) and also internally for staff or local users to see.

This allows you to do things like:

  • Ensure local users talk to your mail server (or any other internal server) using the internal IP address, and internet users talk to your mail server’s external DMZ IP address.
  • Block access to certain sites by giving incorrect or different DNS results for these sites’ domain names. This is often how many net-nanny-type products work.

For some users my sites seems to be served from a different address – how do I check “what the world sees” vs. “what I see”?

Many things can occur that result in some people seeing different DNS results to others:

  • Your ISP/company’s DNS server may have an older cached record than the current live record.
  • Your local computer may be caching the DNS record you are requesting
  • Your local DNS server may be fetching the records for your domain from a different authoritative DNS server than the rest of the world.

How do you investigate these things?

The easiest way to investigate these things is to query an external DNS server that you know is good for the records you want, to get a better idea of how the rest of the world sees things.

Really good servers that are easy to remember are the ones run by Google. The primary and secondary DNS servers for Google’s Public DNS system are “8.8.8.8” and “8.8.4.4” respectively.

You can use whatever DNS servers you think are more likely to see the correct values.

To do this, open a command prompt/console.

Type “NSLOOKUP” and hit enter

Type “server 8.8.8.8” and hit enter. This sets the DNS server we will query to the Google Public DNS server’s address.

Type “yourdomainname.com.” and check the resulting record values.


CRM Bulk Export Tool Available for CRM 4.0 and CRM Online!!

CRM 4.0 Bulk Data Export Tool

Microsoft Dynamics CRM 4.0 has no built-in facility to bulk-export data. This sample tool allows users to connect to an OnPremise or Online Microsoft CRM 4.0 organization and export data for CRM entities in the form of CSV files.

Once you have installed the tool, launch CrmDataExport.exe, select the CRM configuration and specify the credentials.

If you are connecting to an OnPremise CRM organization, make sure to open Internet Explorer, connect to the CRM server and save the password. This is necessary because the tool uses stored credentials to connect to the CRM server in the OnPremise configuration.

Once you are connected successfully, you can select the entities whose records you want to export, and specify the output directory, data and field delimiters, and duration. Note that the All Records option is not available for the Online configuration. Click the Export button to export the records.

The tool creates a CSV file for each selected entity in the chosen directory.

For this and other Web Parts, Templates, Apps, Toolkits, etc. for MS CRM, SharePoint, Office 365, contact me at tomas.floyd@outlook.com


How To : Convert Word Documents to PDF using SharePoint Server 2010 and Word Services and Stop Wasting Money on MS Field Engineers

SharePoint’s Word Automation Services is extremely powerful when it comes to converting document types while keeping the formatting.
Combine this with the flexibility of the templates that OpenXML uses and you have a very powerful combination capable of almost anything.
This is part of a Document Management System I developed for Nedbank, which a Microsoft Field Engineer said was impossible in SharePoint – and for which he asked R300 000 just for a few hours of his time for “analysis”.

Word Automation Services, available with SharePoint Server 2010, supports converting Word documents to other formats, including PDF. This article describes using a document library list item event receiver to call Word Automation Services to convert Word documents to PDF when they are added to the list. The event receiver checks whether the list item added is a Word document. If so, it creates a conversion job to create a PDF version of the Word document and pushes the conversion job to the Word Automation Services conversion job queue.

This article describes the following steps to show how to call the Word Automation Services to convert a document:

  1. Creating a SharePoint 2010 list definition application solution in Visual Studio 2010.
  2. Adding a reference to the Microsoft.Office.Word.Server assembly.
  3. Adding an event receiver.
  4. Adding the sample code to the solution.

Creating a SharePoint 2010 List Definition Application in Visual Studio 2010

This article uses a SharePoint 2010 list definition application for the sample code.

To create a SharePoint 2010 list definition application in Visual Studio 2010

  1. Start Microsoft Visual Studio 2010 as an administrator.
  2. On the File menu, point to New, and then click Project.
  3. In the New Project dialog box select the Visual C# SharePoint 2010 template type in the Project Templates pane.
  4. Select List Definition in the Templates pane.
  5. Name the project and solution ConvertWordToPDF.
    Figure 1. Creating the Solution


  6. To create the solution, click OK.
  7. Select a site to use for debugging and deployment.
  8. Select the site to use for debugging and the trust level for the SharePoint solution.
    Note
    Make sure to select the trust level Deploy as a farm solution. If you deploy as a sandboxed solution, it will not work, because the solution uses the Microsoft.Office.Word.Server assembly, which does not allow calls from partially trusted callers.
    Figure 2. Selecting the trust level


  9. To finish creating the solution, click Finish.

Adding a Reference to the Microsoft Office Word Server Assembly

To use Word Automation Services, you must add a reference to the Microsoft.Office.Word.Server assembly to the solution.

To add a reference to the Microsoft Office Word Server Assembly

  1. In Visual Studio, from the Project menu, select Add Reference.
  2. Locate the assembly by using the Browse tab. The Microsoft.Office.Word.Server assembly is located in the SharePoint 2010 ISAPI folder, usually at C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI. After the assembly is located, click OK to add the reference.
    Figure 3. Adding the Reference


Adding an Event Receiver

This article uses an event receiver that uses the Microsoft.Office.Word.Server assembly to create document conversion jobs and add them to the Word Automation Services conversion job queue.

To add an event receiver

  1. In Visual Studio, on the Project menu, click Add New Item.
  2. In the Add New Item dialog box, in the Project Templates pane, click the Visual C# SharePoint 2010 template.
  3. In the Templates pane, click Event Receiver.
  4. Name the event receiver ConvertWordToPDFEventReceiver and then click Add.
    Figure 4. Adding an Event Receiver


  5. The event receiver converts Word documents after they are added to the list. Select the An item was added event from the list of events that can be handled.
    Figure 5. Choosing Event Receiver Settings


  6. Click Finish to add the event receiver to the project.

Adding the Sample Code to the Solution

Replace the contents of the ConvertWordToPDFEventReceiver.cs source file with the following C# code.

 
using System;
using System.Security.Permissions;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Security;
using Microsoft.SharePoint.Utilities;
using Microsoft.SharePoint.Workflow;

using Microsoft.Office.Word.Server.Conversions;

namespace ConvertWordToPDF.ConvertWordToPDFEventReceiver
{
  /// <summary>
  /// List Item Events
  /// </summary>
  public class ConvertWordToPDFEventReceiver : SPItemEventReceiver
  {
    /// <summary>
    /// An item was added.
    /// </summary>
    public override void ItemAdded(SPItemEventProperties properties)
    {
      base.ItemAdded(properties);

      // Verify the document added is a Word document
      // before starting the conversion.
      if (properties.ListItem.Name.Contains(".docx") 
        || properties.ListItem.Name.Contains(".doc"))
      {
        //Variables used by the sample code.
        ConversionJobSettings jobSettings;
        ConversionJob pdfConversion;
        string wordFile;
        string pdfFile;

        // Initialize the conversion settings.
        jobSettings = new ConversionJobSettings();
        jobSettings.OutputFormat = SaveFormat.PDF;

        // Create the conversion job using the settings.
        pdfConversion = 
          new ConversionJob("Word Automation Services", jobSettings);

        // Set the credentials to use when running the conversion job.
        pdfConversion.UserToken = properties.Web.CurrentUser.UserToken;

        // Set the file names to use for the source Word document
        // and the destination PDF document.
        wordFile = properties.WebUrl + "/" + properties.ListItem.Url;
        if (properties.ListItem.Name.Contains(".docx"))
        {
          pdfFile = wordFile.Replace(".docx", ".pdf");
        }
        else
        {
          pdfFile = wordFile.Replace(".doc", ".pdf");
        }

        // Add the file conversion to the conversion job.
        pdfConversion.AddFile(wordFile, pdfFile);

        // Add the conversion job to the Word Automation Services 
        // conversion job queue. The conversion does not occur
        // immediately but is processed during the next run of
        // the document conversion job.
        pdfConversion.Start();

      }
    }
  }
}

Examples of the kinds of operations supported by Word Automation Services are as follows:

  • Converting between document formats (e.g. DOC to DOCX)
  • Converting to fixed formats (e.g. PDF or XPS)
  • Updating fields
  • Importing “alternate format chunks”

This article contains sample code that shows how to create a SharePoint list event handler that can create Word Automation Services conversion jobs in response to Word documents being added to the list. This section uses code examples taken from the complete, working sample code provided earlier in this article to describe the approach taken by this article.

The ItemAdded(SPItemEventProperties) event handler in the list event handler first verifies that the item added to the document library list is a Word document by checking the name of the document for the .doc or .docx file name extension.

// Verify the document added is a Word document
// before starting the conversion.
if (properties.ListItem.Name.Contains(".docx") 
    || properties.ListItem.Name.Contains(".doc"))
{
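One caveat worth noting: Contains will also match a file named, say, Report.docx.zip. A stricter variation (not part of the original sample) compares the actual file extension instead:

// Stricter alternative: compare the real file extension rather
// than searching the whole file name for ".doc"/".docx".
string extension = System.IO.Path.GetExtension(properties.ListItem.Name);
if (extension.Equals(".docx", System.StringComparison.OrdinalIgnoreCase)
    || extension.Equals(".doc", System.StringComparison.OrdinalIgnoreCase))
{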

If the item is a Word document then the code creates and initializes ConversionJobSettings and ConversionJob objects to convert the document to the PDF format.

C#
 
//Variables used by the sample code.
ConversionJobSettings jobSettings;
ConversionJob pdfConversion;
string wordFile;
string pdfFile;

// Initialize the conversion settings.
jobSettings = new ConversionJobSettings();
jobSettings.OutputFormat = SaveFormat.PDF;

// Create the conversion job using the settings.
pdfConversion = 
  new ConversionJob("Word Automation Services", jobSettings);

// Set the credentials to use when running the conversion job.
pdfConversion.UserToken = properties.Web.CurrentUser.UserToken;

The Word document to be converted and the name of the PDF document to be created are added to the ConversionJob.

 
// Set the file names to use for the source Word document
// and the destination PDF document.
wordFile = properties.WebUrl + "/" + properties.ListItem.Url;
if (properties.ListItem.Name.Contains(".docx"))
{
  pdfFile = wordFile.Replace(".docx", ".pdf");
}
else
{
  pdfFile = wordFile.Replace(".doc", ".pdf");
}

// Add the file conversion to the Conversion Job.
pdfConversion.AddFile(wordFile, pdfFile);

Finally the ConversionJob is added to the Word Automation Services conversion job queue.

 
// Add the conversion job to the Word Automation Services 
// conversion job queue. The conversion does not occur
// immediately but is processed during the next run of
// the document conversion job.
pdfConversion.Start();
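Because Start() only queues the job, the conversion happens later, during the next run of the Word Automation Services timer job. If you need to check on the outcome, the same assembly provides a ConversionJobStatus class; a minimal sketch, assuming the same service application name used above:

// After pdfConversion.Start() has queued the job, its JobId can be used
// later (for example from a console utility) to check on the outcome.
ConversionJobStatus status = new ConversionJobStatus(
    "Word Automation Services",   // must match the name used for the ConversionJob
    pdfConversion.JobId,
    null);

Console.WriteLine("Total: {0}, Succeeded: {1}, Failed: {2}",
    status.Count, status.Succeeded, status.Failed);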

How To : Unit Testing Dynamics CRM 2011 Pipeline Plugins using Rhino Mocks

This example aims to unit test the SDK sample plugins, and demonstrates the following:

  • Mocking the pipeline context, target and output property bags.
  • Mocking the Organisation Service.
  • How to assert that exceptions are raised by a plugin.
  • How to assert that the correct Organisation Service method was called with the desired values.

To build the examples, you’ll need the CRM 2011 SDK sample plugins and Rhino Mocks 3.6 (http://ayende.com/Blog/archive/2009/09/01/rhino-mocks-3.6.aspx).

The key principle of mocking is that we can exercise and examine the code that we need to test without executing the bits that are not being tested. By mocking, we fix the behaviour and return values of the dependent code so that we can assert whether the results are what we expect.

This approach supports Test Driven Development (TDD), where the test is written first and then the desired functionality is added in order that the test passes. We can then say we are ‘Done’ when all tests pass.

So in our example, the Followup Plugin should create a task with the regarding id set to the id of the account. So by mocking the pipeline context, we can specify the account id, and check that the resulting task that is created is regarding the same account. Using Rhino Mocks allows us to create a mock Organisation Service and assert that the Create method was called passing a task with the desired attributes set.

 

[TestMethod]
public void FollowupPlugin_CheckFollowupCreated()
{
    RhinoMocks.Logger = new TextWriterExpectationLogger(Console.Out);

    // Setup pipeline execution context stub
    var serviceProvider = MockRepository.GenerateStub<IServiceProvider>();
    var pipelineContext = MockRepository.GenerateStub<Microsoft.Xrm.Sdk.IPluginExecutionContext>();
    serviceProvider.Stub(x => x.GetService(typeof(Microsoft.Xrm.Sdk.IPluginExecutionContext))).Return(pipelineContext);

    // Add the target entity
    ParameterCollection inputParameters = new ParameterCollection();
    inputParameters.Add("Target", new Entity("account"));
    pipelineContext.Stub(x => x.InputParameters).Return(inputParameters);

    // Add the output parameters
    ParameterCollection outputParameters = new ParameterCollection();
    Guid entityId = Guid.NewGuid();

    outputParameters.Add("id", entityId);
    pipelineContext.Stub(x => x.OutputParameters).Return(outputParameters);

    // Create mock OrganisationService
    var organizationServiceFactory = MockRepository.GenerateStub<Microsoft.Xrm.Sdk.IOrganizationServiceFactory>();
    serviceProvider.Stub(x => x.GetService(typeof(Microsoft.Xrm.Sdk.IOrganizationServiceFactory))).Return(organizationServiceFactory);
    var organizationService = MockRepository.GenerateMock<Microsoft.Xrm.Sdk.IOrganizationService>();
    organizationServiceFactory.Stub(x => x.CreateOrganizationService(Guid.Empty)).Return(organizationService);

    // Execute plugin
    FollowupPlugin plugin = new FollowupPlugin();
    plugin.Execute(serviceProvider);

    // Assert the task was created
    organizationService.AssertWasCalled(x => x.Create(Arg<Entity>.Is.NotNull));

    organizationService.AssertWasCalled(x => x.Create(Arg<Entity>.Matches(s =>
            ((EntityReference)s.Attributes["regardingobjectid"]).Id == entityId
            &&
            s.Attributes["subject"].ToString() == "Send e-mail to the new customer.")));
}

The key thing to notice is that the only mock object here is the OrganisationService – all the others are stubs. The difference between a stub and a mock is that a mock records the calls that are made, so that they can be verified after the test has run. In this case we are verifying that the Create method was called with the correct properties set on the task entity.
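To see that distinction in isolation, here is a tiny self-contained example (IGreeter is an invented interface, purely for illustration – it is not part of the CRM SDK):

using Rhino.Mocks;

public interface IGreeter
{
    string Greet(string name);
}

public class StubVsMockExample
{
    public void Demonstrate()
    {
        // A stub just supplies canned values so the code under test can run.
        var stub = MockRepository.GenerateStub<IGreeter>();
        stub.Stub(x => x.Greet("world")).Return("hello world");

        // A mock additionally records its calls so they can be verified later.
        var mock = MockRepository.GenerateMock<IGreeter>();
        mock.Greet("world");

        // Passes only because the mock remembered the call above.
        mock.AssertWasCalled(x => x.Greet("world"));
    }
}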

It’s worth noting the RhinoMocks.Logger assignment. This gives the Rhino logging output in the VS2010 test results; most helpful during debugging asserts that don’t do as you expect.

Looking at the sample Account plugin, it throws an exception when the account number is already set – so let’s look at testing that a plugin throws exceptions under certain conditions.

Unfortunately, the standard MSTest ExpectedExceptionAttribute doesn’t provide the functionality we need here, since we can’t check exception properties such as the message, nor can we run the tests in Debug without the debugger breaking into the code even though the exception is marked as expected. To get around this you can use the following class:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace Kb.Research.RhinoMocks.UnitTests.CustomAssertions
{
    /// <summary>
    /// Custom assertion class for unit testing expected exceptions.
    /// A replacement for the ExpectedException attribute in MSTest.
    /// </summary>
    public static class AssertException
    {
        #region Methods

        /// <summary>
        /// Validates that the supplied delegate throws an exception of the supplied type.
        /// </summary>
        /// <typeparam name="TExpectedExceptionType">Type of exception that is expected to be thrown</typeparam>
        /// <param name="actionThatThrows">Delegate that is expected to throw the exception</param>
        public static void Throws<TExpectedExceptionType>(Action actionThatThrows) where TExpectedExceptionType : Exception
        {
            try
            {
                actionThatThrows();
            }
            catch (Exception ex)
            {
                Assert.IsInstanceOfType(ex, typeof(TExpectedExceptionType), String.Format("Expected exception of type {0} but exception of type {1} was thrown.", typeof(TExpectedExceptionType), ex.GetType()));
                return;
            }
            Assert.Fail(String.Format("Expected exception of type {0} but no exception was thrown.", typeof(TExpectedExceptionType)));
        }

        /// <summary>
        /// Validates that the supplied delegate throws an exception of the supplied type with the expected message.
        /// </summary>
        /// <typeparam name="TExpectedExceptionType">Type of exception that is expected to be thrown</typeparam>
        /// <param name="expectedMessage">Expected message that will be included in the thrown exception</param>
        /// <param name="actionThatThrows">Delegate that is expected to throw the exception</param>
        public static void Throws<TExpectedExceptionType>(string expectedMessage, Action actionThatThrows) where TExpectedExceptionType : Exception
        {
            try
            {
                actionThatThrows();
            }
            catch (Exception ex)
            {
                Assert.IsInstanceOfType(ex, typeof(TExpectedExceptionType), String.Format("Expected exception of type {0} but exception of type {1} was thrown.", typeof(TExpectedExceptionType), ex.GetType()));
                Assert.AreEqual(expectedMessage, ex.Message, String.Format("Expected exception with message '{0}' but exception with message '{1}' was thrown.", expectedMessage, ex.Message));
                return;
            }
            Assert.Fail(String.Format("Expected exception of type {0} but no exception was thrown.", typeof(TExpectedExceptionType)));
        }

        #endregion
    }
}

So, we want to test that if the account number is already set on execution of the AccountNumberPlugin, then an exception is raised:

[TestMethod]
public void AccountNumberPlugin_CheckExceptionIfAccountNumberSetAlready()
{
    // Setup pipeline execution context stub
    var serviceProvider = MockRepository.GenerateStub<IServiceProvider>();
    var pipelineContext = MockRepository.GenerateStub<Microsoft.Xrm.Sdk.IPluginExecutionContext>();
    serviceProvider.Stub(x => x.GetService(typeof(Microsoft.Xrm.Sdk.IPluginExecutionContext))).Return(pipelineContext);

    // Add the target entity with the account number already set
    ParameterCollection inputParameters = new ParameterCollection();
    inputParameters.Add("Target", new Entity("account")
    {
        Attributes = new AttributeCollection
        {
            new KeyValuePair<string, object>("accountnumber", "123")
        }
    });

    pipelineContext.Stub(x => x.InputParameters).Return(inputParameters);

    // Test that an exception is thrown if the account number already exists
    AccountNumberPlugin plugin = new AccountNumberPlugin();
    AssertException.Throws<InvalidPluginExecutionException>(
        "The account number can only be set by the system.",
        delegate { plugin.Execute(serviceProvider); });
}

The examples above show how to test plugins that don’t call any external services or code – where all dependencies are discovered via the execution context. Next time I’ll provide an example of how to mock an external service using inversion of control.

New Highly Customisable SharePoint CRM Template Available

A CRM/Project Management Site Template for SharePoint 2010 Enterprise or SharePoint Online tenants.
This extensive solution offers the following features:

  • User Friendly – Due to a custom User Interface & Pre-Populated InfoPath forms where possible
  • Contacts Management
  • Project Management – Associated sub tasks, documents & sales
  • Products & Services Catalog
  • Sales Register & Invoice Generation
  • Client Enquiry – Showing any items related to a client
  • Reporting
  • Integrated User Guide

To give you an idea of how SharePoint CRM looks, below is a selection of screenshots:

Home Screen

The buttons displayed are defined by a list within SharePoint CRM, so they can easily be modified.

Contacts

Project Management

Sales Register

Being SharePoint, all aspects of the SharePoint CRM Template can be customised to meet your organisation’s needs.

For example, to customise the home page:

Customising your homepage

Rather than the buttons on the homepage being hardcoded, they are defined by a list within the CRM site. This means you can easily add/remove/edit buttons using just your web browser. Here’s how to do so:

  1. From the Site Actions, choose View All Site Content
  2. Open the PortalMenu list

Items can then be edited in the same way as any other SharePoint 2010 list, below is a description of options available:

  • Section – Defines which section the button will be shown in on your homepage
  • Order – Defines the order of the buttons within a section
  • Button Name – The text that will be displayed within the button
  • Link – The URL that users will be taken to when the button is clicked
  • Hover – The text shown when a user hovers over a button
  • Dialog – Specifies whether the page that the button links to will open in a popup dialog box
  • New Project Form – If this option is checked, the Link field will be ignored and the button will open the new project form.

Adding new Sections

New sections can be added to the homepage, but this will require the use of SharePoint Designer:

  1. Open the PortalMenu list as described above, then go to the List Settings page
  2. Edit the Section column, then add the name of your new section as a choice
  3. Create a new list item with your new choice set within the Section field
  4. Navigate to your homepage, and set the page to edit mode
  5. Export any section’s web part, then import the web part and add it to any zone
  6. Using SharePoint Designer open the homepage (SitePages\default.aspx)
  7. Select the new web part, then update the filter to match the new choice you added to the Section column

This template and other SharePoint Web Parts, Apps, custom SharePoint templates and tools for SharePoint, Azure and Office 365 are available by contacting me through my website at https://sharepointsamurai.wordpress.com/

New SharePoint 2010 & 2013, Online Connector for Outlook available!!

SharePoint Connector For Outlook makes it easier for Office users to upload emails to SharePoint and attach SharePoint documents to an email message.

 

Please contact me through my blog or at tomas.floyd@outlook.com for more information on pricing, licensing and trials.

 


FEATURES

  • Workflow integration
  • Attach items as attachments into email item functionality
  • Multiple site configuration
  • Check Out/Check In functionality
  • Drag & drop files (attachments) into a folder
  • Copy, delete and open item/document functionalities
  • Version history for files
  • Saving email metadata to a SharePoint document item (title, to, from etc.)
  • Works with all SharePoint versions
  • Saving an email message as a list item and its attachments as attachments of the list item
  • Template-based search functionality
  • Windows Explorer right-click upload
  • Editing metadata on uploading a document
  • File system integration (this allows you to integrate Live Mesh folders into Outlook)
  • Advanced alert system
  • Library content viewer

SharePoint Online: Software Boundaries, Limits and Planning Guide

This article describes some important limitations that you might need to know for different SharePoint Online plans in Office 365. For example, it provides information about the number of supported users, storage quotas, and file-size limits. This article covers a range of plans: SharePoint Online in Office 365 Small Business and in Office 365 Enterprise, plus standalone plans. The limits listed are for paid subscriptions; you might see different limits for trial plans and SharePoint Online preview sites.

Note: In Office 365 plans, software boundaries and limits for SharePoint Online are managed separately from mailbox storage limits. Mailbox storage limits are set up and managed by using Exchange Online. For more information about how Exchange manages mailbox limits, see Mailbox types and storage limits for Recipients.


SharePoint Online Feature availability

Need help determining which SharePoint solution best fits your organization’s needs?

The various Office 365 plans include different SharePoint Online offerings. These include:

  • SharePoint Online for Office 365 Small Business
  • SharePoint Online for Office 365 Midsize Business
  • SharePoint Online for Office 365 Enterprise, Education, and Government

You can choose the plan that best fits your organization’s needs. Each person who accesses the SharePoint Online service must be assigned to a subscription plan. SharePoint Online can be included in a Microsoft Office 365 plan, or it can be purchased as a standalone plan, such as SharePoint Enterprise Plan 1 or SharePoint Enterprise Plan 2.

Limits in SharePoint Online in Office 365 plans

In this section:

Limits for SharePoint Online for Office 365 Small Business

SharePoint Online Small Business and SharePoint Online Small Business Premium have common boundaries and limits, described below.

  • Storage per user (contributes to total storage base of tenant) – 500 megabytes (MB) per subscribed user.
  • Site collection quota limit – Up to 1 TB per site collection (25 GB for a trial). The minimum storage allocation per site collection is 100 MB.
  • Site collections (#) per tenant – 1 site collection per tenant.
  • Subsites – Up to 2,000 subsites per site collection.
  • Total available tenant storage – 10 GB + 500 MB per user. For example, if you have 10 users, the base storage allocation is 15 GB (10 GB + 500 MB * 10 users). You can purchase additional storage up to a maximum of 1 TB.
  • Personal site storage – 1 TB per user, as soon as provisioned. This amount is counted separately, and does not add to or subtract from the overall storage allocation for a tenant. Personal site storage applies to a user’s OneDrive for Business library and personal newsfeed. For more information, see Additional information about OneDrive for Business limits later in this article.
  • Public Website storage default – 5 GB. A SharePoint admin can allocate up to 1 TB (the limit for a site collection).
  • File upload limit – 2 GB per file.
  • File attachment size limit – 250 MB.
  • Sync limits – 20,000 items in the OneDrive for Business library, including files and folders; 5,000 items in site libraries, including files and folders.
  • Number of users – 1 – 25 users.
  • Number of external user invitees – There is no limit to the number of external users you can invite to your SharePoint Online site collections. For more information, see Manage external sharing for your SharePoint Online environment.

When reviewing the information above, remember that the base storage limits for Office 365 for Small Business (10 GB + 500 MB per subscribed user) will affect some of these values. For example, although SharePoint Online for Small Business imposes a limit of 1 TB per site collection, your particular tenant might not have enough storage available to contain a site collection of 1 TB.

 

Important: It’s a good idea to monitor the Recycle Bin and empty it regularly. Content in the Recycle Bin is counted against the storage quota for a tenant. For example, if the Recycle Bin on a site contains 5 GB of content, that 5 GB is subtracted from the available storage.

 

Limits for SharePoint Online for Office 365 Midsize Business

The SharePoint Online Midsize Business plan has the software boundaries and limits described below.

  • Storage per user (contributes to total storage base of tenant) – 500 megabytes (MB) per subscribed user.
  • Storage base per tenant – 10 GB + 500 MB per subscribed user. For example, if you have 250 users, the base storage allocation is 135 GB (10 GB + 500 MB * 250 users). You can purchase additional storage up to a maximum of 20 TB, at a cost per GB per month. To buy storage, see Change storage space for your subscription. Note that you can’t buy additional storage for a trial subscription.
  • Site collection quota limit – Up to 1 TB per site collection (25 GB for a trial). SharePoint admins can set storage limits for site collections and sites. The minimum storage allocation per site collection is 100 MB.
  • Site collections (#) per tenant – 20 site collections (other than personal sites).
  • Subsites – Up to 2,000 subsites per site collection.
  • Personal site storage – 1 TB per user, as soon as provisioned. Personal site storage applies to a user’s OneDrive for Business library and personal newsfeed. This amount is counted separately, and does not add to or subtract from the overall storage allocation for a tenant. For more information about OneDrive for Business, see Additional information about OneDrive for Business limits later in this article.
  • Public Website storage default – 5 GB. A SharePoint admin can allocate up to 1 TB (the limit for a site collection).
  • File upload limit – 2 GB per file.
  • File attachment size limit – 250 MB.
  • Sync limits – 20,000 items in the OneDrive for Business library, including files and folders; 5,000 items in site libraries, including files and folders.
  • Number of users – 1 – 250 users.
  • Number of external user invitees – There is no limit to the number of external users you can invite to your SharePoint Online site collections. For more information, see Manage external sharing for your SharePoint Online environment.

When reviewing the information above, remember that the base storage limits for Office 365 for Midsize Business (10 GB + 500 MB per subscribed user) will affect some of these values. For example, although SharePoint Online for Midsize Business imposes a limit of 1 TB per site collection and a limit of 20 site collections, your particular tenant might not have enough storage available to contain 20 site collections of 1 TB each.

Important: It’s a good idea to monitor the Recycle Bin and empty it regularly. Content in the Recycle Bin is counted against the storage quota for a tenant. For example, if the Recycle Bin on a site contains 25 GB of content, that 25 GB is subtracted from the available storage.

 

 

Limits for SharePoint Online for Office 365 Enterprise, Education, and Government

One or more Office 365 subscriptions plans can be included as part of your subscription. This is true for the following plan offerings:

  • Microsoft Office 365 Enterprise subscriptions (E1 – E4)
  • Microsoft Office 365 Government subscriptions (G1 – G4)
  • Microsoft Office 365 Education subscriptions (A2 – A4)
  • Microsoft Office 365 Kiosk subscriptions (K1-K2)
  • SharePoint Online stand-alone subscription plans (Plan 1 and Plan 2).

 

These plans have common boundaries and limits, described below. Where the Office 365 Enterprise plans (including E1 – E4, A2 – A4, G1 – G4, and SharePoint Online Plan 1 and Plan 2) differ from the Office 365 Kiosk plans (Enterprise and Government K1 – K2), both values are given.

  • Storage per user (contributes to total storage base of tenant) – Enterprise plans: 500 megabytes (MB) per subscribed user. Kiosk plans: zero (0); licensed Kiosk workers do not add to the tenant storage base.
  • Additional storage (per GB per month; no minimum purchase) – To buy storage, see Change storage space for your subscription. Note that you can’t buy additional storage for a trial subscription.
  • Storage base per tenant – Enterprise plans: 10 GB + 500 MB per subscribed user + additional storage purchased. For example, if you have 10,000 users, the base storage allocation is approximately 5 TB (10 GB + 500 MB * 10,000 users). Kiosk plans: 10 GB + additional storage purchased. On both, you can purchase an unlimited amount of additional storage (limited to 25 TB on a Government Community Cloud plan).
  • Site collection storage limit – Up to 1 TB per site collection (25 GB for a trial; 100 GB on a Government Community Cloud plan). SharePoint admins can set storage limits for site collections and sites. The minimum storage allocation per site collection is 100 MB. Kiosk workers (plans K1 – K2) cannot administer SharePoint site collections; you will need a license for at least one Enterprise plan user to manage Kiosk site collections.
  • Site collections (#) per tenant – 500,000 site collections (other than personal sites).
  • Subsites – Up to 2,000 subsites per site collection.
  • Personal site storage – Enterprise plans: 1 TB per user (100 GB for government plans), as soon as provisioned. Personal site storage applies to a user’s OneDrive for Business library and personal newsfeed. This amount is counted separately, and does not add to or subtract from the overall storage allocation for a tenant. For more information about OneDrive for Business, see Additional information about OneDrive for Business limits later in this article. Kiosk plans: not available.
  • Public Website storage default – 5 GB. A SharePoint admin can allocate up to 1 TB (the limit for a site collection).
  • File upload limit – 2 GB per file.
  • File attachment size limit – 250 MB.
  • Sync limits – 20,000 items in the OneDrive for Business library, including files and folders; 5,000 items in site libraries, including files and folders.
  • Maximum number of users per tenant – 1 – 500,000+. If you have more than 500,000 users, please contact your Microsoft representative to discuss detailed requirements.
  • Number of external user invitees – There is no limit to the number of external users you can invite to your SharePoint Online site collections. For more information, see Manage external sharing for your SharePoint Online environment.

When reviewing the information above, remember that the base storage limits for Office 365 for Enterprises (10 GB + 500 MB per subscribed user) will affect some of these values. For example, although SharePoint Online for Enterprise plans imposes a limit of 1 TB per site collection and a limit of 500,000 site collections, your particular tenant might not have enough storage available to contain 500,000 site collections of 1 TB each.

Important: It’s a good idea to monitor the Recycle Bin and empty it regularly. Content in the Recycle Bin is counted against the storage quota for a tenant. For example, if the Recycle Bin on a site contains 25 GB of content, that 25 GB is subtracted from the available storage.

 

 

Limits for site elements in SharePoint Online

There are also limits for site elements of a SharePoint Online site. Here are some examples:

  • List and Library limits    Different types of columns have different limitations. For example, you can have up to 276 columns in a list for columns that contain a single line of text.
  • Page limits    You can add up to 25 Web Parts to a single wiki or web page.
  • Security limits    Different security features have different limits. For example, a single user can belong to no more than 5,000 security groups.

 

The specific limits for the previous site elements are too numerous to list here, but you can learn more about them in the TechNet article Software Boundaries and Limits for SharePoint 2013. In that linked article, only the sections on List and Library Limits, Page Limits, and Security Limits apply to SharePoint Online.

 

Additional information about OneDrive for Business limits

Each user in SharePoint Online for Office 365 gets an individual storage allocation of 1 TB for personal site content (100 GB for government plans). Personal sites include the user’s OneDrive for Business library, a Recycle Bin, and personal newsfeed information.

All SharePoint Online in Office 365 plans include the same storage allocation for individual personal sites. This storage allocation is separate from the tenant allocation.

For more information about how users can manage their individual OneDrive for Business allocation, see OneDrive for Business library limits.

 

 

Additional Resources

 

  • Office 365 connectivity limits – To learn more about Internet bandwidth, port and protocol considerations for Office 365 plans, see Office 365 Ports and Protocols.
  • SharePoint feature availability – To learn more about SharePoint feature availability and the SharePoint Online service in Office 365, see SharePoint Online Service Descriptions.
  • SharePoint Online search limits – To learn more about the search limits for SharePoint Online, see Search limits for SharePoint Online.
  • Mobile devices – To learn more about opening a SharePoint Online site from a mobile device, see Use a mobile device to work with SharePoint Online sites.
  • File types – To learn about file types that you can’t add to a list, see Types of files that cannot be added to a list or library.
  • Online URLs – To learn about SharePoint Online addresses, see SharePoint Online URLs and IP Addresses.
  • Site languages – To learn how to set the language for your sites, see Change your language and region settings.
  • Planning and deploying SharePoint Online.
  • Changing storage space – See Change storage space for your subscription. Note that you can’t buy additional storage for a trial subscription.

How To : SharePoint Cross-site Publishing and Free code for Web Part

Cross-site publishing is one of the powerful new capabilities in SharePoint 2013. It enables the separation of data entry from display and breaks down the container barriers that have traditionally existed in SharePoint (e.g. rolling up information across site collections).


Cross-site publishing is delivered through search and a number of new features, including list/library catalogs, catalog connections, and the content search web part. Unfortunately, SharePoint Online/Office 365 doesn’t currently support these features. Until they are added to the service (possibly in a quarterly update), customers will be looking for alternatives to close the gap. In this post, I will outline several alternatives for delivering cross-site and search-driven content in SharePoint Online, and how to template these views for reuse.

I’m a huge proponent of SharePoint Online.  After visiting several Microsoft data centers, I feel confident that Microsoft is better positioned to run SharePoint infrastructure than almost any organization in the world.  SharePoint Online has very close feature parity to SharePoint on-premise, with the primary gaps existing in cross-site publishing and advanced business intelligence.  Although these capabilities have acceptable alternatives in the cloud (as will be outlined in this post), organizations looking to maximize the cloud might consider SharePoint running in IaaS for immediate access to these features.

 

Apps for SharePoint

The new SharePoint app model is fully supported in SharePoint Online and can be used to deliver customizations to SharePoint using any web technology.  New SharePoint APIs can be used with the app model to deliver an experience similar to cross-site publishing.  In fact, the content search web part could be re-written for delivery through the app model as an “App Part” for SharePoint Online. 
Although the app model provides great flexibility and reuse, it does come with some drawbacks.  Because an app part is delivered through a glorified IFRAME, it would be challenging to navigate to a new page from within the app part.  A link within the app would only navigate within the IFRAME (not the parent of the IFRAME).  Secondly, there isn’t a great mechanism for templating a site to automatically leverage an app part on its page(s).  Apps do not work with site templates, so a site that contains an app cannot be saved as a template.  Apps can be “stapled” to sites, but the app installed event (which would be needed to add the app part to a page) only fires when the app is installed into the app catalog.

REST APIs and Script Editor

The script editor web part is a powerful new tool that can help deliver flexible customization into SharePoint Online.  The script editor web part allows a block of client-side script to be added to any wiki or web part page in a site.  Combined with the new SharePoint REST APIs, the script editor web part can deliver mash-ups very similar to cross-site publishing and the content search web part.  Unlike apps for SharePoint, the script editor isn’t constrained by IFRAME containers, app permissions, or templating limitations.  In fact, a well-configured script editor web part could be exported and re-imported into the web part gallery for reuse.

Cross-site publishing leverages “catalogs” for precise querying of specific content.  Any list/library can be designated as a catalog.  By making this designation, SharePoint will automatically create managed properties for columns of the list/library and ultimately generate a search result source in sites that consume the catalog.  Although SharePoint Online doesn’t support catalogs, it supports the building blocks, such as managed properties and result sources.  These can be manually configured to provide the same precise querying in SharePoint Online and exploited in the script editor web part for display.

Calling Search REST APIs

<div id="divContentContainer"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "search/query?Querytext='ContentType:News'",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>
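
The success handler above is left empty on purpose.  As a minimal sketch of what it might contain (assuming jQuery is already loaded on the page, as in all of these examples), the verbose OData response can be walked row by row; the property path below follows the SharePoint 2013 search REST response shape:

<script type="text/javascript">
    // Minimal sketch: render search results from the verbose OData response.
    // "data" is the argument passed to the success handler above.
    function renderSearchResults(data) {
        var rows = data.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results;
        var ul = $("<ul></ul>");
        for (var i = 0; i < rows.length; i++) {
            var cells = rows[i].Cells.results;
            var title = "", path = "";
            for (var j = 0; j < cells.length; j++) {
                if (cells[j].Key === "Title") { title = cells[j].Value; }
                if (cells[j].Key === "Path") { path = cells[j].Value; }
            }
            ul.append($("<li></li>").append($("<a></a>").attr("href", path).text(title)));
        }
        $("#divContentContainer").append(ul);
    }
</script>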

 

An easier approach might be to directly reference a list/library in the REST call of our client-side script.  This wouldn’t require manual search configuration and would provide real-time publishing (no waiting for new items to get indexed).  You could think of this approach as similar to a content by query web part that works across site collections (possibly even farms), and the REST API makes it all possible!

List REST APIs

<div id="divContentContainer"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 0",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>
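
One thing to note: the $filter above assumes a custom Yes/No column named “Feature” on the News list, where featured items are flagged for the carousel.  For larger lists, a hedged sketch of paging the same call with $top (following the __next link that a verbose OData response returns) might look like this:

<script type="text/javascript">
    // Sketch only: page through the News list a few items at a time.
    // Assumes the same basePath variable and jQuery as the block above.
    function getNewsPage(url) {
        $.ajax({
            url: url,
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //render data.d.results HERE
                if (data.d.__next) {
                    getNewsPage(data.d.__next); //follow the server-side paging link
                }
            }
        });
    }
    getNewsPage(basePath + "web/lists/GetByTitle('News')/items?$select=Title&$top=10");
</script>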

 

The content search web part uses display templates to render search results in different arrangements (ex: list with images, image carousel, etc).  There are two types of display templates the content search web part leverages…the control template, which renders the container around the items, and the item template, which renders each individual item in the search results.  This is very similar to the way a Repeater control works in ASP.NET.  Display templates are authored using HTML, but are converted to client-side script automatically by SharePoint for rendering.  I mention this because our approach is very similar…we will leverage a container and then loop through and render items in script.  In fact, all the examples in this post were converted from display templates in a public site I’m working on. 

Item display template for content search web part

<!--#_
var encodedId = $htmlEncode(ctx.ClientControl.get_nextUniqueId() + "_ImageTitle_");
var rem = index % 3;
var even = true;
if (rem == 1)
    even = false;

var pictureURL = $getItemValue(ctx, "Picture URL");
var pictureId = encodedId + "picture";
var pictureMarkup = Srch.ContentBySearch.getPictureMarkup(pictureURL, 140, 90, ctx.CurrentItem, "mtcImg140", line1, pictureId);
var pictureLinkId = encodedId + "pictureLink";
var pictureContainerId = encodedId + "pictureContainer";
var dataContainerId = encodedId + "dataContainer";
var dataContainerOverlayId = encodedId + "dataContainerOverlay";
var line1LinkId = encodedId + "line1Link";
var line1Id = encodedId + "line1";
 _#-->
<div style="width: 320px; float: left; display: table; margin-bottom: 10px; margin-top: 5px;">
   <a href="_#= linkURL =#_">
      <div style="float: left; width: 140px; padding-right: 10px;">
         <img src="_#= pictureURL =#_" class="mtcImg140" style="width: 140px;" />
      </div>
      <div style="float: left; width: 170px">
         <div class="mtcProfileHeader mtcProfileHeaderP">_#= line1 =#_</div>
      </div>
   </a>
</div>

 

Script equivalent

<div id="divUnfeaturedNews"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://richdizzcom.sharepoint.com/sites/dallasmtcauth/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 0",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //get the details for each item
                var listData = data.d.results;
                var itemCount = listData.length;
                var processedCount = 0;
                var ul = $("<ul style='list-style-type: none; padding-left: 0px;' class='cbs-List'>");
                for (var i = 0; i < listData.length; i++) {
                    $.ajax({
                        url: listData[i].__metadata["uri"] + "/FieldValuesAsHtml",
                        type: "GET",
                        headers: { "Accept": "application/json;odata=verbose" },
                        success: function (data) {
                            processedCount++;
                            var htmlStr = "<li style='display: inline;'><div style='width: 320px; float: left; display: table; margin-bottom: 10px; margin-top: 5px;'>";
                            htmlStr += "<a href='#'>";
                            htmlStr += "<div style='float: left; width: 140px; padding-right: 10px;'>";
                            htmlStr += setImageWidth(data.d.PublishingRollupImage, '140');
                            htmlStr += "</div>";
                            htmlStr += "<div style='float: left; width: 170px'>";
                            htmlStr += "<div class='mtcProfileHeader mtcProfileHeaderP'>" + data.d.Title + "</div>";
                            htmlStr += "</div></a></div></li>";
                            ul.append($(htmlStr));
                            if (processedCount == itemCount) {
                                $("#divUnfeaturedNews").append(ul);
                            }
                        },
                        error: function (data) {
                            alert(data.statusText);
                        }
                    });
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });

    function setImageWidth(imgString, width) {
        var img = $(imgString);
        img.css('width', width);
        return img[0].outerHTML;
    }
</script>
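
A quick note on the design: the extra call to FieldValuesAsHtml for each item is what returns publishing fields (such as the rollup image) as rendered HTML rather than raw field data, at the cost of one additional request per item.  For a short news rollup like this, that trade-off is generally acceptable; for larger lists, the paging sketch shown earlier would help keep the number of requests in check.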

 

Even one of the more complex carousel views from my site took less than 30 minutes to convert to the script editor approach.

Advanced carousel script

<div id="divFeaturedNews">
    <div class="mtc-Slideshow" id="divSlideShow" style="width: 610px;">
        <div style="width: 100%; float: left;">
            <div id="divSlideShowSection">
                <div style="width: 100%;">
                    <div class="mtc-SlideshowItems" id="divSlideShowSectionContainer" style="width: 610px; height: 275px; float: left; border-style: none; overflow: hidden; position: relative;">
                        <div id="divFeaturedNewsItemContainer">
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </div>
</div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://richdizzcom.sharepoint.com/sites/dallasmtcauth/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 1&$top=4",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                var listData = data.d.results;
                for (var i = 0; i < listData.length; i++) {
                    getItemDetails(listData, i, listData.length);
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });
    var processCount = 0;
    function getItemDetails(listData, i, count) {
        $.ajax({
            url: listData[i].__metadata["uri"] + "/FieldValuesAsHtml",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                processCount++;
                var itemHtml = "<div class='mtcItems' id='divPic_" + i + "' style='width: 610px; height: 275px; float: left; position: absolute; border-bottom: 1px dotted #ababab; z-index: 1; left: 0px;'>";
                itemHtml += "<div id='container_" + i + "' style='width: 610px; height: 275px; float: left;'>";
                itemHtml += "<a href='#' title='" + data.d.Caption_x005f_x0020_x005f_Title + "' style='width: 610px; height: 275px;'>";
                itemHtml += data.d.Feature_x005f_x0020_x005f_Image;
                itemHtml += "</a></div></div>";
                itemHtml += "<div class='titleContainerClass' id='divTitle_" + i + "' data-originalidx='" + i + "' data-currentidx='" + i + "' style='height: 25px; z-index: 2; position: absolute; background-color: rgba(255, 255, 255, 0.8); cursor: pointer; padding-right: 10px; margin: 0px; padding-left: 10px; margin-top: 4px; color: #000; font-size: 18px;' onclick='changeSlide(this);'>";
                itemHtml += data.d.Caption_x005f_x0020_x005f_Title;
                itemHtml += "<span id='currentSpan_" + i + "' style='display: none; font-size: 16px;'>" + data.d.Caption_x005f_x0020_x005f_Body + "</span></div>";
                $('#divFeaturedNewsItemContainer').append(itemHtml);

                if (processCount == count) {
                    allItemsLoaded();
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    }
    window.mtc_init = function (controlDiv) {
        var slideItems = controlDiv.children;
        for (var i = 0; i < slideItems.length; i++) {
            if (i > 0) {
                slideItems[i].style.left = '610px';
            }
        }
    };

    function allItemsLoaded() {
        var slideshows = document.querySelectorAll(".mtc-SlideshowItems");
        for (var i = 0; i < slideshows.length; i++) {
            mtc_init(slideshows[i].children[0]);
        }

        var div = $('#divTitle_0');
        cssTitle(div, true);
        var top = 160;
        for (i = 1; i < 4; i++) {
            var divx = $('#divTitle_' + i);
            cssTitle(divx, false);
            divx.css('top', top);
            top += 35;
        }
    }

 


 

    function cssTitle(div, selected) {
        if (selected) {
            div.css('height', 'auto');
            div.css('width', '300px');
            div.css('top', '10px');
            div.css('left', '0px');
            div.css('font-size', '26px');
            div.css('padding-top', '5px');
            div.css('padding-bottom', '5px');
            div.find('span').css('display', 'block');
        }
        else {
            div.css('height', '25px');
            div.css('width', 'auto');
            div.css('left', '0px');
            div.css('font-size', '18px');
            div.css('padding-top', '0px');
            div.css('padding-bottom', '0px');
            div.find('span').css('display', 'none');
        }
    }

    window.changeSlide = function (item) {
        //get all title containers
        var listItems = document.querySelectorAll('.titleContainerClass');
        var currentIndexVals = { 0: null, 1: null, 2: null, 3: null };
        var newIndexVals = { 0: null, 1: null, 2: null, 3: null };

        for (var i = 0; i < listItems.length; i++) {
            //current Index
            currentIndexVals[i] = parseInt(listItems[i].getAttribute('data-currentidx'));
        }

        var selectedIndex = 0; //selected Index will always be 0
        var leftOffset = '';
        var originalSelectedIndex = '';

        var nextSelected = '';
        var originalNextIndex = '';

        if (item == null) {
            var item0 = document.querySelector('[data-currentidx="' + currentIndexVals[0] + '"]');
            originalSelectedIndex = parseInt(item0.getAttribute('data-originalidx'));
            originalNextIndex = originalSelectedIndex + 1;
            nextSelected = currentIndexVals[0] + 1;
        }
        else {
            nextSelected = item.getAttribute('data-currentidx');
            originalNextIndex = item.getAttribute('data-originalidx');
        }

        if (nextSelected == 0) { return; }

        for (i = 0; i < listItems.length; i++) {
            if (currentIndexVals[i] == selectedIndex) {
                //this is the selected item, so move to bottom and animate
                var div = $('[data-currentidx="0"]');
                cssTitle(div, false);
                div.css('left', '-400px');
                div.css('top', '230px');

                newIndexVals[i] = 3;
                var item0 = document.querySelector('[data-currentidx="0"]');
                originalSelectedIndex = item0.getAttribute('data-originalidx');

                //animate
                div.delay(500).animate(
                    { left: '0px' }, 500, function () {
                    });
            }
            else if (currentIndexVals[i] == nextSelected) {
                //this is the NEW selected item, so resize and slide in as selected
                var div = $('[data-currentidx="' + nextSelected + '"]');
                cssTitle(div, true);
                div.css('left', '-610px');

                newIndexVals[i] = 0;

                //animate
                div.delay(500).animate(
                    { left: '0px' }, 500, function () {
                    });
            }
            else {
                //move up in queue
                var curIdx = currentIndexVals[i];
                var div = $('[data-currentidx="' + curIdx + '"]');

                var topStr = div.css('top');
                var topInt = parseInt(topStr.substring(0, topStr.length - 1));

                if (curIdx != 1 && nextSelected == 1 || curIdx > nextSelected) {
                    topInt = topInt - 35;
                    if (curIdx - 1 == 2) { newIndexVals[i] = 2 };
                    if (curIdx - 1 == 1) { newIndexVals[i] = 1 };
                }

                //move up
                div.animate(
                    { top: topInt }, 500, function () {
                    });
            }
        }

        if (originalNextIndex < 0)
            originalNextIndex = listItems.length - 1; //wrap to the last slide (the original referenced an undefined itemCount here)

        //adjust pictures
        $('#divPic_' + originalNextIndex).css('left', '610px');
        leftOffset = '-610px';

        $('#divPic_' + originalSelectedIndex).animate(
            { left: leftOffset }, 500, function () {
            });

        $('#divPic_' + originalNextIndex).animate(
            { left: '0px' }, 500, function () {
            });

        var item0 = document.querySelector('[data-currentidx="' + currentIndexVals[0] + '"]');
        var item1 = document.querySelector('[data-currentidx="' + currentIndexVals[1] + '"]');
        var item2 = document.querySelector('[data-currentidx="' + currentIndexVals[2] + '"]');
        var item3 = document.querySelector('[data-currentidx="' + currentIndexVals[3] + '"]');
        if (newIndexVals[0] != null) { item0.setAttribute('data-currentidx', newIndexVals[0]) };
        if (newIndexVals[1] != null) { item1.setAttribute('data-currentidx', newIndexVals[1]) };
        if (newIndexVals[2] != null) { item2.setAttribute('data-currentidx', newIndexVals[2]) };
        if (newIndexVals[3] != null) { item3.setAttribute('data-currentidx', newIndexVals[3]) };
    };
</script>

 

End-result of script editors in SharePoint Online

Separate authoring site collection

Final Thoughts

Several new SharePoint Document Management, Upload Web Parts and Tools are available for Sale on my Web Site – https://sharepointsamurai.wordpress.com

Please contact me via my web site and blog or at tomas.floyd@outlook.com for pricing and trials

 

 

Bulk File SharePoint Upload Web Part

  • Run it anywhere. Every action is done using the Client Object Model, so there is no need to install anything on the server.
  • Support for Mixed authentication, both Windows and Forms
  • Import folders and files with all subfolders
  • Retain creation and modified date fields after moving to SharePoint
  • Retain author/editor fields from office documents after moving to SharePoint
  • Incompatible file names are renamed (filename too long, illegal characters)
  • Unsupported files (large filesize, blocked file extension, …) are skipped
  • Files already existing on SharePoint are skipped (no overwrite)
  • Successfully migrated files and folders can be moved to an “archive” folder
  • Detailed log (Log4Net) with info on migration issues
  • Option to skip creation of empty folders in SharePoint
  • Option to merge subfolders to a flat list in SharePoint.
  • Easy to run in batch when migrating several locations

logging2

 

Drag n Drop Upload Web Part

Enables Firefox & Chrome users to drag & drop files directly into their SharePoint document libraries. This makes single and multiple file upload much simpler and more user-friendly. SharePoint 2010 and SharePoint Online are supported.

Document set content web part is supported



A Look at : ‘Kaizen’ and its Philosophy in Kanban


Kaizen, or rapid improvement processes, often is considered to be the “building block” of all lean production methods. Kaizen focuses on eliminating waste, improving productivity, and achieving sustained continual improvement in targeted activities and processes of an organization.

Lean production is founded on the idea of kaizen – or continual improvement. This philosophy implies that small, incremental changes routinely applied and sustained over a long period result in significant improvements. The kaizen strategy aims to involve workers from multiple functions and levels in the organization in working together to address a problem or improve a process.

The team uses analytical techniques, such as value stream mapping and “the 5 whys”, to quickly identify opportunities to eliminate waste in a targeted process or production area. The team works to implement chosen improvements rapidly (often within 72 hours of initiating the kaizen event), typically focusing on solutions that do not involve large capital outlays.

Periodic follow-up events aim to ensure that the improvements from the kaizen “blitz” are sustained over time. Kaizen can be used as an analytical method for implementing several other lean methods, including conversions to cellular manufacturing and just-in-time production systems.


Method and Implementation Approach

Rapid continual improvement processes typically require an organization to foster a culture where employees are empowered to identify and solve problems. Most organizations implementing kaizen-type improvement processes have established methods and ground rules that are well communicated in the organization and reinforced through training. The basic steps for implementing a kaizen “event” are outlined below, although organizations typically adapt and sequence these activities to work effectively in their unique circumstances.

Phase 1: Planning and Preparation. The first challenge is to identify an appropriate target area for a rapid improvement event. Such areas might include: areas with substantial work-in-progress; an administrative process or production area where significant bottlenecks or delays occur; areas where everything is a “mess” and/or quality or performance does not meet customer expectations; and/or areas that have significant market or financial impact (i.e., the most “value added” activities).

Once a suitable production process, administrative process, or area in a factory is selected, a more specific “waste elimination” problem within that area is chosen for the focus of the kaizen event (i.e., the specific problem that needs improvement, such as lead time reduction, quality improvement, or production yield improvement). Once the problem area is chosen, managers typically assemble a cross-functional team of employees.

It is important for teams to involve workers from the targeted administrative or production process area, although individuals with “fresh perspectives” may sometimes supplement the team. Team members should all be familiar with the organization’s rapid improvement process or receive training on it prior to the “event”. Kaizen events are generally organized to last between one day and seven days, depending on the scale of the targeted process and problem. Team members are expected to shed most of their operational responsibilities during this period, so that they can focus on the kaizen event.

Phase 2: Implementation. The team first works to develop a clear understanding of the “current state” of the targeted process so that all team members have a similar understanding of the problem they are working to solve. Two techniques are commonly used to define the current state and identify manufacturing wastes:

  • Five Whys. Toyota developed the practice of asking “why” five times and answering it each time to uncover the root cause of a problem. An example (“Repeating ‘Why’ Five Times”) is shown below:
    1. Why did the machine stop?
      There was an overload, and the fuse blew.
    2. Why was there an overload?
      The bearing was not sufficiently lubricated.
    3. Why was it not lubricated sufficiently?
      The lubrication pump was not pumping sufficiently.
    4. Why was it not pumping sufficiently?
      The shaft of the pump was worn and rattling.
    5. Why was the shaft worn out?
      There was no strainer attached, and metal scrap got in.
  • Value Stream Mapping. This technique involves flowcharting the steps, activities, material flows, communications, and other process elements that are involved with a process or transformation (e.g., transformation of raw materials into a finished product, completion of an administrative process). Value stream mapping helps an organization identify the non-value-adding elements in a targeted process. This technique is similar to process mapping, which is frequently used to support pollution prevention planning in organizations. In some cases, value stream mapping can be used in phase 1 to identify areas for which to target kaizen events.

During the kaizen event, it is typically necessary to collect information on the targeted process, such as measurements of overall product quality; scrap rate and source of scrap; a routing of products; total product distance traveled; total square feet occupied by necessary equipment; number and frequency of changeovers; source of bottlenecks; amount of work-in-progress; and amount of staffing for specific tasks. Team members are assigned specific roles for research and analysis. As more information is gathered, team members add detail to value stream maps of the process and conduct time studies of relevant operations (e.g., takt time, lead-time).

Once data is gathered, it is analyzed and assessed to find areas for improvement. Team members identify and record all observed waste, by asking what the goal of the process is and whether each step or element adds value towards meeting this goal. Once waste, or non-value added activity, is identified and measured, team members then brainstorm improvement options. Ideas are often tested on the shopfloor or in process “mock-ups”. Ideas deemed most promising are selected and implemented. To fully realize the benefits of the kaizen event, team members should observe and record new cycle times, and calculate overall savings from eliminated waste, operator motion, part conveyance, square footage utilized, and throughput time.

Phase 3: Follow-up. A key part of a kaizen event is the follow-up activity that aims to ensure that improvements are sustained, and not just temporary. Following the kaizen event, team members routinely track key performance measures (i.e., metrics) to document the improvement gains. Metrics often include lead and cycle times, process defect rates, movement required, and square footage utilized, although the metrics vary when the targeted process is an administrative process. Follow-up events are sometimes scheduled at 30 and 90 days following the initial kaizen event to assess performance and identify follow-up modifications that may be necessary to sustain the improvements. As part of this follow-up, personnel involved in the targeted process are tapped for feedback and suggestions. As discussed under the 5S method, visual feedback on process performance is often logged on scoreboards that are visible to all employees.

Top of page

Implications for Environmental Performance

Potential Benefits:

  • At its core, kaizen represents a process of continuous improvement that creates a sustained focus on eliminating all forms of waste from a targeted process. The resulting continual improvement culture and process is typically very similar to those sought under environmental management systems (EMS), ISO 14001, and pollution prevention programs. An advantage of kaizen is that it involves workers from multiple functions who may have a role in a given process, and strongly encourages them to participate in waste reduction activities. Workers close to a particular process often have suggestions and insights that can be tapped about ways to improve the process and reduce waste. Organizations have found, however, that it is often difficult to sustain employee involvement and commitment to continual improvement activities (e.g., P2, environmental management) that are not necessarily perceived to be directly related to core operations. In some cases, kaizen may provide a vehicle for engaging broad-based organizational participation in continual improvement activities that target, in part, physical wastes and environmental impacts.
  • Kaizen can be a powerful tool for uncovering hidden wastes or waste-generating activities and eliminating them.
  • Kaizen focuses on waste elimination activities that optimize existing processes and that can be accomplished quickly without significant capital investment. This creates a higher likelihood of quick, sustained results.

Potential Shortcomings:

  • Failure to involve environmental personnel in a quick kaizen event can potentially result in changes that do not satisfy applicable environmental regulatory requirements (e.g., waste handling requirements, permitting requirements). Care should be taken to consult with environmental staff regarding changes made to environmentally sensitive processes.
  • Failure to incorporate environmental considerations into kaizen can potentially result in solutions that do not consider inherent environmental risk associated with new processes. For example, an organization might select a change in process chemistry that addresses one improvement need (e.g., product quality, process cycle time) but that might be sub-optimal if the organization considered the material hazards or toxicity and the associated chemical and hazardous waste management obligations.
  • By not explicitly incorporating environmental considerations into kaizen, valuable pollution prevention and sustainability opportunities may be disregarded. For example, an evident opportunity to conserve water resources may not be explored if water use is not expensive and therefore not considered a wasteful expense that needs to be addressed. Similarly, including environmental considerations in the kaizen event goals can lead to solutions that rely less on hazardous materials or that create less hazardous wastes.

Useful Resources

Productivity Press Development Team. Kaizen for the Shopfloor (Portland, Oregon: Productivity Press, 2002).

Soltero, Conrad and Gregory Waldrip. “Using Kaizen to Reduce Waste and Prevent Pollution.” Environmental Quality Management (Spring 2002), 23-37.


Why most recruiters suck and what you can do about it

While you might think that recruiters hold the keys to your next job, in reality they cannot function without great candidates. You are seeking a partnership, not a one-nighter where all is forgotten the next day with the exception of the very bad taste in your mouth. Great recruiters want the very same thing – a very long relationship where one hand washes the other. But to make it happen you have to be knowledgeable and active; you have to control the interview and not become a pawn used by a crappy recruiter with an equally crappy conscience.

So stop whining about crappy recruiters and do something about them!

A call to Arms to ALL South African Developers – Boycott CRAPPY recruiters!!

The Recruiting Inferno

Jason Buss posted a barn burner over on TalentHQ, “Are All Heads of HR and HR Departments Filled With Idiots?” – his facts are spot on and they should make everyone a wee bit angry. I’m a performance person when it comes to recruiting – I’m going to ask you about solving problems and I couldn’t care less if you’re pregnant, have a vacation scheduled, or are skilled at brown nosing. If you can solve the problems I’m hiring you to solve, you’ll be moved on; if you can’t, you won’t.

What follows is a post I wrote anonymously for “someone else” – somewhat tongue-in-cheek but with a great deal of fact behind it (Jason – you know how the spies are going to drool over this post!) – and with some advice on how job seekers can counteract the actions of some recruiters and level the playing field. Let…


How To : Use Containers effectively for your ALM , CI and DevOps methodologies

In his seminal article “Continuous Integration,” Martin Fowler defines a set of practices to improve the quality and increase the speed of the software development process. These practices include having a fast and fully automated build all the way from development to production and conformity between testing and production environments.

Since Fowler’s article was published, continuous integration has become one of the key practices of modern agile development, and many of us are in a constant battle to speed up the build process and test automation stages. The growing complexity of software and our aspiration to deliver it to the customer in a matter of days or even hours doesn’t make this battle any easier.

The recent rise of containers as a tool to ease the journey from development to production may help us address these challenges.

Containers (OS Level Virtualization)
Containers allow us to create multiple isolated and secure environments within a single instance of an operating system. As opposed to virtual machines (VMs), containers do not launch a separate OS but instead share the host kernel while maintaining the isolation of resources and processes where required.

This architectural difference leads to a drastic reduction in the overhead related to starting and running instances. As a result, the startup time can commonly be reduced from thirty-plus seconds to 0.1 seconds. The number of containers running on a typical server can easily reach dozens or even hundreds, while the same server would struggle to support ten to fifteen VMs.

The following article written by David Strauss provides an excellent explanation about containers: “Containers—Not Virtual Machines—Are the Future Cloud.”

Deployment Pipelines
I started building deployment pipelines about a decade ago when I moved from a software development role into configuration management. 

At my first job I reduced the build time from a week to around two to three hours, including the creation of a VM image and the deployment of a full system on a hypervisor. Over 50 percent of the build time was wasted on the creation and deployment of the VM while the build and the automated system testing required less than an hour.

Throughout the years virtualization technology took a few leaps forward, and now we can deploy a fully functional multi-VM system on a private or a public cloud in just a few minutes. Although this represents a huge amount of progress, it still creates enormous difficulties when trying to create a real continuous deployment pipeline. Having a fast build means keeping the continuous integration build under ten minutes, as suggested by Martin Fowler. Achieving this speed normally means confining the testing to running a suite of unit tests; there just isn’t the time to build a full image, copy the image over the network, deploy the VMs and run a set of system tests.

But what if we could build a new image in just a few seconds, copy to the cloud only the changed pieces of the software and boot up a fully functional system in under one second?

As unimaginable as it sounds, it is already possible using containers. In addition to being able to deploy and test a full system at the continuous integration stage, containers can also help to reduce the complexity related to supporting variations in operating systems. The same container image created during a normal build can be reused in production environments to remove any potential differences between the operating systems on the developer’s laptops and those on the production servers.

The same functionality is theoretically possible using standard VMs, but in practice it never really works. The overhead of maintaining huge VM images, distributing them and re-deploying them multiple times per day has prevented this way of working from becoming popular.

 

Test Automation
The more efficient usage of resources and almost instant startup times provided by containers is hugely important for super-fast continuous integration builds, but there are even bigger benefits for large test execution scenarios.

Today, the redeployment of a full operating system for each test automation suite is out of the question; therefore the same system is used to execute hundreds or even thousands of test cases in quick succession. This requires very sophisticated teardown procedures between the execution of the test suites, which is often a source of errors. Such errors may cause a butterfly effect, leading to an unexpected failure in an unrelated part of the test execution. In addition, teardown procedures are expensive to write and maintain and may take significant time to execute.

With containers, we can eliminate teardown procedures entirely. Every single test automation suite can run in a separate disposable container, which can simply be thrown away after execution. A notable side effect of this is that we can now run all the tests in fully isolated containers in parallel and on any hosting environment that supports containers. This can potentially reduce test automation time from hours to minutes or even seconds depending on the resources at your disposal.
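To make this concrete, here is a purely illustrative sketch of the idea; the article does not prescribe a container runtime, so Docker, the image name and the test script here are my assumptions:

# Build an image that contains the system under test and its test suites.
docker build -t app-tests .

# Run each suite in its own throwaway container; --rm discards the container
# and every bit of state it accumulated on exit, so no teardown is needed.
docker run --rm app-tests ./run-suite.sh suite-a &
docker run --rm app-tests ./run-suite.sh suite-b &
wait    # the suites run isolated from each other, and in parallel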

DevOps and Continuous Delivery
The DevOps movement is increasingly popular and aims to bridge the communication gap between development and operations. I see the creation of a common technical language as one of the biggest challenges for DevOps, since these two departments have typically been trying to solve different problems. Although containers will not resolve this challenge on their own, they may significantly simplify the situation for both parties and create an environment which is easily accessible to both programmers and IT operators.

The standardization of tooling in both departments will allow software to move faster through the pipeline and reduce the complexity required at each stage. By doing that we can take another step towards faster feedback loops with continuous delivery.

In Conclusion
Modern software development relies on the extensive reuse and integration of existing software components. This makes setting up development and production environments especially challenging.

Containers have been here for a while, but only recently have they become reliable and usable enough for daily operations. They will very soon allow us to remove the burden of running thousands of unneeded operating system instances and focus on the real services we provide to our customers.

Containers are making continuous deployment look like an achievable goal.

 

How To : Use Git Tools for TFS Integration

Git – TFS Integration – Why it matters


For many small development shops, the idea of using TFS and a centralized source control repository is anathema. The mere thought of being restricted by a software configuration manager on how and when to branch or merge cuts against everything they cherish in software development.

 

Git is their natural and chosen ground for managing source code. The freedom and flexibility of using Git enables them to work where they are. This is especially true if they are working as part of a distributed team on modular projects.

Microsoft addressed many of the existing concerns with TFS source control with the advent of TFS 2012 and local workspaces. However, even though local workspaces enable great flexibility in offline work, they are still ultimately tied to a central repository and the policies and restrictions imposed on it.

 

Enter Git support in TFS. Git support currently comes in two forms: standalone Git support in Visual Studio, and Git support with TFS.

Git support with Visual Studio is completely straightforward. Simply change the source control plug-in selection to the Microsoft Git Provider, and all the power and flexibility of Git is available to the Visual Studio developer, such as private branches and online collaboration with Git hosts such as GitHub and BitBucket.

Source

Configuring Git for Visual Studio Source Control

However, from an ALM perspective, the real power and the compelling feature of Microsoft’s integration with Git is the ability to work with TFS.

 

Developers still get all the advantages and flexibility of Git, but can also take advantage of the ALM features of TFS such as work item tracking, team tools and integrated build. The Git – TFS integration gets us much closer to the ultimate goal of true cross-platform support in a single ALM toolset.

 

The TFS – Git integration can be utilized in a couple of ways. The first option is the ability to essentially synchronize a Git repository with TFS source control using the Git-TF utility. This utility makes it easy to clone sources from TFS, fetch updates from TFS and push changes back to TFS.

 

What’s more, it fully supports TFS shelvesets and work item integration, which presents some exciting possibilities. The features and functionality Git-TF provides makes it a compelling solution and a credible compromise between centrally managed teams with source control and distributed teams with distributed source control.

The second option, available now only through Microsoft’s hosted TFS Service, is the ability for organizations to create TFS Team Projects with Git-hosted source control (this ability is reportedly planned for on-premises TFS in the next release). This is a fairly exciting development.

 

Having the choice between native TFS version control and Git when creating a team project opens many doors that hitherto were locked shut.


XCode IDE connected to a TFS hosted repository

Eclipse, XCode, Visual Studio and any other IDE that supports Git can now be used to leverage the powerful ALM features TFS provides.

As an ALM consultant, that’s the part that excites me the most. Hosting all development efforts in a single environment that supports all the various technologies in play, and being able to track and manage those efforts with agility and transparency, is a huge benefit to any organization that provides multiple-platform solutions.

 

Even those who don’t will now have the option to at least evaluate the feasibility of utilizing TFS in development environments not typically associated with a Microsoft project.

The mythical promised land of cross-platform ALM may have just become quite less mythical.

 

Microsoft’s Tool for Git and TFS Integration: Git-TF

 http://gittf.codeplex.com


Working with Teams

The Git-TF tool is most easily used by a single developer or multiple developers working independently with their own isolated Git repos. That is, each developer uses Git-TF to clone a local repo where they can then use Git to manage their local development that will eventually be checked in to TFS. In this “hub and spoke” configuration, all code is shared through TFS at the “hub” and each developer using Git becomes a “spoke”. Developers looking to collaborate using Git’s distributed sharing capabilities will want to work in a specific configuration described below.

Most often, developers collaborating with Git have cloned from a common repo. When it comes time to share divergent changes, conflict resolution is easy because each repository shares the same common base version. Many times, conflicts are automatically resolved. One of the keys to this merging of histories is that each commit is assigned a unique identifier that is generated by the contents of the commit. When working with Git-TF, two repositories cloned from the same TFS path will not have the same commit IDs unless the clones were done at the same point in TFS history, and with the same depth. In the event that two Git repos that were independently cloned using Git-TF share changes directly, the result will be a baseless merge of the repositories and a large number of conflicts. For this reason, it is not recommended that teams using Git-TF ever share changes directly through Git (i.e. using git push and git pull).

Instead, it is recommended that a team working with Git-TF and collaborating with Git do so by designating a single repo as the point of contact with TFS. This configuration may look as follows for a team of three developers:

          [TFS]      [Shared Git repo]
            |         ^ (2)  |       \
            |        /       |        \
            |       /        |         \
            V (1)  /         V (3)      V (4)
       [Alice's Repo]   [Bob's Repo]   [Charlie's Repo]
 

In the configuration above the actions would be as follows:

  1. Using the git tf clone command, Alice clones a path from TFS into a local Git repo.
  2. Next, Alice uses git push to push the commit created in her local Git repo into the team’s shared Git repo.
  3. Bob can then use git clone to clone down the changes that Alice pushed.
  4. Charlie can also use git clone to clone down the changes that Alice pushed.

Both Bob and Charlie only ever interact with the team’s shared Git repo using git push and git pull. They can also interact directly with one another’s repos (or with Alice’s), but should never use Git-TF commands to interact with TFVC.

When working with the team, Alice will typically develop locally and use git push and git pull to share changes with the team. When the team decides they have changes to share with TFS, Alice will use git tf checkin to share those changes (typically git tf checkin --shallow will be used). Likewise, if there are changes that the team needs from TFVC, Alice will perform a git tf pull, using the --merge or --rebase options as appropriate, and then use git push to share the changes with the team.
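To make that routine concrete, here is a sketch of Alice’s round trip; the server URL and server path are illustrative, while the commands are the ones described above:

# One-time: clone a TFVC path into a local Git repo
git tf clone http://tfsserver:8080/tfs/DefaultCollection $/TeamProject/Main

# Day to day: collaborate through the team's shared Git repo
git push origin master
git pull origin master

# When the team syncs with TFS
git tf pull                  # brings TFVC changes down and merges them by default
git push origin master       # share the merged result with the team
git tf checkin --shallow     # check the team's work in to TFVC as a single changeset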

Note that (until Issue 77 is addressed) all changes coming into the TFVC repository will come in as if from Alice’s TFS identity. This is fine if only Alice has an identity on that TFVC project but it may well not be what you want if Bob and Charlie also had valid identities in that TFS project.

Rebase vs. Merge

Once changes have been fetched from TFS using git tf pull (or git tf fetch), those changes must either be merged with the HEAD or have any changes since the last fetch rebased on top of FETCH_HEAD. Git-TF allows developers to work in either manner, though if the repo that is sharing changes with TFS has shared any commits with other Git users, then this rebase may result in significant conflicts (see The Perils of Rebasing). For this reason, it is recommended that any team working in the aforementioned team configuration use git tf pull with the default --merge option (or use git merge FETCH_HEAD to incorporate changes made in TFS after fetching manually).

Recommended Git Settings

When using the Git-TF tools, there are a few recommended settings that should make it easier to work with other developers that are using TFS.

Line Endings

core.autocrlf = false

Git has a feature to allow line endings to be normalized for a repository, and it provides options for how those line endings should be set when files are checked out. TFS does not have any feature to normalize line endings – it stores exactly what is checked in by the user. When using Git-TF, choosing to normalize line endings to Unix-style line endings (LF) will likely result in TFS users (especially those using VS) changing the line endings back to Windows-style line endings (CRLF). As a result, it is recommended to set the core.autocrlf option to false, which will keep line endings unchanged in the Git repo.

Ignore case

core.ignorecase = true

TFS does not allow multiple files that differ only in case to exist in the same folder at the same time. Git users working on non-Windows machines could commit files to their repo that differ only in case, and attempting to check in those changes to TFS will result in an error. To avoid these types of errors, the core.ignorecase option should be set to true.
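Applied from the command line inside a Git-TF clone, the two recommended settings look like this:

git config core.autocrlf false    # keep line endings exactly as they were committed
git config core.ignorecase true   # avoid case-only file name clashes with TFVC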

How To : Create, Edit and Maintaining a Coded UI Test for Silverlight Application

Using the Microsoft Visual Studio 2013 Coded UI Test plugin for Silverlight, you can create Coded UI Tests or action recordings for Silverlight 5.0 applications.

Using Microsoft Visual Studio 2010 Feature Pack 2, you can create coded UI tests or action recordings for Silverlight 4 applications. Action recordings let you fast-forward through steps in a manual test. For more information about action recordings or coded UI tests, see How to: Create an Action Recording or How to: Create a Coded UI Test.

In this walkthrough, you will learn the procedures that are required to test a Silverlight control in a Silverlight-based application. The walkthrough takes you through the following procedures:

Prerequisites

 

For this walkthrough you will need:

To prepare the walkthrough

  1. Verify that you have the Silverlight 4 developer runtime available at Silverlight Developer 4 for Developers.

  2. Verify that you have completed the procedures in Walkthrough: Creating a RIA Services Solution.

    The result will be a simple Silverlight application that uses a Silverlight grid control. Later, you will use the grid control in this walkthrough and perform coded UI tests on it.

  3. For more information about supported and unsupported Silverlight controls, see How to: Set Up Your Silverlight Application for Testing.

  4. With the RIAServicesExample you created in Walkthrough: Creating a RIA Services Solution running, copy the address of the Web application to the clipboard or a notepad file. For example, the address might resemble this: http://localhost:<port number>/RIAServicesExampleTestPage.aspx.

Add the SilverlightUIAutomationHelper.dll to Your Silverlight 4 Project

 

To test your Silverlight applications, you must add Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper.dll as a reference to your Silverlight 4 application so that the Silverlight controls can be identified. This helper assembly instruments your Silverlight application so that information about a control is available to the Silverlight plugin API used by your coded UI test or action recording. This assembly cannot be redistributed; therefore, you must add this reference conditionally when you build the application. By taking this approach, the assembly is not redistributed when you deploy your software to a customer.

To add the SilverlightUIAutomationHelper.dll

  1. For each Silverlight project in your solution that you want to test, you must add the SilverlightUIAutomationHelper.dll. In Solution Explorer, right-click the RIAServicesExample project and select Unload Project.

    The project is displayed in Solution Explorer as RIAServicesExample (unavailable).

  2. Right-click the project again and then click Edit RIAServicesExample.csproj.

    The RIAServicesExample.csproj file is opened in the Code Editor. You will see <PropertyGroup> nodes followed by <ItemGroup> nodes. You must make the following two modifications:

    1. To set the production condition, add the following entry to the first <PropertyGroup> node:

       
      <Production Condition="'$(Production)'==''">False</Production>
      
    2. To add the DLL when the build is not a production build, insert the following <Choose> node after the <PropertyGroup> nodes, but before the <ItemGroup> nodes:

       
      <Choose>
         <When Condition=" '$(Production)'=='False' ">
               <ItemGroup>
                 <Reference Include="Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper">
                 </Reference>
               </ItemGroup>
             </When>
       </Choose>
      
  3. To save the file, click Save.

  4. To reload these changes, right-click the project again and then click Reload Project.

    Caution

    If you have multiple Silverlight projects that you want to test, you must follow these steps for each project.

    Important

    To remove the SilverlightUIAutomationHelper.dll so that it is not redistributed with your production code, set the Production condition value to True in the first <PropertyGroup> node. In this manner, the DLL is no longer added as a reference by the <Choose> node that you added to the project in the previous procedure. You can also set an environment variable named Production to the value True and then use msbuild to build the Silverlight project without the SilverlightUIAutomationHelper.dll, as shown below.
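For example, a production build that leaves the helper assembly out can be produced from the command line like this, using the project file from this walkthrough:

msbuild RIAServicesExample.csproj /p:Production=True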

Create a Coded UI Test for RIAServicesExample Silverlight Application

 

To Create a Coded UI Test

  1. In Solution Explorer, right-click the solution, click Add and then select New Project.

    The Add New Project dialog box appears.

  2. In the Installed Templates pane, expand either Visual C# or Visual Basic, and then select Test.

  3. In the middle pane, select the Test Project template.

  4. Click OK.

    In Solution Explorer, the new test project named TestProject1 is added to your solution. Either the UnitTest1.cs or UnitTest1.vb file appears in the Code Editor. You can close the UnitTest1 file because it is not used in this walkthrough.

  5. In Solution Explorer, right-click TestProject1, click Add and then select Coded UI test.

    The Generate Code for Coded UI Test dialog box appears.

  6. Select the Record actions, edit UI map or add assertions option and then click OK.

    The UIMap – Coded UI Test Builder appears.

    For more information about the options in the dialog box, see How to: Create a Coded UI Test.

  7. Click Start Recording on the UIMap – Coded UI Test Builder. In several seconds, the Coded UI Test Builder will be ready.

    Start recording UI

  8. Launch Internet Explorer.

  9. In Internet Explorer’s address bar, enter the address of the Web application that you copied in a previous procedure. For example:

    http://localhost:<port number>/RIAServicesExampleTestPage.aspx

  10. Click one or two of the column headers to sort the data.

  11. Close Internet Explorer.

  12. On the UIMap – Coded UI Test Builder, click Generate Code.

  13. In the Method Name box, type SimpleSilverlightAppTest and then click Add and Generate. In several seconds, the coded UI test appears and is added to the solution.

  14. Close the UIMap – Coded UI Test Builder.

    The CodedUITest1.cs file appears in the Code Editor.

    Note

    You can assign a unique automation property based on the type of Silverlight control in your application. For more information, see Set a Unique Automation Property for Silverlight Controls for Testing.

Run the Coded UI Test on the RIAServicesExample Silverlight Application

 

To run the coded UI test

  • On the Test menu, select Windows and then click Test View. In Test View, select CodedUITestMethod1 under the Test Name column and then click Run Selection in the toolbar.

    The coded UI test should successfully run using the Silverlight data grid control.

How to connect a SharePoint 2013 Document Library to Outlook 2013

 

One of the key methods of gaining user adoption of SharePoint is ensuring and promoting the integration it has with Microsoft Office to information workers. After all, information workers generally use Outlook as their ‘mother-ship’. Getting those users to switch immediately to SharePoint, or asking them to visit a document library in a SharePoint site they need to access, can take time, especially since it means opening a browser and navigating to the site, covering their beloved Outlook client in the process.

 

The following describes how to connect a typical SharePoint 2013 document library to Outlook 2013 client.

  1. Access your SharePoint site; go into the relevant Documents library. In the below example, I clicked on the default Team Site Documents repository link in the Quick Launch bar, which has around 140 documents.

 

  2. With the document library displayed, go to the Library tab on the ribbon bar; the option we are looking for is within the Library options available there.

  3. When the Library ribbon is displayed, click the Connect To Outlook button in the Connect & Export section. Note: if Connect To Outlook is greyed out, ensure that Outlook 2013 is fully operational. I’ve come across examples where Outlook is installed but no email account has been enabled; if that’s the case, this button will be greyed out.

  4. Once the Connect To Outlook button is clicked, you may receive a warning message asking you to confirm that you want to allow SharePoint to connect with Outlook – click ALLOW.

  5. Outlook 2013 will be displayed. A dialog will then be displayed asking you to confirm that you wish to connect the document library to Outlook. The dialog shows the site name and document library title, along with the URL of the document library being connected. Below, and to the right, is an ADVANCED button that shows more information about the connection. The following screenshot shows the information displayed if the Advanced button is clicked. There is not much you can do on that screen; for now, click YES to confirm the connection.

Here’s an example of the associated Advanced dialog. The most interesting aspect is the Permissions line. For a document library connection to Outlook, this will be set to READ. This is by design, and for good reason: things like classified metadata are not writeable from Outlook, and the same goes for other document library settings like CheckIn/Out. However, this does not prevent you from modifying a file in the resulting list. If the document needs to be updated, simply double-click on the document to open it in the local application, click the edit offline option, make your changes, click save, click close, and a prompt should appear allowing you to update the copy on the server.

Once completed, the documents will be listed in Outlook. The following screenshot shows the result of a Shared Documents library from a SharePoint 2013 site connected to Outlook 2013. Note the following features, which in my view are awesome for user adoption, particularly for those whose centre of the universe happens to be the Outlook client:

  • That users are able to switch from connected library to connected library using the navigation options, each connected library shows the number of unread items (un-previewed or un-opened documents).
  • That each document (if the previewer is available) when clicked on will display a preview of the document; meaning that you can read a Word Document, for example, without having to open it in the client application.
  • That information concerning the state of the document is displayed, showing the last modifier, whether the document is checked out, when it was last modified and the document size.

 

Note. There is a problem I have noticed in the preview section when highlighting any file whilst working with SharePoint 2013, Office 2013 on a sandbox; the message:

‘This file cannot be previewed because of an error with the following previewer: Microsoft xxxxx previewer – To open this file in its own program, double-click it;’.

There is an article that seems to describe the issue (but does not directly mention when it is likely to occur), and it is known to Microsoft. A description of the alternatives, whilst a fix is being worked on, is provided here: http://support.microsoft.com/kb/983097. I will investigate further and update this article.

A Look at : eDiscovery in SharePoint On-Premise and Online

One of the great features that came in SharePoint 2010 is the tool for eDiscovery.  

What is eDiscovery?  

The process of collecting and analyzing content related to some litigation or an official request.  It’s a big deal and this post discusses preparations you need to make in SharePoint to be ready.


eDiscovery 2013

If you’re an IT manager or a developer and someone comes to you and says, “We need eDiscovery in SharePoint,” run for the hills!  

This is something that cannot be done alone.  eDiscovery requires specific input from lawyers and records managers.

 

Who should care about eDiscovery?

Every company needs to have some conversation about eDiscovery because, let’s face it, litigation happens.  You have no choice on the matter if you are a public company or in a regulated industry.

 

Even if you are not public or regulated, you should consider eDiscovery for the very simple reason that IF litigation happens, it will cost you exponentially more to deal with a lawsuit than to implement an eDiscovery policy. Luckily, the discovery process is easy with SharePoint 2010.

 

eDiscovery in SharePoint is basically Search+: search plus records management and hold tools.  A “hold” is what happens when you run eDiscovery on a particular topic.  When you “hold” content associated with that topic, you are locking down all related documents.  “Hold in-place” will hold a document in the library where it was found, so users will see it but will be unable to edit it.  “Hold and move” (recommended) is when a document is moved from its original location to a holding location.

 

Remember!  SharePoint is a platform, not a magical problem solver.  You MUST first have an eDiscovery policy.  For example, your policy can state that the eDiscovery process is to first issue a memo, then do the hold, and then notify when the hold has expired… or something along those lines.

 

You MUST consult with your lawyer on how to manage a “hold”.  For some companies it is deemed unacceptable to have a document permanently locked.  The policy must also include WHO can run eDiscovery.  In most companies, a records manager or a lawyer is the only person with the rights to run eDiscovery.

 

But there is a caveat: eDiscovery requires site collection administrator access at minimum, and this can’t be given to just anyone. There is a workaround, but it’s a bit of a trick.  There is a hidden list that manages who has access to eDiscovery features, and you can add individuals, such as a records manager, to that list without giving them administrator access.

 

Despite the ambiguity and these loopholes, eDiscovery is useful. In the face of litigation, lawyers are now using the eDiscovery strategy to validate the system of record, in this case SharePoint, by proving the system, processes, policy, and content that come from that system.

 

What does all of this mean?  SharePoint has great features for eDiscovery, but it’s just a tool to help with one part of the process.  You must plan.  As with all things SharePoint, projects do not fail because of technology but instead they fail because of people and lack of planning.  

 

This is the same with eDiscovery.  A plan is the most important step.  An eDiscovery policy should be a part of the information architecture and security plan in all SharePoint deployment companies, small or large.

 

In the next post, we will be looking at how to setup eDiscovery in SharePoint Online

 


My CV is downloadable from this blog – I am currently in the job market.

Have a great week!!

 

SharePoint Samurai

tomas.floyd@outlook.com

A Look At : SharePoint 2013 Site Templates

SharePoint 2013 offers a vast variety of out-of-the-box site templates. One of the success factors of your SharePoint deployment is choosing the most suitable site template that meets your business needs.

I’ve been asked many times which site template can serve particular required needs and what differs one template from another, so I decided to write a quick overview of all the available SharePoint 2013 site templates and their common uses.

Collaboration Site Templates

  • Team Site – The most common SharePoint site template, mainly used by teams to collaborate, organize, create, and share information and documents.

  • Blog – a site on which a user or group of users write opinions and share information.

  • Developer Site – this site template is focused on Apps for Office development. Developers can build, test and publish their apps here.

  • Project Site – this site template is used for managing and collaborating on a project. Project site coordinates project status and all additional information relevant to the project.

  • Community Site – a site where the community members can explore, discover content and discuss common topics.

 

Enterprise Site Templates

  • Document Center – this site is used to centrally manage documents in your enterprise.

  • eDiscovery Center – this site is used to manage, search and export content for investigation matters.

  • Records Center – this site is used to submit and find important documents that should be stored for long-term archival.

  • Business Intelligence Center – this site is used for providing access to Business Intelligence content in SharePoint.

  • Enterprise Search Center – this site delivers an enterprise search experience.  Users can access the enterprise search center to perform general searches, people searches, conversation or video searches, all in one place. You can easily customize search results pages.

  • My Site Host – this site is used for hosting public profile pages and personal sites. It becomes available after the User Profile Service Application has been configured.

  • Community Portal – this site is used for discovering new communities across the enterprise.

  • Basic Search Center – this site delivers the basic search experience.

  • Visio Process Repository – this site allows you to share and view Visio process diagrams.

Publishing Site Templates

  • Publishing Portal – this site template is used for internet-facing sites or large intranet portals.

  • Enterprise Wiki – this site is used for publishing knowledge that you want to share across the enterprise.

  • Product Catalog – this site is used for managing product catalogs.

If none of these SharePoint site templates meets your needs, you can always create custom templates.

 

This will be the focus of a future blog post as I am busy finishing a FREE Custom Knowledge Base Site Template

Some of the features will include :

  • Creating an ALM web and site template, setup life cycle management and deployment
  • Advanced functionality using Managed Metadata and BCS
  • Document Conversion using Word Automation Services
  • Using the search to build out our feature functionality
  • An Office 365 and SharePoint Online version

 

8 Laws of Software Installation



 

Establishing an ecosystem that works together.

I started thinking about how all of this fits together and how we (as an ecosystem) need to be able to work together–and more importantly–still allow different systems to work how they please.

Many years ago, Kim Cameron (http://www.identityblog.com/) came up with a list of “7 Laws of Identity” (http://www.identityblog.com/?p=352). They outline some core fundamental principles that any identity system should follow to ensure that everyone’s (users, identity providers, and relying parties) security is maximized.

It occurred to me that concepts from the Laws could be recycled in a way that reflects how we can define the general parameters for an installation ecosystem:

1. USER CONTROL AND CONSENT

Users must always be able to make the ultimate decisions about their system, and installers must never take unauthorized actions without the user’s consent. Essentially, we want to ensure that changes the user doesn’t want aren’t being applied to their systems. This means that installers should always provide a clear and accurate description of the product being installed, and ensure that the user is in control of their systems. User interfaces or tools that obscure or break this trust with the user should be avoided. Ideally, user interfaces should strive for some amount of minimalism, not serve up a collection of pedantic screens which users tediously press ‘next’ through. Less UI means that users are far more likely to pay attention to what’s said.

  • Personal Opinion: I guess at the same time, I should point out a particular gripe of mine, especially with open source software installation on Windows. The proliferation of EULAs, and of licenses masquerading as EULAs, in the installation process should stop. Many OSS licenses don’t actually require the end user to agree to their terms before installation, so please stop asking people to ‘agree’ just to make it look like you have a ‘professional’ installer.

  • If you actually have a requirement to record an acceptance of license, perhaps you should be doing that upon first use (or whatever activity actually requires the acceptance of the license)

2. MINIMAL IMPACT FOR A CONSTRAINED USE

Changes to a system should aim to offer the least amount of disruption to the system. Installing unnecessary or unwanted components adds to bloat, and will increase the potential attack surface for malware.

  • Personal Opinion: There is a category of software out there that has opted to provide their software free, but heavily–and often with great vigilance–attempts to install toolbars, add-ins, or other pieces of trash software that serve only to funnel advertising to the user. Others nag the user to change their default search settings, or their browser home page for similar purposes. These behaviors are abusive to customers, and should be avoided at all costs.

3. PLURALISM OF OPERATORS AND TECHNOLOGIES

The ecosystem should easily support many different technologies; there is no one-size-fits-all answer. Software comes in all shapes and sizes. Any well-behaved individual packaging or installation technology should be welcome to participate. Choosing one technology over another should be left to the publisher. Pushing this to its logical end means that any attempt to unify these should permit and encourage use of any part of the ecosystem.

4. TRANSPARENCY, ACCOUNTABILITY, AND REVERSIBILITY

Installation technologies should never obfuscate what is being done, and should never place the system in a state that can’t be undone. Again, keeping in mind that the target system belongs to the user, not the publisher, end users should be able to expect that uninstallation will remove the software without issue and without requiring any additional cleanup work.

  • Personal Opinion: On a slightly tangential note, I’d like to talk about rebooting the system. Windows Installers seem to be overly-eager to reboot the OS, either on installation or uninstallation. Now look–there is a very small class of software that can actually justify having to reboot the system. 99%+ of software should be able to deal with file conflicts, proper setup, manage their running processes or services, manipulating locked files, remove their temporary files, and all of those other things that you think you need to reboot the system in order to finish the work. If you need help on doing this, ask. You’ll be doing everyone a great service.

5. FLEXIBILITY OF INSTALLATION SCOPE

Ideally, a given package should be able to install into different installation scopes (OS/Global scope, Restricted/User scope, and Local/Sandboxed scope) and support installation into online and offline (VM Images) systems. Packaging systems should consider how they can help products to be fully installed in these scopes.

6. INSTALLATION IS NOT CONFIGURATION

Software installation on Windows has, since time began, conflated configuration with installation. This approach introduces several painful problems into the software installation process:

  • It increases the amount of UI during installation, which only leads to additional confusion for the end user.
  • Users may not know the answers to configuration questions, and are now blocked until they can find answers.
  • Configuration during installation is nearly always significantly different from the process to configure (or ‘re-configure’) the product after installation. Again, confusing to the user.
  • Migrating a working configuration to another system is harder when you have to answer questions during installation. Configuration should be easily portable between installations.
  • It increases friction for end users who are trying to automate the installation of software for large numbers of systems.

Really, don’t be that guy.

7. RESPECT THE RESOURCES OF THE TARGET SYSTEM

Software publishers need to respect the system to which their software is being installed. You don’t own that system; the end user does. Common scenarios that can be disrespectful:

  • Launching straight from the installer – installation should not be considered a good opportunity to launch your application. Similar to the configuration issues above, this is frustrating to end users who are looking to automate installations, and can introduce confusion for users who may not have expected it.

  • Automatically starting software at system start – the proliferation of software that insists on starting up with the OS automatically is getting out of control. Software that wishes to launch at start-up should get explicit opt-in consent from the user (after the user has launched the application), not require the user to hunt down the option from a sea of configuration settings to disable it. And not providing a method to trivially disable auto-start is very bad.

  • Checking for software updates – there are two acceptable methods for automatically checking for software updates. Preferred: checking from within the application itself (i.e., at startup) and elegantly handling the update and restart. Acceptable: launching an update checker via a scheduled task, checking and then exiting. Wrong: auto-starting a background or tray application to constantly check for updates.

  • Personal Opinion: This last one is particularly frustrating. Since Windows doesn’t currently have a built-in 3rd party update service (like Windows Update) that will on a schedule check for updates, download and install them, many companies have resorted to running bloated, wasteful apps in the background, waiting for updates. This is terribly disrespectful to the end user’s system, and offers absolutely nothing of value to the user that a scheduled task wouldn’t accomplish with less effort.

8. CONSISTENT EXPERIENCE ACROSS CONTEXTS

Finally, regardless of underlying technology, there should be a common set of commands, tools and processes that allows users to install whatever software in the way that they’d like. Currently, we see that individual installation technologies are all headed in different directions, which makes automating the installation of some pieces of software a nightmare. We as a community need to have the ability to bring all of these pieces of software together without having to manually script each individual combination.

Easy and Practical Tips on How To Achieve Code Maintainability

No such thing as prototyping. I had the pleasure of working with a team who had to maintain and extend a project that started its life as a prototype. A couple of bright programmers put together a concept they thought would benefit their company.

 

It was quickly slapped together and shown to their bosses and a marketing type or two. I know a few of you know where I’m going with this. Within a month, the marketing types had a client signed on to use this new service. Having been thrown together without any real architecture, the product was a real problem to maintain.

I’m not saying never prototype, but when you do, keep in mind, this code may not be thrown away. Code the prototype using good software design skills.

Use TDD. I’m going to approach this from a different direction than most do. Yes, Test Driven Development is great for maintainability because if you break something, your tests will let you know. But it is also a great way to document your code. Documentation in the comments is often wrong, either because the code changes but the documentation doesn’t, or because the original programmer does not take the time to write well-understood documentation. Or, most likely, the original programmer never gets around to writing the comments at all. As long as the tests are being run, they will usually be a reflection of how the programmer expected the process to perform.

If you fully test your code, the next programmer can get a really good idea of what you were thinking about from how you set up your tests. There is a lot you can learn from how you build the objects you pass into your code.
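As a small illustration (the types and the discount rule here are hypothetical, invented purely for the example), a well-named MSTest test documents intent in a way comments rarely do:

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical domain types, defined inline so the sample stands alone.
public class Customer { public bool IsPreferred { get; set; } }

public class InvoiceCalculator {
    public decimal Total(Customer customer, decimal subtotal) {
        return customer.IsPreferred ? subtotal * 0.9m : subtotal;
    }
}

[TestClass]
public class InvoiceCalculatorTests {
    // The test name and the arrange/act/assert layout tell the next
    // programmer what the code is supposed to do, and unlike a comment
    // they fail loudly the moment that stops being true.
    [TestMethod]
    public void Total_AppliesTenPercentDiscount_WhenCustomerIsPreferred() {
        var calculator = new InvoiceCalculator();
        var customer = new Customer { IsPreferred = true };

        decimal total = calculator.Total(customer, 100m);

        Assert.AreEqual(90m, total);
    }
}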

Don’t be cute. Sure, that way of using a while loop instead of a for loop is cool and different, but the next programmer has no idea why you did it. If you need to do something non-standard, document it in place and include why.

Peer review. We all get in a hurry trying to meet a deadline and the brain locks up when trying to remember how to do something simple. We write some type of hack to get around it, thinking we’ll go back and fix it later when more sleep and more caffeine have done their magic. But by then, you’ve slept and forgotten all about the hack. Having to explain why you did it that way to another programmer keeps some really bizarre code from getting checked in.

Build process and dependency control. At first glance, this may not seem to be an important part of writing maintainable code. But starting on any project, there is a huge curve in getting to know and understand that project. If you can avoid spending time figuring out what dependencies the project requires and what settings you have to change in your IDE, you’ve cut down a bit on the time it takes to maintain the project.

Read, read, and read some more. It’s a great time to be a programmer. There are tons of articles and blogs that contain sample code all over the Internet. The publishing industry is trying hard to keep up with the ever-changing landscape. Reading code that contains best practices is an obvious way to improve your own code and help you create maintainable code. But also, reading code that does not follow best practices is a great way to see how not to do it. The trick is to know the difference between the two.

Refactor. When you have the logic worked out and your code now works, take some time to look through your code and see where you can tighten it up. This is a good time to see if you’ve repeated code that can be moved into its own method. Use this time to read the code like a maintenance programmer. Would you understand this code if you saw it for the first time?
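A tiny hypothetical example of that kind of tightening, pulling a repeated expression into one intention-revealing method:

public static class NameFormatter {
    // Before the refactor, every call site repeated the same expression:
    //   (customer.FirstName ?? "").Trim().ToUpperInvariant()
    // After extracting it, there is a single, well-named place to change
    // the rule, and the call sites read as plain English.
    public static string NormalizeName(string name) {
        return (name ?? string.Empty).Trim().ToUpperInvariant();
    }
}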

Leave the code in better condition than when you found it. Many of us are loath to change working code just because it’s “ugly,” and I’m not giving out a license to wholesale-refactor any process you open in an editor. But if you’re updating code that is surrounded by hard-to-maintain code, don’t take that as permission to write more bad code.

Final Thoughts

Thinking that your code will never be touched again is many things, but especially unrealistic. At the very least, business requirements change and may require modifications to your code. You may even be the person that has to maintain it, and trust me, after 6 months or so of writing other code, you will not be in the same frame of mind. Spend some time writing code that won’t be cursed by the next programmer.

How To : Use the REST API and AngularJS to Create a Web Part to retrieve List Items

Introduction

This article explains how to get data from a SharePoint list using AngularJS and the REST API. I used the REST API to talk to SharePoint and get the data from the list.

In the script below, we first create an Angular controller with the name “spCustomerController”. We also inject the $scope and $http services.

The $http service fetches the list data from the specific columns of the SharePoint list. $scope is the glue between a controller and a view; it acts as the execution context for expressions. Angular expressions are code snippets that are usually placed in bindings such as {{ expression }}.


Use the following procedure to create a sample.

<h1>Welcome To Angular JS SharePoint 2013 REST API !!</h1>

<div ng-app="SharePointAngApp" class="row">
    <div ng-controller="spCustomerController" class="span10">
        <table class="table table-condensed table-hover">
            <tr>
                <th>Title</th>
                <th>Employee</th>
                <th>Company</th>
            </tr>
            <tr ng-repeat="customer in customers">
                <td>{{customer.Title}}</td>
                <td>{{customer.Employee}}</td>
                <td>{{customer.Company}}</td>
            </tr>
        </table>
    </div>
</div>

 

Step 1: Navigate to your SharePoint 2013 site.

Step 2: From this page select the Site Actions | Edit Page.

Edit the page, go to the “Insert” tab in the Ribbon and click the “Web Part” option. In the “Web Parts” picker area, go to the “Media and Content” category, select the “Script Editor” Web Part and press the “Add button”.

Step 3: Once the Web Part is inserted into the page, you will see an “EDIT SNIPPET” link; click it. You can insert the HTML and/or JavaScript as in the following

<style>
table, td, th {
    border: 1px solid green;
}

th {
    background-color: green;
    color: white;
}
</style>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.0.1/angular.min.js"></script>
<script src="http://code.jquery.com/ui/1.10.3/jquery-ui.min.js"></script>

<script>
    var myAngApp = angular.module('SharePointAngApp', []);
    myAngApp.controller('spCustomerController', function ($scope, $http) {
        $http({
            method: 'GET',
            url: _spPageContextInfo.webAbsoluteUrl + "/_api/web/lists/getByTitle('InfoList')/items?$select=Title,Employee,Company",
            headers: { "Accept": "application/json;odata=verbose" }
        }).success(function (data, status, headers, config) {
            $scope.customers = data.d.results;
        }).error(function (data, status, headers, config) {

        });
    });
</script>

<h1>Angular JS SharePoint 2013 REST API !!</h1>

<div ng-app="SharePointAngApp" class="row">
    <div ng-controller="spCustomerController" class="span10">
        <table class="table table-condensed table-hover">
            <tr>
                <th>Title</th>
                <th>Employee</th>
                <th>Company</th>
            </tr>
            <tr ng-repeat="customer in customers">
                <td>{{customer.Title}}</td>
                <td>{{customer.Employee}}</td>
                <td>{{customer.Company}}</td>
            </tr>
        </table>
    </div>
</div>

 

Finally, the result should look as below:

 


How To : Develop and Deploy Azure Applications Deep Dive (Part 1)


 

Senior C# SharePoint Developer with 10 years experience and BSC Degree looking for serious new technical challenges and collaborative, team environment (Gauteng, South Africa)

CV available at http://1drv.ms/1lR6qtO


There are many reasons to deploy an application or services onto Azure, the Microsoft cloud services platform. These include reducing operation and hardware costs by paying for just what you use, building applications that are able to scale almost infinitely, enormous storage capacity, geo-location … the list goes on and on.

Yet a platform is intellectually interesting only when developers can actually use it. Developers are the heart and soul of any platform release—the very definition of a successful release is the large number of developers deploying applications and services on it. Microsoft has always focused on providing the best development experience for a range of platforms—whether established or emerging—with Visual Studio, and that continues for cloud computing. Microsoft added direct support for building Azure applications to Visual Studio 2010 and Visual Web Developer 2010 Express.

This article will walk you through using Visual Studio 2010 for the entirety of the Azure application development lifecycle. Note that even if you aren’t a Visual Studio user today, you can still evaluate Azure development for free, using the Azure support in Visual Web Developer 2010 Express.

Creating a Cloud Service

Start Visual Studio 2010, click on the File menu and choose New | Project to bring up the New Project dialog. Under Installed Templates | Visual C# (or Visual Basic), select the Cloud node. This displays an Enable Azure Tools project template that, when clicked, will show you a page with a button to install the Azure Tools for Visual Studio.

Before installing the Azure Tools, be sure to install IIS on your machine. IIS is used by the local development simulation of the cloud. The easiest way to install IIS is by using the Web Platform Installer available at microsoft.com/web. Select the Platform tab and click to include the recommended products in the Web server.

Download and install the Azure Tools and restart Visual Studio. As you’ll see, the Enable Azure Tools project template has been replaced by an Azure Cloud Service project template. Select this template to bring up the New Cloud Service Project dialog shown in Figure 1. This dialog enables you to add roles to a cloud service.

image: Adding Roles to a New Cloud Service Project

Figure 1 Adding Roles to a New Cloud Service Project

An Azure role is an individually scalable component running in the cloud, where each instance of a role corresponds to a virtual machine (VM) instance.

There are two types of role:

  • A Web role is a Web application running on IIS. It is accessible via an HTTP or HTTPS endpoint.
  • A Worker role is a background processing application that runs arbitrary .NET code. It also has the ability to expose Internet-facing and internal endpoints.

As a practical example, I can have a Web role in my cloud service that implements a Web site my users can reach via a URL such as http://[somename].cloudapp.net. I can also have a Worker role that processes a set of data used by that Web role.

I can set the number of instances of each role independently, such as three Web role instances and two Worker role instances, and this corresponds to having three VMs in the cloud running my Web role and two VMs in the cloud running my Worker role.

You can use the New Cloud Service Project dialog to create a cloud service with any number of Web and Worker roles and use a different template for each role. You can choose which template to use to create each role. For example, you can create a Web role using the ASP.NET Web Role template, WCF Service Role template, or the ASP.NET MVC Role template.

After adding roles to the cloud service and clicking OK, Visual Studio will create a solution that includes the cloud service project and a project corresponding to each role you added. Figure 2 shows an example cloud service that contains two Web roles and a Worker role.

image: Projects Created for Roles in the Cloud Service

Figure 2 Projects Created for Roles in the Cloud Service

The Web roles are ASP.NET Web application projects with only a couple of differences. WebRole1 contains references to the following assemblies that are not referenced with a standard ASP.NET Web application:

  • Microsoft.WindowsAzure.Diagnostics (diagnostics and logging APIs)
  • Microsoft.WindowsAzure.ServiceRuntime (environment and runtime APIs)
  • Microsoft.WindowsAzure.StorageClient (.NET API to access the Azure storage services for blobs, tables and queues)

The file WebRole.cs contains code to set up logging and diagnostics, and a trace listener is added in the web.config/app.config that allows you to use the standard .NET logging API.
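For example, once that listener is wired up, ordinary Trace calls from role code flow into Azure diagnostics; a minimal sketch (the class and method here are hypothetical):

using System.Diagnostics;

public class OrderProcessor {
    public void Process(int orderId) {
        // Picked up by the diagnostics trace listener configured in
        // WebRole.cs / web.config, rather than being lost.
        Trace.TraceInformation("Processing order {0}", orderId);
    }
}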

The cloud service project acts as a deployment project, listing which roles are included in the cloud service, along with the definition and configuration files. It provides Azure-specific run, debug and publish functionality.

It is easy to add or remove roles in the cloud service after project creation has completed. To add other roles to this cloud service, right-click on the Roles node in the cloud service and select Add | New Web Role Project or Add | New Worker Role Project. Selecting either of these options brings up the Add New Role dialog where you can choose which project template to use when adding the role.

You can add any ASP.NET Web Role project to the solution by right-clicking on the Roles node, selecting Add | Web Role Project in the solution, and selecting the project to associate as a Web role.

To delete, simply select the role to delete and hit the Delete key. The project can then be removed.

You can also right-click on the roles under the Roles node and select Properties to bring up a Configuration tab for that role (see Figure 3). This Configuration tab makes it easy to add or modify the values in both the ServiceConfiguration.cscfg and ServiceDefinition.csdef files.

Figure 3 Configuring a Role
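Under the covers, that Configuration tab edits entries such as the instance counts in ServiceConfiguration.cscfg. A trimmed sketch, with role names matching the earlier example and other settings omitted:

<ServiceConfiguration serviceName="CloudService1"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <!-- three load-balanced VM instances of the Web role -->
    <Instances count="3" />
  </Role>
  <Role name="WorkerRole1">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>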

When developing for Azure, the cloud service project in your solution must be the StartUp project for debugging to work correctly. A project is the StartUp project when it is shown in bold in the Solution Explorer. To set the active project, right-click on the project and select Set as StartUp project.

Data in the Cloud

Now that you have your solution set up for Azure, you can leverage your ASP.NET skills to develop your application.

As you are coding, you’ll want to consider the Azure model for making your application scalable. To handle additional traffic to your application, you increase the number of instances for each role. This means requests will be load-balanced across your roles, and that will affect how you design and implement your application.

In particular, it dictates how you access and store your data. Many familiar data storage and retrieval methods are not scalable, and therefore are not cloud-friendly. For example, storing data on the local file system shouldn’t be used in the cloud because it doesn’t scale.

To take advantage of the scaling nature of the cloud, you need to be aware of the new storage services. Azure Storage provides scalable blob, queue, and table storage services, and Microsoft SQL Azure provides a cloud-based relational database service built on SQL Server technologies. Blobs are used for storage of named files along with metadata. The queue service provides reliable storage and delivery of messages. The table service gives you structured storage, where a table is a set of entities that each contain a set of properties.

To help developers use these services, the Azure SDK ships with a Development Storage service that simulates the way these storage services run in the cloud. That is, developers can write their applications targeting the Development Storage services using the same APIs that target the cloud storage services.
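As a small sketch of that idea with the StorageClient library (the queue name is illustrative), the same code runs against Development Storage locally and against the cloud after swapping only the account source:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class QueueExample {
    public static void EnqueueWorkItem(string blobName) {
        // Development Storage locally; in the cloud you would obtain the
        // account from the service configuration instead.
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;

        CloudQueueClient client = account.CreateCloudQueueClient();
        CloudQueue queue = client.GetQueueReference("workitems");
        queue.CreateIfNotExist();

        // A worker role later retrieves this with queue.GetMessage().
        queue.AddMessage(new CloudQueueMessage(blobName));
    }
}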

Debugging

To demonstrate running and debugging on Azure locally, let’s use one of the samples from code.msdn.microsoft.com/windowsazuresamples. This MSDN Code Gallery page contains a number of code samples to help you get started with building scalable Web application and services that run on Azure. Download the samples for Visual Studio 2010, then extract all the files to an accessible location like your Documents folder.

The Development Fabric requires running in elevated mode, so start Visual Studio 2010 as an administrator. Then, navigate to where you extracted the samples and open the Thumbnails solution, a sample service that demonstrates the use of a Web role and a Worker role, as well as the use of the StorageClient library to interact with both the Queue and Blob services.

When you open the solution, you’ll notice three different projects. Thumbnails is the cloud service that associates two roles, Thumbnails_WebRole and Thumbnails_WorkerRole. Thumbnails_WebRole is the Web role project that provides a front-end application to the user to upload photos and adds a work item to the queue. Thumbnails_WorkerRole is the Worker role project that fetches the work item from the queue and creates thumbnails in the designated directory.

Add a breakpoint to the submitButton_Click method in the Default.aspx.cs file. This breakpoint will get hit when the user selects an image and clicks Submit on the page.

 
protected void submitButton_Click(
  object sender, EventArgs e) {
  if (upload.HasFile) {
    // Build a unique blob name from the current tick count and a GUID.
    var name = string.Format("{0:d10}-{1}", DateTime.Now.Ticks,
      Guid.NewGuid());
    GetPhotoGalleryContainer().GetBlockBlobReference(name)
      .UploadFromStream(upload.FileContent);
    // ... the blob name is then added to the queue for the worker role.
  }
}

Now add a breakpoint in the Run method of the worker file, WorkerRole.cs, right after the code that tries to retrieve a message from the queue and checks if one actually exists. This breakpoint will get hit when the Web role puts a message in the queue that is retrieved by the worker.

 
while (true) {
  try {
    CloudQueueMessage msg = queue.GetMessage();
    if (msg != null) {
      // The message body is the name of the uploaded blob
      string path = msg.AsString;
      // ... create and store the thumbnail, then delete the message
    }
  } catch (StorageClientException e) { /* back off and retry */ }
}

To debug the application, go to the Debug menu and select Start Debugging. Visual Studio will build your project, start the Development Fabric, initialize the Development Storage (if run for the first time), package the deployment, attach to all role instances, and then launch the browser pointing to the Web role (see Figure 4).


Figure 4 Running the Thumbnails Sample

At this point, you’ll see that the browser points to your Web role and that the notifications area of the taskbar shows the Development Fabric has started. The Development Fabric is a simulation environment that runs role instances on your machine in much the way they run in the real cloud.

Right-click on the Azure notification icon in the taskbar and click on Show Development Fabric UI. This will launch the Development Fabric application itself, which allows you to perform various operations on your deployments, such as viewing logs and restarting and deleting deployments (see Figure 5). Notice that the Development Fabric contains a new deployment that hosts one Web role instance and one Worker role instance.


Figure 5 The Development Fabric

Look at the processes that Visual Studio attached to (Debug/Windows/Processes); you’ll notice there are three: WaWebHost.exe, WaWorkerHost.exe and iexplore.exe.

WaWebHost (Azure Web instance Host) and WaWorkerHost (Azure Worker instance Host) host your Web role and Worker role instances, respectively. In the cloud, each instance is hosted in its own VM, whereas on the local development simulation each role instance is hosted in a separate process and Visual Studio attaches to all of them.

By default, Visual Studio attaches using the managed debugger. If you want to use another one, like the native debugger, pick it from the Properties of the corresponding role project. For Web role projects, the debugger option is located under the project properties Web tab. For Worker role projects, the option is under the project properties Debug tab.

By default, Visual Studio uses the script engine to attach to Internet Explorer. To debug Silverlight applications, you need to enable the Silverlight debugger from the Web role project Properties.

Browse to an image you’d like to upload and click Submit. Visual Studio stops at the breakpoint you set inside the submitButton_Click method, giving you all of the debugging features you’d expect from Visual Studio. Hit F5 to continue; the submitButton_Click method generates a unique name for the file, uploads the image stream to Blob storage, and adds a message on the queue that contains the file name.

Now you will see Visual Studio pause at the breakpoint set in the Worker role, which means the worker received a message from the queue and it is ready to process the image. Again, you have all of the debugging features you would expect.

Hit F5 to continue; the worker will get the file name from the message queue, retrieve the image stream from the Blob service, create a thumbnail image, and upload the new thumbnail to the Blob service’s thumbnails directory, where it will be displayed by the Web role.
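For reference, the thumbnail step itself boils down to something like the following sketch, assuming System.Drawing is available; the container and name parameters stand in for the sample’s own variables, which may be named differently:

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using Microsoft.WindowsAzure.StorageClient;

class ThumbnailSketch {
  // Create a 128x128 thumbnail from the uploaded image stream and upload it
  // under the "thumbnails" directory of the same container.
  static void CreateThumbnail(CloudBlobContainer container, string name, Stream input) {
    using (Image image = Image.FromStream(input))
    using (Image thumb = image.GetThumbnailImage(128, 128, () => false, IntPtr.Zero))
    using (var output = new MemoryStream()) {
      thumb.Save(output, ImageFormat.Jpeg);
      output.Position = 0;
      container.GetBlobReference("thumbnails/" + name).UploadFromStream(output);
    }
  }
}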

In the next installment, we will look at the deployment process.

A Look At : The New Search Functionality in SharePoint Online and how Developers can make use of it


 

Search functionality in SharePoint 2013 includes several enhancements, custom content processing and a new framework for presenting search result types. SharePoint Server 2013 presents a new search architecture that includes substantial changes and additions to the search components and databases.

Also, there have been significant enhancements made to the Keyword Query Language (KQL).

Some features and functionality have been deprecated from the previous version of SharePoint. The search user interface has also been improved to make search results more interactive. For example, users can rest the pointer over a search result to see a content preview in the hover panel to the right of the result.

Office 365 with SharePoint 2013 now exposes the admin features of the Search Service Application. This is a significant advance; nearly all of the new features listed here were missing in Office 365 with SharePoint 2010. The following screen capture shows the SharePoint central administration view for the Search section.

Here the administrator can manage all aspects of the search experience for end users, improving the relevancy of results based on your content and metadata.

Search helps users quickly return to important sites and documents by remembering what they have previously searched and clicked. The results of previously searched and clicked items are displayed as query suggestions at the top of the results page.

In addition to the default manner in which search results are differentiated, site collection administrators and site owners can create and use result types to customize how results are displayed for important documents. A result type is a rule that identifies a type of result and a way to display it.

 

Manage Search Schema

Managed properties are used to restrict search results, and present the content of the properties in search results. Crawled properties are automatically extracted from crawled content. All the changes to properties will take effect only after the next full crawl.

Under the search schema section, the administrator can:

  • View, create, or modify managed properties and map crawled properties to managed properties
  • View or modify crawled properties, or view crawled properties in a particular category
  • View or modify categories, or view crawled properties in a particular category

While creating a new managed property, the ‘Mappings to crawled properties’ setting is one of the key attributes to configure for the new property.

 

 

Manage Search Dictionaries

The search dictionaries are managed in the Taxonomy Term Store:

  People        Search Dictionaries          System
  ------------  ---------------------------  ---------------
  Department    Company Exclusions           Hashtags
  Job Title     Company Inclusions           Keywords
  Location      Query Spelling Exclusions    Orphaned terms
                Query Spelling Inclusions

 

Manage Authoritative Pages

Search in SharePoint 2013 will analyze the collection of authoritative and non-authoritative pages to determine the ranking of search results. The authoritative sites are of two kinds:

  • Authoritative Site Pages
  • Non-authoritative Site Pages

Authoritative site pages are links that the administrator has designated as the most relevant information. There can be multiple authoritative pages in each environment, and there is an option for specifying second- and third-level authorities for search ranking. Non-authoritative site pages are content from certain sites that can be ranked lower than the rest of the content in the site.

 

Query Suggestion Settings

SharePoint Search comprises various features that you can leverage for building productivity solutions. One of the interesting and useful capabilities is query suggestions. Query suggestions are administered by two options:

  • Always Suggest Phrases
  • Never Suggest Phrases

Manage Result Sources

Result sources are used to scope search results and federate queries to external sources, such as Internet search engines. Once a result source is defined, we can configure search web parts and query rule actions to use it.

How are result sources managed? A SharePoint Online administrator can manage result sources for all site collections and sites residing under the same tenant. A site collection administrator or a site owner can manage result sources for a site collection or a site, respectively.

SharePoint 2013 provides 16 pre-defined result sources. The pre-configured default result source is Local SharePoint Results. We can designate a different result source as the default as required.

When creating a new result source, Protocol and Query Transform are the two important parameters that tell the result source what to do in SharePoint.

Protocol – Local SharePoint for results from the index of this search service; OpenSearch 1.0/1.1 for results from a search engine that uses that protocol; Exchange for results from an Exchange source; Remote SharePoint for results from the index of a search service hosted in another farm.

Query Transform – Change incoming queries to use this new query text instead. Include the incoming query in the new text by using the query variable “{searchTerms}“.

Use this to scope results. For example, to only return OneNote items, set the new text to “{searchTerms} fileextension=one“. Then, an incoming query “sharepoint” becomes “sharepoint fileextension=one“. Launch the Query Builder for additional options.

 

Manage Query Rules

Query rules conditionally promote search results and show blocks of supplementary results based on the rules created in SharePoint. In a query rule, you can specify conditions and correlated actions without writing any code. Users with the site collection administrator or site owner permission level can create and manage query rules.

 

Manage Query Client Types

Query client types are one of the new search features in SharePoint 2013. A client type identifies the application a search query is sent from. Applications are prioritized by tiers; the top tier has the highest priority. When the resource limit is reached, query throttling is turned on, and the search system processes queries from the top tier down to the bottom tier.

System Client Types are available out-of-the box, and cannot be deleted. We can add a new custom Client Type by clicking on New Client Type.

 

Remove Search Results

To remove data from the search results, type the URLs that need to be removed. All URLs listed in the text box are removed from search results immediately after the Remove Now button is clicked.

View Usage Reports

Here the administrator can view usage and search reports, for example Query Rule Usage by Day, Top Queries by Day, and so on.

Search Center Settings

This setting maps the default search center, usually the Enterprise Search Center site that was created to search all SharePoint sites in the organization.

Export Search Configuration

Creates a file that includes all customized query rules, result sources, result types, ranking models, and site search settings in the current tenant (but none that shipped with SharePoint), which can then be imported into other tenants.

Import Search Configuration

If you have a search configuration you’d like to import, browse for it below. Settings imported from the file will be created and activated as part of the site. You can modify any of the settings after import.

Crawl Log Permissions

Grant users read access to crawl log information for this tenant.

Search Client Object Model

SharePoint 2013 Search includes a client object model (CSOM) that enables access to most of the Query object model functionality for online, on-premises, and mobile development. You can use the Search CSOM to create client applications that run on a machine that does not have SharePoint 2013 installed to return SharePoint 2013 Preview search results.

The Search CSOM includes a Microsoft .NET Framework managed client object model and JavaScript object model, and it is built on SharePoint 2013. First, client code accesses the SharePoint CSOM. Then, client code accesses the Search CSOM.

NOTE: Custom search solutions in SharePoint Server 2013 do not support SQL syntax. Search in SharePoint 2013 supports FQL syntax and KQL syntax for custom search solutions.

We can also configure crawled and managed properties, and configure result sources, which replace the federated locations and search scopes of SharePoint Search 2010.
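As a minimal sketch of executing a KQL query through the Search CSOM, assuming the Microsoft.SharePoint.Client and Microsoft.SharePoint.Client.Search assemblies from the SharePoint 2013 client SDK (the site URL is a placeholder):

using System;
using Microsoft.SharePoint.Client;
using Microsoft.SharePoint.Client.Search.Query;

class SearchCsomSketch {
  static void Main() {
    // Supply credentials appropriate to your environment (omitted here)
    using (var ctx = new ClientContext("https://contoso.sharepoint.com")) {
      var query = new KeywordQuery(ctx) {
        QueryText = "sharepoint fileextension=docx",  // a KQL query
        RowLimit = 10
      };
      var executor = new SearchExecutor(ctx);
      ClientResult<ResultTableCollection> results = executor.ExecuteQuery(query);
      ctx.ExecuteQuery();

      // Each result row is a dictionary of managed property values
      foreach (var row in results.Value[0].ResultRows)
        Console.WriteLine(row["Title"]);
    }
  }
}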

 

Introduction to Business Connectivity Services (BCS)

Microsoft Business Connectivity Services (BCS) is included in both SharePoint 2013 and the Office 2013 suites. With Business Connectivity Services, you can use SharePoint 2013 and Office 2013 clients as an interface into data that doesn’t live in SharePoint 2013 itself. It does this by making a connection to the data source, running a query, and returning the results.

Business Connectivity Services returns the results to the user through an external list, an app for SharePoint, or Office 2013, where you can perform different operations against them, such as Create, Read, Update, Delete, and Query (CRUDQ). It can access external data sources through OData (the Open Data Protocol, an open web protocol for querying and updating data), Windows Communication Foundation (WCF) endpoints, web services, cloud-based services, and .NET assemblies, or through custom connectors.
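Because an external list behaves like an ordinary list to client code, reading one through the managed CSOM is straightforward. Here is a minimal sketch, assuming a hypothetical external list named "Customers" has already been created from an external content type:

using System;
using Microsoft.SharePoint.Client;

class BcsReadSketch {
  static void Main() {
    using (var ctx = new ClientContext("https://contoso.sharepoint.com")) {
      // External lists are surfaced through the same list API as normal lists
      List customers = ctx.Web.Lists.GetByTitle("Customers");
      ListItemCollection items = customers.GetItems(CamlQuery.CreateAllItemsQuery());
      ctx.Load(items);
      ctx.ExecuteQuery();

      foreach (ListItem item in items)
        Console.WriteLine(item["Title"]);
    }
  }
}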

The following screen capture shows the BCS features and configuration options available under the SharePoint admin center in Office 365.

Understanding Business Intelligence

Though business intelligence is technology driven, it is more about business requirements and less about technology. The BI champion/sponsor in the organization defines the vision and mission.


The leader must have the ability to precisely define the business intelligence requirements of the organization: the format of the reports, the relationships between the different data elements, and the version of the data to be used. The leaders also need to specify how much history needs to be included and how often the data needs to be provided to the different stakeholders of the process.

BI is then driven by the business objective which may not necessarily always be to reduce the cost or increase the top line/ bottom line for an organization, but to use the data analysis in bringing efficiencies in the process or enhance the service experience for the customer.

The impetus for the BI initiative is the availability of data and technology for data analysis.  Ideally the organization should have at least two or three years of data in electronic format for meaningful analysis. The larger the volume of core data available in the systems, the more meaningful will be the output that can be expected from Business Intelligence.

However, it must be recognized that data may not always be structured, and data coming from multiple sources may not be in the same format. Data will have to be interpreted on the basis of logical assumptions, and data gaps will have to be identified upfront so that changes can be made to business processes at the point of data capture.

The Management must be made aware that resource and time commitment for BI is much higher than resource commitment for IT projects and there must be a high level of management commitment to making the BI project a success.

  1. Business leaders and IT analysts will have to allocate a substantial amount of time to mapping business requirements to IT system capabilities, both during the design and implementation phases of the project. The leaders and IT developers will have to constantly interact for a proper representation of the business questions in terms of analytical outputs.

  2. IT professionals involved in the process will have to explain to the business leadership the nature and meaning of the data that is available in the systems. They will have to clarify how exceptions are to be handled by the leadership, and also the limitations of the systems. The implication is that IT resources will also have to make a time commitment for BI.

  3. Subject matter experts also have a significant role to play in business intelligence projects. As ultimate users of the system, they are in the best position to test the outputs and validate the significance of the queries and the outputs derived from them. They have the experience and the ability to flag exceptions and help fix them. This implies that subject matter experts too need to contribute a significant amount of time to the success of the project. They may even need to join the project team on a full-time basis during the testing phase.

The Project plan must reflect the level of business personnel who will be involved and the organization must be willing to release these people to join the project team as and when a request for their services is made.

It also follows that the project team and all those involved in BI must be open to learning and acquiring the skills that are essential for the effective functioning of the BI system that is being put in place.

Having said all this, it is necessary to point out that business intelligence cannot be a time-bound project. Nor can the teams be disbanded with the first successful run of a data query or queries.

Query design, testing, redesign and use of new toolsets are inevitable. In short, BI is an evolving system that cannot be pinned down and bounded by traditional project management definitions.

The BI requirements change as business requirements evolve and change. No single requirement definition can be characterized as a permanent, unchanging requirement.  The Business leaders, IT analysts and Subject matter experts will have to be constantly engaged in designing and developing queries; testing the outputs on field formations; obtaining feedback on the usefulness or otherwise of the outputs and exceptions that need to be handled.  It is an iterative process.

It requires the institution of an agile system development and process management approach. Highly skilled personnel must constantly and continuously work together to deliver on the objectives of BI, with few requirements defined upfront and more requirements designed and refined as the work proceeds.

Scope of work will have to be “time-boxed” to each cycle of the project and goals and objectives of each time box will have to be specified separately.  Cycles which cannot be completed within the time specified will have to be deferred and included as part of the future development cycles. As a result, traditional project management methodologies will fail.

Since change is the only constant in BI, definition of a change management strategy is an imperative. The strategy must be built around the recognition that Business requirements change, changing BI requirements/queries; resulting in a change in the type of toolsets used and the skillsets that are required by BI stakeholders.

It should be remembered that people are resistant to change. Consumers of BI reports are no exception. They need to be educated about the benefits of the exercise, and the producers of the data must be aligned to ensure data quality is never compromised, by placing appropriate controls in the systems.

Training needs must be studied and documented, and training organized, whenever there is a change in any one or more of the components that operationalize the business intelligence unit.

The organization must also recognize that reporting requirements and formats will change with every change in the BI requirement. All reporting formats cannot be axed at once and all reporting formats cannot change overnight. The change must be planned and initiated based on schedules that have been agreed upon by the different stakeholders.

Existing reports and tools should be retired gradually and transition periods must be orchestrated carefully and thoughtfully. Trainings must be organized to transition all stakeholders to the new formats.  Organization of workshops and change management seminars must be part and parcel of the Business Intelligence unit’s functioning.

Finally, it must be reiterated that BI is not a project. It is a program.  The solution must dovetail into the existing environment and reinforce the business processes that are in use.  Any data extraction exercise for BI must be done without disturbing the workflow in the organization or impacting the reliability of the information that is being gathered during business operations.

How To : Understand and Edit the Onet.xml File


When Microsoft SharePoint Foundation is installed, several Onet.xml files are installed—one in %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\GLOBAL\XML that applies globally to the deployment, and several in different folders within %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\SiteTemplates. Each file in the latter group corresponds to a site definition that is included with SharePoint Foundation. They include, for example, Blog sites, the Central Administration site, Meeting Workspace sites, and team SharePoint sites. Only the last two of these families contain more than one site definition configuration.

The global Onet.xml file defines list templates for hidden lists, list base types, a default definition configuration, and modules that apply globally to the deployment. Each Onet.xml file in a subdirectory of the %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\SiteTemplates directory can define navigational areas, list templates, document templates, configurations, modules, components, and server email footers that are used in the site definition to which it corresponds.

Note: An Onet.xml file is also part of a web template. Some Collaborative Application Markup Language (CAML) elements that are possible in the Onet.xml files of site definitions cannot be used in the Onet.xml files that are part of web templates—for example, the DocumentTemplates element.

Depending on where an Onet.xml file is located and whether it is part of a site definition or a web template, the markup in the file does some or all of the following:

  • Specifies the web-scoped and site collection-scoped Features that are built-in to websites that are created from the site definition or web template.
  • Specifies the list types, pages, files, and Web Parts that are built-in to websites that are created from the site definition or web template.
  • Defines the top and side navigation areas that appear on the home page and in list views for a site definition.
  • Specifies the list definitions that are used in each site definition and whether they are available for creating lists in the user interface (UI).
  • Specifies document templates that are available in the site definition for creating document library lists in the UI, and specifies the files that are used in the document templates.
  • Defines the base list types from which default SharePoint Foundation lists are derived. (Only the global Onet.xml file serves this function. You cannot define new base list types.)
  • Specifies SharePoint Foundation components.
  • Defines the footer section used in server email.

 

You can perform the following kinds of tasks in a custom Onet.xml file that is used for either a custom site definition or a custom web template:

  • Specify an alternative cascading style sheet (CSS) file, JavaScript file, or ASPX header file for a site definition.
  • Modify navigation areas for the home page and list pages.
  • Add a new list definition as an option in the UI.
  • Define one configuration for the site definition or web template, specifying the lists, modules, files, and Web Parts that are included when the configuration is instantiated.
  • Specify Features to be included automatically with websites that are created from the site definition or web template.

You can perform the following kinds of tasks in a custom Onet.xml file that is used for a custom site definition, but not in one that is used for a custom web template:

  1. Add a document template for creating document libraries.
  2. Define more than one configuration for a site definition, specifying the lists, modules, files, and Web Parts that are included when the configuration is instantiated.
  3. Define a custom footer for email messages that are sent from websites that are based on the site definition.
  4. Define custom components, such as a file dialog box post processor, for websites that are based on the site definition.
Caution: You cannot create new base list types in either a site definition or a web template. The base types that are defined in the global Onet.xml file are the only base types that are supported.

Caution: We do not support making changes to an originally installed Onet.xml file. Changing this file can break existing sites. Also, when you install updates or service packs for SharePoint Foundation, or when you upgrade an installation to the next product version, there may be a new version of the Microsoft-supplied file, and installation cannot merge your changes with the new version. If you want a site type that is similar to a built-in site type, and you cannot use a web template, create a new site definition with its own Onet.xml file; do not modify the original file. For more information, see How to: Create a Custom Site Definition and Configuration. For more information about when you cannot use a web template, see Deciding Between Custom Web Templates and Custom Site Definitions.

The following sections define the various elements of the Onet.xml file.

Project Element

The top-level Project element specifies a default name for sites that are created through any of the site configurations in the site definition. It also specifies the directory that contains subfolders in which the files for each list definition reside.

Note: Unless indicated otherwise, excerpts used in the following examples are taken from the Onet.xml file for the STS site definition.

<Project 
  Title="$Resources:core,onet_TeamWebSite;" 
  Revision="2" 
  ListDir="$Resources:core,lists_Folder;" 
  xmlns:ows="Microsoft SharePoint" 
  UIVersion="4">

Note: In all the examples in this topic, the strings that begin with “$Resources” are constants that are defined in a .resx file. For example, “$Resources:onet_TeamWebSite” is defined in the core.resx file as “Team Site”. When you create a custom Onet.xml file, you can use literal strings.

This element can also have several other attributes. For more information, see Project Element (Site).

The Project element does not contain any attribute that identifies the site definition that it defines. Each Onet.xml file is associated with a site definition by virtue of the directory path in which it resides, which (except for the global Onet.xml) is %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\SiteTemplates\site_type\XML\, where site_type is the name of the site definition, such as STS or MPS. The Onet.xml file for a web template is associated with the template by virtue of being in the .wsp package for the web template.

 

NavBars Element

The NavBars element contains definitions for the top navigation area that is displayed on the home page or in list views, and definitions for the side navigation area that is displayed on the home page.

Note: A NavBar is not necessarily a toolbar. For example, it can be a tree of links.

<NavBars>
  <NavBar 
    Name="$Resources:core,category_Top;" 
    Separator="&amp;nbsp;&amp;nbsp;&amp;nbsp;" 
    Body="&lt;a ID='onettopnavbar#LABEL_ID#' href='#URL#' accesskey='J'&gt;#LABEL#&lt;/a&gt;" 
    ID="1002" />
  <NavBar 
    Name="$Resources:core,category_Documents;" 
    Prefix="&lt;table border='0' cellpadding='4' cellspacing='0'&gt;" 
    Body="&lt;tr&gt;&lt;td&gt;&lt;table border='0' cellpadding='0' cellspacing='0'&gt;&lt;tr&gt;&lt;td&gt;&lt;img src='/_layouts/images/blank.gif' id='100' alt='' border='0'&gt;&amp;nbsp;&lt;/td&gt;&lt;td valign='top'&gt;&lt;a id='onetleftnavbar#LABEL_ID#' href='#URL#'&gt;#LABEL#&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;&lt;/td&gt;&lt;/tr&gt;" 
    Suffix="&lt;/table&gt;" 
    ID="1004" />
    ...
</NavBars>

A NavBarLink element defines links for the top or side navigational area, and an entire NavBar section groups new links in the side area. Each NavBar element specifies a display name and a unique ID for the navigation bar, and it defines how to display the navigation bar.

For information about customizing the navigation areas on SharePoint Foundation pages, see Website Navigation.

ListTemplates Element

The ListTemplates section specifies the list definitions that are part of a site definition. This markup is still supported only for backward compatibility. New custom list types should be defined as Features. The following example is taken from the Onet.xml file for the Meetings Workspace site definition.

<ListTemplates>
  <ListTemplate 
    Name="meetings" 
    DisplayName="$Resources:xml_onet_mwsidmeetingDisp;" 
    Type="200" 
    BaseType="0" 
    Unique="TRUE" 
    Hidden="TRUE" 
    HiddenList="TRUE" 
    DontSaveInTemplate="TRUE" 
    SecurityBits="11" 
    Description="$Resources:xml_onet_mwsidmeetingDesc;"
    Image="/_layouts/images/itevent.gif">
  </ListTemplate>
  <ListTemplate 
    Name="agenda" 
    DisplayName="$Resources:xml_onet_mwsidagendaDisp;" 
    Type="201" 
    BaseType="0" 
    FolderCreation="FALSE" 
    DisallowContentTypes="TRUE" 
    SecurityBits="11" 
    Description="$Resources:xml_onet_mwsidagendaDesc" 
    Image="/_layouts/images/itagnda.gif">
  </ListTemplate>
    ...
</ListTemplates>

Each ListTemplate element specifies an internal name that identifies the list definition. The ListTemplate element also specifies a display name for the list definition and whether the option to add a link on the Quick Launch bar appears selected by default in the list-creation UI. In addition, this element specifies the description of the list definition and the path to the image that represents the list definition, both of which are displayed in the list-creation UI. If Hidden=”TRUE” is specified, the list definition does not appear as an option in the list-creation UI.

The ListTemplate element has two attributes for type: Type and BaseType. The Type attribute specifies a unique identifier for the list definition, and the BaseType attribute identifies the base list type for the list definition and corresponds to the Type value that is specified for one of the base list types that are defined in the global Onet.xml file.

For more information about creating new list types, see How to: Create a Custom List Definition.

DocumentTemplates Element

The DocumentTemplates section defines the document templates that are listed in the UI for creating a document library. This markup is still supported only for backward compatibility. You should define new document types as content types. For more information, see the Content Types section of this SDK.

<DocumentTemplates>
  ...
  <DocumentTemplate 
    Path="STS" 
    DisplayName="$Resources:core,doctemp_Word;" 
    Type="121" 
    Default="TRUE" 
    Description="$Resources:core,doctemp_Word_Desc;">
    <DocumentTemplateFiles>
      <DocumentTemplateFile 
        Name="doctemp\word\wdtmpl.dotx" 
        TargetName="Forms/template.dotx" 
        Default="TRUE" />
    </DocumentTemplateFiles>
  </DocumentTemplate>
  ...
</DocumentTemplates>

Each DocumentTemplate element specifies a display name, a unique identifier, and a description for the document template. If Default is set to TRUE, the template is the default template selected for document libraries that are created in sites based on one of the configurations in the site definition. Despite its singular name, a DocumentTemplate element actually can contain a collection of DocumentTemplateFile elements. The Name attribute of each DocumentTemplateFile element specifies the relative path to a local file that serves as the template. The TargetName attribute specifies the destination URL of the template file when a document library is created. The Default attribute specifies whether the file is the default template file.

Note: An Onet.xml file in a web template cannot have a DocumentTemplate element.

For a development task that involves document templates, see How to: Add a Document Template, File Type, and Editing Application to a Site Definition.

BaseTypes Element

The BaseTypes element of the global Onet.xml file is used during site or list creation to define the basic list types on which all list definitions in SharePoint Foundation are based. Each list template that is specified in the list templates section is identified with one of the base types: Generic List, Document Library, Discussion Forum, Vote or Survey, or Issues List.

Note: In SharePoint Foundation the BaseTypes section is implemented only in the global Onet.xml file, from which the following example is taken.

<BaseTypes>
  <BaseType 
    Title="Generic List" 
    Image="/_layouts/images/itgen.gif" 
    Type="0">
      <MetaData>
        <Fields>
          <Field 
            ID="{1d22ea11-1e32-424e-89ab-9fedbadb6ce1}" 
            ColName="tp_ID" 
            RowOrdinal="0" 
            ReadOnly="TRUE" 
            Type="Counter" 
            Name="ID" 
            PrimaryKey="TRUE" 
            DisplayName="$Resources:core,ID" 
            SourceID="http://schemas.microsoft.com/sharepoint/v3" 
            StaticName="ID">
          </Field>
          <Field 
            ID="{03e45e84-1992-4d42-9116-26f756012634}" 
            RowOrdinal="0" 
            Type="ContentTypeId" 
            Sealed="TRUE" 
            ReadOnly="TRUE" 
            Hidden="TRUE" 
            DisplayName="$Resources:core,Content_Type_ID;"
            Name="ContentTypeId" 
            DisplaceOnUpgrade="TRUE"
            SourceID="http://schemas.microsoft.com/sharepoint/v3" 
            StaticName="ContentTypeId" 
            ColName="tp_ContentTypeId">
          </Field>
          ...
      </Fields>
    </MetaData>
  </BaseType>
  ...
</BaseTypes>

Each BaseType element specifies the fields used in lists that are derived from the base type. The Type attribute of each Field element identifies the field with a field type that is defined in FldTypes.xml.

Caution: Do not modify the contents of the global Onet.xml file; doing so can break the installation. Base list types cannot be added. For information about how to add a list definition, see How to: Create a Custom List Definition.

Configurations Element

Each Configuration element in the Configurations section specifies the lists, modules, and Features that are created by default when the site definition configuration or web template is instantiated.

<Configurations>
  ...
  <Configuration 
    ID="0" 
    Name="Default">
    <Lists>
      <List 
        FeatureId="00BFEA71-E717-4E80-AA17-D0C71B360101" 
        Type="101" 
        Title="$Resources:core,shareddocuments_Title;" 
        Url="$Resources:core,shareddocuments_Folder;" 
        QuickLaunchUrl="$Resources:core,shareddocuments_Folder;/Forms/AllItems.aspx" />
      ...
    </Lists>
    <Modules>
      <Module 
        Name="Default" />
    </Modules>
    <SiteFeatures>
      <Feature 
        ID="00BFEA71-1C5E-4A24-B310-BA51C3EB7A57" />
      <Feature 
        ID="FDE5D850-671E-4143-950A-87B473922DC7" />
    </SiteFeatures>
    <WebFeatures>
      <Feature 
        ID="00BFEA71-4EA5-48D4-A4AD-7EA5C011ABE5" />
      <Feature 
        ID="F41CC668-37E5-4743-B4A8-74D1DB3FD8A4" />
    </WebFeatures>
  </Configuration>
  ...
</Configurations>

The ID attribute identifies the configuration (uniquely, relative to the other configurations, if any, within the Configurations element). If the Onet.xml file is part of a site definition, the ID value corresponds to the ID attribute of a Configuration element in a WebTemp*.xml file. (Web templates do not have WebTemp*.xml files.)

Each List element specifies the title of the list definition and the URL for where to create the list. You can use the QuickLaunchUrl attribute to set the URL of the view page to use when adding a link in the Quick Launch to a list that is created from the list definition. The value of the Type attribute corresponds to the Type attribute of a template in the list templates section. Each Module element specifies the name of a module that is defined in the modules section.

The SiteFeatures element and the WebFeatures element contain references to site collection and site-scoped Features to include in the site definition.

For post-processing capabilities, use an ExecuteUrl element within a Configuration element to specify the URL that is called following instantiation of the site.

For more information about definition configurations, see How to: Create a Custom Site Definition and Configuration.

Modules Element

The Modules collection specifies a pool of modules. Any module in the pool can be referenced by a configuration if the module should be included in websites that are created from the configuration. Each Module element in turn specifies one or more files to include, often for Web Parts, which are cached in memory on the front-end web server along with the schema files. You can use the Url attribute of the Module element to provision a folder as part of the site definition. This markup is supported only for backward compatibility. New modules should be incorporated into Features.

<Modules>
    <Module 
      Name="Default" 
      Url="" 
      Path="">
      <File 
        Url="default.aspx" 
        NavBarHome="True">
        <View 
          List="$Resources:core,lists_Folder;/$Resources:core,announce_Folder;" 
          BaseViewID="0" 
          WebPartZoneID="Left" />
        <View 
          List="$Resources:core,lists_Folder;/$Resources:core,calendar_Folder;" 
          BaseViewID="0" 
          RecurrenceRowset="TRUE" 
          WebPartZoneID="Left" 
          WebPartOrder="2" />
        <AllUsersWebPart 
          WebPartZoneID="Right" 
          WebPartOrder="1"><![CDATA[<WebPart 
            xmlns="http://schemas.microsoft.com/WebPart/v2"
            xmlns:iwp="http://schemas.microsoft.com
            /WebPart/v2/Image">
            <Assembly>Microsoft.SharePoint, Version=12.0.0.0, 
              Culture=neutral, 
              PublicKeyToken=71e9bce111e9429c</Assembly>
            <TypeName>Microsoft.SharePoint.WebPartPages.ImageWebPart
            </TypeName>
            <FrameType>None</FrameType>
            <Title>$Resources:wp_SiteImage;</Title>
            <iwp:ImageLink>/_layouts/images/homepage.gif
            </iwp:ImageLink>
            <iwp:AlternativeText>$Resources:core,sitelogo_wss;
            </iwp:AlternativeText>
            </WebPart>]]>
        </AllUsersWebPart>
        <View 
          List="$Resources:core,lists_Folder;/$Resources:core,links_Folder;" 
          BaseViewID="0" 
          WebPartZoneID="Right" 
          WebPartOrder="2" />
          <NavBarPage 
            Name="$Resources:core,nav_Home;" 
            ID="1002" 
            Position="Start" />
          <NavBarPage 
            Name="$Resources:core,nav_Home;" 
            ID="0" 
            Position="Start" />
      </File>
    </Module>
  ...
</Modules>

The Module element specifies a name for the module, which corresponds to a module name that is specified within a configuration in Onet.xml.

The Url attribute of each File element in a module specifies the name of a file to create when a site is created. When the module includes a single file, such as default.aspx, NavBarHome=”TRUE” specifies that the file will serve as the destination page for the Home link in navigation bars. The File element for default.aspx also specifies the Web Parts to include on the home page and information about the home page for other pages that link to it.

A Module element can only be in an Onet.xml file that is part of a site definition, not in an Onet.xml file that is part of a web template.

For more information about using modules in SharePoint Foundation, see How to: Provision a File.

Components Element

The Components element specifies components to include in sites that are created through the definition.

<Components>
  <FileDialogPostProcessor ID="BDEADEE4-C265-11d0-BCED-00A0C90AB50F" />
</Components>

A Components element can only be included in an Onet.xml file that is part of a site definition, not in an Onet.xml file that is part of a web template.

ServerEmailFooter Element

The ServerEmailFooter element specifies the footer section used in email that is sent from the server.

<ServerEmailFooter>$Resources:ServerEmailFooter;</ServerEmailFooter>

A ServerEmailFooter element can only be included in an Onet.xml file that is part of a site definition, not in an Onet.xml file that is part of a web template.

How To : Use the Office 365 API Client Libraries (Javascript and .Net)


One of the cool things with today’s Office 365 API Tooling update is that you can now access the Office 365 APIs using libraries available for .NET and JavaScript.

 

These libraries make it easier to interact with the REST APIs from the device or platform of your choice. And when I say platform of your choice, it really is! The Office 365 API and the client libraries support the following project types in Visual Studio today:

  1. .NET Windows Store Apps
  2. .NET Windows Store Universal Apps
  3. Windows Forms Applications
  4. WPF Applications
  5. ASP.NET MVC Web Applications
  6. ASP.NET Web Forms Applications
  7. Xamarin Android and iOS Applications
  8. Multi-device Hybrid Apps

P.S. Support for more project types is on the way…

Few Things Before We Get Started

  • The authentication library is released as “alpha”.
    • If you don’t see something you want or if you think we missed addressing some scenarios/capabilities, let us know!
    • In this initial release of the authentication library, we focused on simplifying the getting-started experience, especially for Office 365 services, and not so much on interoperability across other services (that support OAuth); that’s something we can look at in future updates to make it more generic.
  • The library is not meant to replace Active Directory Authentication Library (ADAL) but it is a wrapper over it (where it exists) which gives you a focused getting started experience.
    • However, If you want to opt out and go “DIY”, you still can.

Setting Up Authentication

The first step to accessing Office 365 APIs via the client library is to get authenticated with Office 365.

Once you configure the required Office 365 service and its permissions, the tool will add the required client libraries for authentication and the service into your project.

Let’s quickly look at what authenticating your client looks like.

Getting Authenticated

Office 365 APIs use the OAuth Common Consent Framework for authentication and authorization.

Below is the code to authenticate your .NET application:

Authenticator authenticator = new Authenticator();

AuthenticationInfo authInfo =
await authenticator.AuthenticateAsync(ExchangeResourceId);

Below is the JS code snippet used for authentication in Cordova projects:

var authContext = new O365Auth.Context();
authContext.getIdToken('https://outlook.office365.com/')
.then((function (token) {
    var client = new Exchange.Client('https://outlook.office365.com/ews/odata', 
                         token.getAccessTokenFn('https://outlook.office365.com'));
    client.me.calendar.events.getEvents().fetch()
        .then(function (events) {
            // get currentPage of events and logout
            var myevents = events.currentPage;
            authContext.logOut();
        }, function (reason) {
            // handle error
        });
}).bind(this), function (reason) {
    // handle error
});

Authenticator Class

The Authenticator class initializes the key stuff required for authentication:

1) Office 365 app client Id

2) Redirect URI

3) Authentication URI

You can find these settings in:

– For Web Applications – web.config

– For Windows Store Apps – App.xaml

– For Desktop Applications (Windows Forms & WPF) – AssemblyInfo.cs/.vb

– For Xamarin Applications – AssemblyInfo.cs

If you would like to provide these values at runtime rather than from the config files, you can do so by using the alternate constructor of the Authenticator class.


To authenticate, you call the AuthenticateAsync method by passing the service’s resource Id:

AuthenticationInfo authInfo = await authenticator.AuthenticateAsync(ExchangeResourceId);

If you are using the discovery service, you can specify the capability instead of the resource Id:

AuthenticationInfo authInfo =
await authenticator.AuthenticateAsync("Mail", ServiceIdentifierKind.Capability);

If you use the discovery service, the strings to use for the other services are Calendar, Contacts, and MyFiles.

NOTE:

– For now, if you want to use the discovery service, you will also need to configure a SharePoint resource, either Sites or My Files. This is because the discovery service currently uses SharePoint resource Id.

– Active Directory Graph & Sites do not support the discovery service yet

Depending on your client, AuthenticateAsync will open the appropriate window for you to authenticate:

– For web applications, you will be redirected to login page to authenticate

– For Windows Store Apps, you will get dialog box to authenticate

– For desktop apps, you will get a dialog window to authenticate


AuthenticationInfo Class

Once you have successfully authenticated, the method returns an AuthenticationInfo object, which helps you get the required access token:

ExchangeClient client =
new ExchangeClient(new Uri(ExchangeServiceRoot), authInfo.GetAccessToken);

It also helps you re-authenticate for a different resource when you create the service client:

AuthenticationInfo graphAuthInfo =
    await authInfo.ReauthenticateAsync("https://graph.windows.net/");

The library automatically handles token lifetime management by monitoring the expiration time of the access token and performing a refresh automatically.

That’s it! Now you can make subsequent calls to the service to return the items you want.

Authentication Library

For .NET projects:

The library is available as a NuGet package, so if you want to add it manually to your project without the tool, you can do so. However, you will have to manually register an app in Azure Active Directory to authenticate against AAD.

Microsoft Office 365 Authentication Library for ASP.NET

Microsoft Office 365 Authentication Library for .NET (Android and iOS)


For Cordova projects:

You will need to use the Office 365 API tool, which generates the aadgraph.js file under the Scripts folder that handles authentication.

How To : Develop a Single-page Application in SharePoint


A single-page application in SharePoint

This app will be a single-page app, heavily JavaScript-based, taking advantage of AJAX and web services. As mentioned earlier, we’re going to base this app on the TodoMVC project. More specifically, we’re going to use the Knockout version of the TodoMVC app. So download the Knockout TodoMVC app from GitHub here and incorporate it into your project as follows:

  • Copy the js and bower_components folders into the Scripts folder. To do this quickly:
    1. Copy the folders in Windows Explorer
    2. In Visual Studio, enable Project -> Show All Files
    3. The folders will now appear in Solution Explorer. Right click bower_components, and select Include In Project. Do the same for the js folder.
  • Copy the contents of index.html into Views/Index.cshtml (discard whatever is there).
  • Open Views/Index.cshtml and edit the script and css references to point to the Scripts folder (eg. find any references to bower_components and change it to Scripts/bower_components, do the same for js)
  • Open the Shared/_Layout.cshtml file and replace its contents with a single call to @RenderBody():

Your solution should now look like the following:

Hit F5 and you should now have a running TodoMVC app!

Data Storage

In previous articles I have described how to use the Azure database for storage of your data. In a provider-hosted app, you can equally as easily store your data in your own database. However, performance issues aside, it’s really handy to store data in the customer’s SharePoint system itself where possible. This has a number of advantages:

  • Security – you don’t need to store customer data in your own data center
  • App interoperability – if you have multiple apps talking to SharePoint, the architecture will be greatly simplified by using SharePoint as the central data store
  • Transparency – customers can see their data transparently in lists and understand better what data is stored and how it’s used
  • Tight SharePoint integration – this is often useful since you can later easily take advantage of features such as workflows and event receivers.

In this article therefore I’ll use SharePoint lists for data storage. Let’s create a list to use for storage of our Todo items. Firstly, right-click on the TodoApp project, and select Add -> New Item. Choose List, and call it TodoList.

Click Add. On the next dialog, leave the list template as Default and click Finish.

Now you’ll be presented with a list designer form. By default it already has a Title column, so just add another column called Completed, which should be a Boolean.

In my case, Completed was available as a site column. However, it was hidden by default. To remedy this, open up Schema.xml and ensure the Completed field has Hidden="FALSE".

Open up the TodoListInstance designer again, click the Views tab, and add the Completed column to the default view.

Now we’re going to add the server-side code for retrieving the list items. Firstly, add a class called TodoItemViewModel and give it the relevant properties:

    public class TodoItemViewModel
    {
        public string Title { get; set; }
        public bool Completed { get; set; }
    }
    

Next, open up the HomeController class and change the Index method to load the contents of the TodoList:

    [SharePointContextFilter]
    public ActionResult Index()
    {
        List<TodoItemViewModel> result = new List<TodoItemViewModel>();
        var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
        using (var clientContext = spContext.CreateAppOnlyClientContextForSPAppWeb())
        {
            if(clientContext != null)
            {
                //Load list items
                List list = clientContext.Web.Lists.GetByTitle("TodoList");
                ListItemCollection items = list.GetItems(CamlQuery.CreateAllItemsQuery());
                clientContext.Load(items);
                clientContext.ExecuteQuery();
                //Create the Todo item view models
                result = items.Select(li => new TodoItemViewModel()
                {
                    Title = (string)li["Title"],
                    Completed = (bool)li["Completed"]
                }).ToList();
            }
        }
        return View(result); //Pass the items into the view
    }
    

You may notice we’re using CreateAppOnlyClientContextForSPAppWeb. This means we are accessing the list under the app identity, not the user identity, so that we don’t need to require the users themselves to have any special permissions. This should be an important consideration throughout your app, as different resources may be available only to certain users, and this will form a crucial part of your security model.

Now, since we want to access the list under the app identity, we need to enable that: open AppManifest.xml under TodoApp, click the Permissions tab, and check the option to allow app-only calls to SharePoint.

Client Side Code with Knockout.js

Now we move to the client side. We’re going to do something a bit scary and rewrite the app.js file, which contains all of the Todo app JavaScript we imported. We’re going to rewrite it so that firstly, we understand it, and secondly, we can integrate it with our AJAX methods more easily. You can always go back to study the original later on as it’ll be more advanced and refined.

If you are new to the Knockout way of data-binding in JavaScript, it might be a good time to visit the knockout.js website to familiarise yourself with it. It’s very powerful yet pretty easy to pick up, and it makes writing JavaScript applications incredibly fast and easy.

So open app.js, delete what’s already there, and let’s start by creating a simple ViewModel for each Todo item:

    window.TodoApp = window.TodoApp || {};
    
    window.TodoApp.Todo = function (id, title, completed) {
        var me = this;
        this.id = ko.observable(id);
        this.title = ko.observable(title);
        this.completed = ko.observable(completed || false);
        this.editing = ko.observable(false);

        // edit an item
        this.startEdit = function () {
            me.editing(true);
        };

        // stop editing an item
        this.stopEdit = function (data, event) {
            if (event.keyCode == 13) {
                me.editing(false);
            }
            return true;
        };
    };

    

Next up we add the main ViewModel:

    window.TodoApp.ViewModel = function (spHostUrl) {
        var me = this;
        this.todos = ko.observableArray(); //List of todos
        this.current = ko.observable(); // store the new todo value being entered
        this.showMode = ko.observable('all'); //Current display mode

        //List which is currently displayed
        this.filteredTodos = ko.computed(function () {
            switch (me.showMode()) {
            case 'active':
                return me.todos().filter(function (todo) {
                    return !todo.completed();
                });
            case 'completed':
                return me.todos().filter(function (todo) {
                    return todo.completed();
                });
            default:
                return me.todos();
            }
        });        

        this.addTodo = function (todo) {
            me.todos.push(todo);
        }

        // add a new todo, when enter key is pressed
        this.add = function (data, event) {
            if (event.keyCode == 13) {
                var current = me.current().trim();
                if (current) {        
                    var todo = new window.TodoApp.Todo(0, current);
                    me.addTodo(todo);
                    me.current('');
                }
            }
            return true;
        };

        // remove a single todo
        this.remove = function (todo) {
            me.todos.remove(todo);
        };

        // remove all completed todos
        this.removeCompleted = function () {
            var todos = me.todos().slice(0);
            for (var i = 0; i < todos.length; i++) {
                if (todos[i].completed()) {
                    me.remove(todos[i]);
                }
            }
        };

        // count of all completed todos
        this.completedCount = ko.computed(function () {
            return me.todos().filter(function (todo) {
                return todo.completed();
            }).length;
        });

        // count of todos that are not complete
        this.remainingCount = ko.computed(function () {
            return me.todos().length - me.completedCount();
        });

        // writeable computed observable to handle marking all complete/incomplete
        this.allCompleted = ko.computed({
            //always return true/false based on the done flag of all todos
            read: function () {
                return !me.remainingCount();
            },
            // set all todos to the written value (true/false)
            write: function (newValue) {
                me.todos().forEach(function (todo) {
                    // set even if value is the same, as subscribers are not notified in that case
                    todo.completed(newValue);
                });
            }
        });
    };

Take a read through the above JavaScript – I have tried to simplify it from the original TodoMVC Knockout code so it should be a little more understandable if you’re new to Knockout.

Next we’re going to change the HTML to match our updated JavaScript. Open Index.cshtml and replace the entire contents of the <body> tag with the following:

    <section id="todoapp">
        <header id="header">
            <h1>todos</h1>
            <input id="new-todo" data-bind="value: current, valueUpdate: 'afterkeydown', event: { keypress: add }" placeholder="What needs to be done?" autofocus>
        </header>
        <section id="main" data-bind="visible: todos().length">
            <input id="toggle-all" data-bind="checked: allCompleted" type="checkbox">
            <label for="toggle-all">Mark all as complete</label>
            <ul id="todo-list" data-bind="foreach: filteredTodos">
                <li data-bind="css: { completed: completed, editing: editing }">
                    <div class="view">
                        <input class="toggle" data-bind="checked: completed" type="checkbox">
                        <label data-bind="text: title, event: { dblclick: startEdit }"></label>
                        <button class="destroy" data-bind="click: $root.remove"></button>
                    </div>
                    <input class="edit" data-bind="value: title, valueUpdate: 'afterkeydown', event: { keypress: stopEdit }">
                </li>
            </ul>
        </section>
        <footer id="footer" data-bind="visible: completedCount() || remainingCount()">
            <span id="todo-count">
                <strong data-bind="text: remainingCount">0</strong> item(s) left
            </span>
            <ul id="filters">
                <li>
                    <a data-bind="css: { selected: showMode() == 'all' }, click: function(){showMode('all');}">All</a>
                </li>
                <li>
                    <a data-bind="css: { selected: showMode() == 'active' }, click: function(){showMode('active');}">Active</a>
                </li>
                <li>
                    <a data-bind="css: { selected: showMode() == 'completed' }, click: function(){showMode('completed');}">Completed</a>
                </li>
            </ul>
            <button id="clear-completed" data-bind="visible: completedCount, click: removeCompleted">
                Clear completed (<span data-bind="text: completedCount"></span>)
            </button>
        </footer>
    </section>
    <footer id="info">
        <p>Double-click to edit a todo</p>
        <p>Written by <a href="https://github.com/ashish01/knockoutjs-todos">Ashish Sharma</a> and <a href="http://knockmeout.net">Ryan Niemeyer</a></p>
        <p>Part of <a href="http://todomvc.com">TodoMVC</a></p>
    </footer>
    <script src="Scripts/bower_components/todomvc-common/base.js"></script>
    <script src="Scripts/bower_components/knockout.js/knockout.debug.js"></script>
    <script src="Scripts/jquery-1.10.2.js"></script>
    <script src="Scripts/js/app.js"></script>
        
    <script>
        var viewModel = new window.TodoApp.ViewModel();
        ko.applyBindings(viewModel);
    </script>

Again, this is slightly simplified from the original TodoMVC code.

You should now be able to run the app as before. In the next step we’ll add the client-side code required to load the data back from the server. Update the final script block in Index.cshtml to the following:

        
    <script>
        var model = @(Html.Raw(Json.Encode(Model)));
        var viewModel = new window.TodoApp.ViewModel();

        for(var i=0; i<model.length; i++) {
            viewModel.addTodo(new window.TodoApp.Todo(model[i].Id, model[i].Title, model[i].Completed));
        }

        ko.applyBindings(viewModel);
    </script>

    

Now we’re ready to give the app a quick test. Hit F5 and wait for your app to open. Since we don’t yet have a way of saving data, we’re going to add some directly into the SharePoint list. Just browse to the list using the URL /[YourSPsite]/TodoApp/Lists/TodoList/ and you should see the standard list UI:

Add a couple of test items and refresh your app:

Saving Data via AJAX

The final piece of the puzzle is to react to user input and save the data back to the server. When an item is deleted, we’ll need to know the Id of the SharePoint ListItem which should be deleted. Therefore, we’ll add an Id property to the TodoItemViewModel class:

    public class TodoItemViewModel
    {
        public int Id { get; set; } //New!
        public string Title { get; set; }
        public bool Completed { get; set; }
    }
    

Don’t forget to set the Id property when we’re loading the items in the HomeController Index method:
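Here’s a minimal sketch of that mapping; it matches the GetItems refactoring shown later in this article:

    result = items.ToArray().Select(li => new TodoItemViewModel()
    {
        Id = (int)li.Id, // New! Carry the SharePoint ListItem Id through to the client
        Title = (string)li["Title"],
        Completed = (bool)li["Completed"]
    }).ToList();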

Now we’ll create web service methods for adding, deleting and updating an item. Add these methods to HomeController:

    [HttpPost]
    public JsonResult AddItem(string title)
    {
        int newItemId = 0;
        var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
        using (var clientContext = spContext.CreateAppOnlyClientContextForSPAppWeb())
        {
            if (clientContext != null)
            {
                List list = clientContext.Web.Lists.GetByTitle("TodoList");
                ListItem listItem = list.AddItem(new ListItemCreationInformation());
                listItem["Title"] = title;
                listItem.Update();
                clientContext.Load(listItem, li => li.Id);
                clientContext.ExecuteQuery();
                newItemId = listItem.Id;
            }
        }
        return Json(newItemId, JsonRequestBehavior.AllowGet);
    }

    [HttpPost]
    public JsonResult RemoveItem(int id)
    {
        var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
        using (var clientContext = spContext.CreateAppOnlyClientContextForSPAppWeb())
        {
            if (clientContext != null)
            {
                List list = clientContext.Web.Lists.GetByTitle("TodoList");
                ListItem item = list.GetItemById(id);
                item.DeleteObject();
                clientContext.ExecuteQuery();
            }
        }
        return new JsonResult();
    }

    [HttpPost]
    public JsonResult UpdateItem(int id, string title, bool completed)
    {
        var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
        using (var clientContext = spContext.CreateAppOnlyClientContextForSPAppWeb())
        {
            if (clientContext != null)
            {
                List list = clientContext.Web.Lists.GetByTitle("TodoList");
                ListItem item = list.GetItemById(id);
                item["Title"] = title;
                item["Completed"] = completed;
                item.Update();
                clientContext.ExecuteQuery();
            }
        }
        return new JsonResult();
    }
    

The above methods use fairly simple CSOM code to add, remove and update a list item. You could accomplish the same goal with the JavaScript version of the CSOM, which has the added benefit of calling SharePoint directly without going via the app server. However, I haven’t done that here because I want to illustrate how to make AJAX calls between the app client and server.
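For reference, here’s a rough sketch of the same add operation using the JavaScript CSOM, assuming sp.js is already loaded on the page; addTodoViaJsom and its done callback are illustrative names, not part of this project:

    function addTodoViaJsom(title, done) {
        var ctx = SP.ClientContext.get_current();
        var list = ctx.get_web().get_lists().getByTitle("TodoList");
        var item = list.addItem(new SP.ListItemCreationInformation());
        item.set_item("Title", title);
        item.update();
        ctx.load(item);
        ctx.executeQueryAsync(
            function () { done(item.get_id()); },                          // success: hand back the new Id
            function (sender, args) { console.log(args.get_message()); }  // failure
        );
    }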

Now we’ll add the JavaScript code to call these server methods when an add, update or delete occurs. Earlier you may have noticed that we included a reference to the jQuery script; that’s because we’re going to use jQuery’s AJAX helper methods.

Now all we need to do is add code to react to the add, remove and update events, and call the server methods. The first step is to open HomeController.cs and add the SPHostUrl, which is a crucial value for authentication, to the ViewBag:
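A minimal sketch of that change, at the top of the Index action; the SharePointContext helper comes with the Visual Studio project template, and the same two lines appear later in the GetItems refactoring:

    var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
    ViewBag.SPHostUrl = spContext.SPHostUrl; // Make the host URL available to the view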

The purpose of this is so that we can access SPHostUrl on the client, and pass it back to the server during AJAX requests. The authentication cookie will also be passed, and together this forms everything required for authentication to take place.

Next, open up the Index.cshtml file and update the last script block to match the following:

    <script>
        var model = @(Html.Raw(Json.Encode(Model)));
        
        var spHostUrl = '@ViewBag.SPHostUrl'; //<-- Add this line
        var viewModel = new window.TodoApp.ViewModel(spHostUrl); //<-- Pass SPHostUrl into the ViewModel

        for(var i=0; i<model.length; i++) {        
            viewModel.addTodo(new window.TodoApp.Todo(model[i].Id, model[i].Title, model[i].Completed));
        }
        ko.applyBindings(viewModel);
    </script>
    

Next we’ll update the app.js code to call the web services at appropriate times by adding HTTP requests using jQuery’s $.ajax helper. Firstly, update the add function:

    this.add = function (data, event) {
        if (event.keyCode == 13) {
            var current = me.current().trim();
            if (current) {
                var todo = new window.TodoApp.Todo(0, current);
                me.addTodo(todo);
                me.current('');

                $.ajax({
                    type: 'POST',
                    url: "/Home/AddItem?SPHostUrl=" + encodeURIComponent(spHostUrl),
                    contentType: "application/json; charset=utf-8",
                    data: JSON.stringify({
                        title: todo.title()
                    }),
                    dataType: "json",
                    success: function (id) {
                        todo.id(id);
                    }
                });
            }
        }
        return true;
    };
    

Next up is the remove function:

    this.remove = function (todo) {
        me.todos.remove(todo);
        $.ajax({
            type: 'POST',
            url: "/Home/RemoveItem?SPHostUrl=" + encodeURIComponent(spHostUrl),
            contentType: "application/json; charset=utf-8",
            data: JSON.stringify({
                id: todo.id()
            }),
            dataType: "json"
        });
    };
    

Now that additions and deletions are being handled, it only remains to handle updates. We will do this by adding code within the addTodo function that listens for changes to the title or completed properties, and submits a change to the server. The updated addTodo function should look like this:

    this.addTodo = function (todo) {
        me.todos.push(todo);
        var adding = true;
        ko.computed(function () {
            var title = todo.title(),
                completed = todo.completed();
            if (!adding) {
                $.ajax({
                    type: 'POST',
                    url: "/Home/UpdateItem?SPHostUrl=" + encodeURIComponent(spHostUrl),
                    contentType: "application/json; charset=utf-8",
                    data: JSON.stringify({
                        id: todo.id(),
                        title: title,
                        completed: completed
                    }),
                    dataType: "json"
                });
            }
        });
        adding = false;
    }
    

In the code above, we add a Knockout computed property which listens to the title and completed observable properties. If either value changes, we make a call to the UpdateItem web service. Note that we use a flag called adding to ensure we don’t call UpdateItem during the initial item addition.

Run the project again. If all has gone to plan, you should hopefully be able to insert, edit and delete items and have them immediately saved back to SharePoint.

Tidying Up

It’s time to do some tidying up to make sure we’re following MVC conventions correctly. Firstly, we should bundle our scripts and styles so they’re loaded as efficiently as possible. Open up App_Start\BundleConfig.cs and replace its contents with the following:

    public static void RegisterBundles(BundleCollection bundles)
    {
        bundles.Add(new ScriptBundle("~/bundles/scripts").Include(
                    "~/Scripts/bower_components/todomvc-common/base.js",
                    "~/Scripts/bower_components/knockout.js/knockout.debug.js",
                    "~/Scripts/jquery-{version}.js",
                    "~/Scripts/js/app.js"));

        bundles.Add(new StyleBundle("~/Content/css").Include(
                    "~/Scripts/bower_components/todomvc-common/base.css"));
    }

This creates two bundles: a CSS bundle and a JavaScript bundle. The advantage of this is that in production, your scripts will be loaded in a single request, and can be easily minified. Update your Index.cshtml file by removing the existing script and style references and replacing them with references to your bundles in the <head> tag:

    <head>
        <meta charset="utf-8">
        <title>Knockout.js TodoMVC</title>
        @Styles.Render("~/Content/css")
        @Scripts.Render("~/bundles/scripts")
    </head>
    

Since your app doesn’t use the About or Contact pages which were included in the default project template, you can remove those:

Also, remove the associated methods within the HomeController class.

Adding an App Part (WebPart)

Adding a web part is really easy! You can create an app part to point to any page in your app, and it simply displays in an iframe inside your SharePoint site. In practice you’ll probably want to create a new page. We’re going to create a page that differs slightly in style from the app.

Now, this view will be identical to the main Index view except for styling. We want to avoid copy-pasting the original view into this one, so we’ll create a shared view that we can use for both.

Start by adding a new View by right-clicking the Views/Home folder and selecting Add -> View, and name it IndexCommon:

Now copy the entire contents of the body tag from Index to IndexCommon. Then you can reference your IndexCommon from Index using a call to @Html.Partial. Your Index.cshtml should look as follows:

    <!doctype html>
    <html lang="en" data-framework="knockoutjs">
    <head>
        <meta charset="utf-8">
        <title>Knockout.js TodoMVC</title>
        @Styles.Render("~/Content/css")
        @Scripts.Render("~/bundles/scripts")
    </head>
    <body>
        @Html.Partial("IndexCommon")
    </body>
    </html>
    

Add a new view called IndexTrimmed:

Again, use the same content as Index. This time, however, we’re going to make one change, which is to reference an alternative CSS file. So change the @Styles.Render tag as follows:
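Assuming the trimmed bundle is registered as ~/Content/css-trimmed (shown next), the reference in IndexTrimmed.cshtml becomes:

    @Styles.Render("~/Content/css-trimmed")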

Now open App_Start/BundleConfig.cs and add a new bundle:

    
    bundles.Add(new StyleBundle("~/Content/css-trimmed").Include(
                "~/Scripts/bower_components/todomvc-common/base-trimmed.css"));
    

And add the css file by copying base.css to base-trimmed.css:

You can make modifications to this CSS file at this point. I deleted the following rules:

  • #todoapp { margin: 130px 0 40px 0; } (the large title at the top)
  • body { margin: 0 auto; } (this caused page centering)
  • background: #eaeaea url('bg.png'); (grey background image)

I added these rules:

  • #info { display:none; } (to hide the info footer)

The next step is to add a Controller method for our new View. Open HomeController and add a method called IndexTrimmed. It should hold the exact same content as Index, so I’ve extracted it to a common method:

    [SharePointContextFilter]
    public ActionResult IndexTrimmed()
    {
        return View(GetItems()); //Pass the items into the view
    }

    [SharePointContextFilter]
    public ActionResult Index()
    {
        return View(GetItems()); //Pass the items into the view
    }

    private List<TodoItemViewModel> GetItems()
    {
        List<TodoItemViewModel> result = new List<TodoItemViewModel>();
        var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
        ViewBag.SPHostUrl = spContext.SPHostUrl;
        using (var clientContext = spContext.CreateAppOnlyClientContextForSPAppWeb())
        {
            if (clientContext != null)
            {
                //Load list items
                List list = clientContext.Web.Lists.GetByTitle("TodoList");
                ListItemCollection items = list.GetItems(CamlQuery.CreateAllItemsQuery());
                clientContext.Load(items);
                clientContext.ExecuteQuery();
                //Create the Todo item view models
                result = items.ToArray().Select(li => new TodoItemViewModel()
                {
                    Id = (int)li.Id,
                    Title = (string)li["Title"],
                    Completed = (bool)li["Completed"]
                }).ToList();
            }
        }
        return result;
    }

Now we’re ready to add the Web Part. Right click on TodoApp in solution explorer, and select Add -> New Item. Choose Client Web Part:

Give it a name and click Next. We want to use our new page, so enter its URL:

Open the TodoApp/TodoWebPart/Elements.xml file and change the default width from 300px to 600px:
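The result should look roughly like this sketch; the Name, Title and Description values are whatever the wizard generated for you, and the Src points at the IndexTrimmed page entered in the previous step:

    <?xml version="1.0" encoding="utf-8"?>
    <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
      <ClientWebPart Name="TodoWebPart" Title="TodoWebPart Title"
          Description="TodoWebPart Description" DefaultWidth="600" DefaultHeight="200">
        <Content Type="html" Src="~remoteAppUrl/Home/IndexTrimmed?{StandardTokens}" />
      </ClientWebPart>
    </Elements>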

Now run your app again. Make sure the entire app reinstalls, since the app part is installed to the SharePoint server itself. Add the app part by visiting your SharePoint site and clicking Page, then Edit:

Then select Insert -> App Part, and choose the TodoWebPart from the list. Click Add.

Click Save to save your changes to the page, and your web part should be shown:

Making Your Web Part Resize Dynamically

Now, you’ll notice that the Web Part isn’t resizing correctly when you add items to the list. This is because as the app itself gets larger, the App Part doesn’t get larger – it is after all an iframe with a fixed height. This isn’t a problem for apps that don’t resize, for example forms or fixed-length lists. However, we want our app to resize as though it were a normal part of the main website flow.

Luckily, there’s a workaround for this involving posting a message to the iframe and asking it to resize.

Add the following code to your HomeController, in the IndexTrimmed method:

    public ActionResult IndexTrimmed()
    {
        ViewBag.SenderId = HttpContext.Request.Params["SenderId"];
        return View(GetItems());
    }
    

The purpose of the above code is to retrieve the SenderId parameter from the URL, and make it available (via the ViewBag) to the client. This SenderId parameter represents the ID of the iframe which is hosting the app part.

Next, we’ll use the Sender ID in the client to request a resize of the iframe whenever the app resizes – more specifically, whenever an item is added to or removed from the list. This script should be added to the end of the body in IndexTrimmed.cshtml:

    //Retrieve the sender ID from the viewbag
    var senderId = '@ViewBag.SenderId';

    //Create a function that will change the height of the iframe
    function setHeight() {
        var width = $('#todoapp').outerWidth(true),
            height = $('#todoapp').outerHeight(true);

        //Notify SP that it should resize its iframe to the appropriate height.
        window.parent.postMessage('<message senderId=' + senderId + '>resize(' + width + ', ' + (height + 50) + ')</message>', "*");
    }

    //Listen for changes to the todo list, and call setHeight when that happens
    viewModel.todos.subscribe(setHeight);

    //Set the correct initial height when the app first loads.
    setHeight();
    

For the avoidance of doubt, here’s where this script should go:
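As a rough sketch, the bottom of IndexTrimmed.cshtml ends up like this, with the resize script (abbreviated to its first line here) following the partial:

    <body>
        @Html.Partial("IndexCommon")
        <script>
            var senderId = '@ViewBag.SenderId';
            // ... the setHeight function and subscription from the listing above ...
        </script>
    </body>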

The script is intentionally placed below the call to load the IndexCommon partial view, because then it will have access to the JavaScript viewModel object.

Packaging your app

With autohosted apps, the process is simpler: you just right-click on your app and select Publish. From there, you click Package and you are given a .app file. This app file contains everything involved: both the SharePoint app and the remote app (your Web project). It’s super simple because the autohosting process takes care of creating a Client ID and Client Secret (for OAuth authentication), and also takes care of physical hosting.

With provider-hosted apps, things are more complex, and the steps you follow depend on how you want to host your web project. Your first step will be to register your app to create a Client ID and Client Secret. Once you’ve done that, you can right-click your app project and click Package. Then follow the publish wizard, which will involve entering your Client ID and Client Secret.

Note that with your provider hosted app, you will deploy your Web project separately from your App project. This is obvious really: you’re going to be hosting your Web project yourself (you’re the provider, after all), and the small app project will be packaged up and listed on the Office Store or sent manually to a customer. Once they install it, it’ll just point to your web server where your web app is installed. To configure this properly, open AppManifest.xml in the App project, and enter the URL to where your app is installed:
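For example, here’s a sketch of the relevant part of AppManifest.xml, with apps.contoso.com standing in as a placeholder for your real hosting domain:

    <Properties>
      <Title>TodoApp</Title>
      <StartPage>https://apps.contoso.com/TodoApp/?{StandardTokens}</StartPage>
    </Properties>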

How To : Using the Proxy Pattern to implement Code Access Security

There are many ways to secure different parts of your application. The security of running code in .NET revolves around the concept of Code Access Security (CAS).

CAS

CAS determines the trustworthiness of an assembly based upon its origin and the characteristics of the assembly itself, such as its hash value. For example, code installed locally on the machine is more trusted than code downloaded from the Internet. The runtime will also validate an assembly’s metadata and type safety before that code is allowed to run.

 

There are many ways to write secure code and protect data using the .NET Framework. In this article, we explore controlling access to types using programmatic and declarative security.

 

Controlling Access to Types in a Local Assembly

 

Problem

You have an existing class that contains sensitive data, and you do not want clients to have direct access to any objects of this class. Instead, you want an intermediary object to talk to the clients and to allow access to sensitive data based on the client’s credentials. What’s more, you would also like to have specific queries and modifications to the sensitive data tracked, so that if an attacker manages to access the object, you will have a log of what the attacker was attempting to do.

Solution

Use the proxy design pattern to allow clients to talk directly to a proxy object. This proxy object will act as gatekeeper to the class that contains the sensitive data. To keep malicious users from accessing the class itself, make it private, which will at least keep code without the ReflectionPermissionFlag.MemberAccess permission (currently granted only in fully trusted code scenarios, such as executing code interactively on a local machine) from getting at it.

The namespaces we will be using are:

 
 
    using System;
    using System.IO;
    using System.Security;
    using System.Security.Permissions;
    using System.Security.Principal;

Let’s start this design by creating an interface, as shown in Example 17-1, “ICompanyData interface”, that will be common to both the proxy objects and the object that contains sensitive data.

Example 17-1. ICompanyData interface

 
 
internal interface ICompanyData
{
    string AdminUserName
    {
        get;
        set;
    }

    string AdminPwd
    {
        get;
        set;
    }
    string CEOPhoneNumExt
    {
        get;
        set;
    }
    void RefreshData();
    void SaveNewData();
}

The CompanyData class shown in Example 17-2, “CompanyData class” is the underlying object that is “expensive” to create.

Example 17-2. CompanyData class

 
 
internal class CompanyData : ICompanyData
{
    public CompanyData()
    {
        Console.WriteLine("[CONCRETE] CompanyData Created");
        // Perform expensive initialization here.
        this.AdminUserName = "admin";
        this.AdminPwd = "password";
        this.CEOPhoneNumExt = "0000";
    }

    public string AdminUserName
    {
        get;
        set;
    }

    public string AdminPwd
    {
        get;
        set;
    }

    public string CEOPhoneNumExt
    {
        get;
        set;
    }

    public void RefreshData()
    {
        Console.WriteLine("[CONCRETE] Data Refreshed");
    }

    public void SaveNewData()
    {
        Console.WriteLine("[CONCRETE] Data Saved");
    }
}

The code shown in Example 17-3, “CompanyDataSecProxy security proxy class” for the security proxy class checks the caller’s permissions to determine whether the CompanyData object should be created and its methods or properties called.

Example 17-3. CompanyDataSecProxy security proxy class

 
 
public class CompanyDataSecProxy : ICompanyData
{
    public CompanyDataSecProxy()
    {
        Console.WriteLine("[SECPROXY] Created");

        // Must set the principal policy first.
        AppDomain.CurrentDomain.SetPrincipalPolicy(PrincipalPolicy.WindowsPrincipal);
    }

    private ICompanyData coData = null;
    private PrincipalPermission admPerm =
        new PrincipalPermission(null, @"BUILTIN\Administrators", true);
    private PrincipalPermission guestPerm =
        new PrincipalPermission(null, @"BUILTIN\Guest", true);
    private PrincipalPermission powerPerm =
        new PrincipalPermission(null, @"BUILTIN\PowerUser", true);
    private PrincipalPermission userPerm =
        new PrincipalPermission(null, @"BUILTIN\User", true);

    public string AdminUserName
    {
        get
        {
            string userName = "";
            try
            {
                admPerm.Demand();
                Startup();
                userName = coData.AdminUserName;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("AdminUserName_get failed! {0}", e.ToString());
            }
            return (userName);
        }
        set
        {
            try
            {
                admPerm.Demand();
                Startup();
                coData.AdminUserName = value;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("AdminUserName_set failed! {0}", e.ToString());
            }
        }
    }

    public string AdminPwd
    {
        get
        {
            string pwd = "";
            try
            {
                admPerm.Demand();
                Startup();
                pwd = coData.AdminPwd;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("AdminPwd_get Failed! {0}",e.ToString());
            }

            return (pwd);
        }
        set
        {
            try
            {
                admPerm.Demand();
                Startup();
                coData.AdminPwd = value;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("AdminPwd_set Failed! {0}",e.ToString());
            }
        }
    }

    public string CEOPhoneNumExt
    {
        get
        {
            string ceoPhoneNum = "";
            try
            {
                admPerm.Union(powerPerm).Demand();
                Startup();
                ceoPhoneNum = coData.CEOPhoneNumExt;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("CEOPhoneNum_set Failed! {0}",e.ToString());
            }
            return (ceoPhoneNum);
        }
        set
        {
            try
            {
                admPerm.Demand();
                Startup();
                coData.CEOPhoneNumExt = value;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("CEOPhoneNum_set Failed! {0}",e.ToString());
            }
        }
    }
    public void RefreshData()
    {
        try
        {
            admPerm.Union(powerPerm.Union(userPerm)).Demand();
            Startup();
            Console.WriteLine("[SECPROXY] Data Refreshed");
            coData.RefreshData();
        }
        catch(SecurityException e)
        {
            Console.WriteLine("RefreshData Failed! {0}",e.ToString());
        }
    }

    public void SaveNewData()
    {
        try
        {
            admPerm.Union(powerPerm).Demand();
            Startup();
            Console.WriteLine("[SECPROXY] Data Saved");
            coData.SaveNewData();
        }
        catch(SecurityException e)
        {
            Console.WriteLine("SaveNewData Failed! {0}",e.ToString());
        }
    }

    // DO NOT forget to use [#define DOTRACE] to control the tracing proxy.
    private void Startup()
    {
        if (coData == null)
        {
#if (DOTRACE)
            coData = new CompanyDataTraceProxy();
#else
            coData = new CompanyData();
#endif
            Console.WriteLine("[SECPROXY] Refresh Data");
            coData.RefreshData();
        }
    }
}

When creating the PrincipalPermission objects as part of the object construction, you are using string representations of the built-in objects ("BUILTIN\Administrators") to set up the principal role. However, the names of these objects may differ depending on the locale the code runs under. It would be more appropriate to use the WindowsAccountType.Administrator enumeration value to ease localization, because this value is defined to represent the administrator role as well. We used text here to clarify what was being done, and also to access the PowerUsers role, which is not available through the WindowsAccountType enumeration.
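As an alternative sketch (not what this recipe does), you can sidestep the localized group-name strings entirely by checking the role through WindowsPrincipal.IsInRole with the WindowsBuiltInRole enumeration:

    // Locale-neutral role check using the current Windows identity.
    WindowsPrincipal principal = new WindowsPrincipal(WindowsIdentity.GetCurrent());
    if (!principal.IsInRole(WindowsBuiltInRole.Administrator))
    {
        throw new SecurityException("Caller is not an administrator.");
    }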

If the call to the CompanyData object passes through the CompanyDataSecProxy, then the user has permissions to access the underlying data. Any access to this data may be logged, so the administrator can check for any attempt to hack the CompanyData object. The code shown in Example 17-4, “CompanyDataTraceProxy tracing proxy class” is the tracing proxy used to log access to the various method and property access points in the CompanyData object (note that the CompanyDataSecProxy contains the code to turn this proxy object on or off).

Example 17-4. CompanyDataTraceProxy tracing proxy class

 
 
public class CompanyDataTraceProxy : ICompanyData
{    
    public CompanyDataTraceProxy() 
    {
        Console.WriteLine("[TRACEPROXY] Created");
        string path = Path.Combine(Path.GetTempPath(), "CompanyAccessTraceFile.txt");
        fileStream = new FileStream(path, FileMode.Append,
            FileAccess.Write, FileShare.None);
        traceWriter = new StreamWriter(fileStream);
        coData = new CompanyData();
    }

    private ICompanyData coData = null;
    private FileStream fileStream = null;
    private StreamWriter traceWriter = null;

    public string AdminPwd
    {
        get
        {
            traceWriter.WriteLine("AdminPwd read by user.");
            traceWriter.Flush();
            return (coData.AdminPwd);
        }
        set
        {
            traceWriter.WriteLine("AdminPwd written by user.");
            traceWriter.Flush();
            coData.AdminPwd = value;
        }
    }

    public string AdminUserName
    {
        get
        {
            traceWriter.WriteLine("AdminUserName read by user.");
            traceWriter.Flush();
            return (coData.AdminUserName);
        }
        set
        {
            traceWriter.WriteLine("AdminUserName written by user.");
            traceWriter.Flush(); 
            coData.AdminUserName = value;
        }
    }

    public string CEOPhoneNumExt
    {
        get
        {
            traceWriter.WriteLine("CEOPhoneNumExt read by user.");
            traceWriter.Flush();
            return (coData.CEOPhoneNumExt);
        }
        set
        {
            traceWriter.WriteLine("CEOPhoneNumExt written by user.");
            traceWriter.Flush();
            coData.CEOPhoneNumExt = value;
        }
    }

    public void RefreshData()
    {
        Console.WriteLine("[TRACEPROXY] Refresh Data");
        coData.RefreshData();
    }

    public void SaveNewData()
    {
        Console.WriteLine("[TRACEPROXY] Save Data");
        coData.SaveNewData();
    }
}

The proxy is used in the following manner:

 
 
    // Create the security proxy here.
    CompanyDataSecProxy companyDataSecProxy = new CompanyDataSecProxy();

    // Read some data.
    Console.WriteLine("CEOPhoneNumExt: " + companyDataSecProxy.CEOPhoneNumExt);

    // Write some data.
    companyDataSecProxy.AdminPwd = "asdf";
    companyDataSecProxy.AdminUserName = "asdf";

    // Save and refresh this data.
    companyDataSecProxy.SaveNewData();
    companyDataSecProxy.RefreshData();

Note that as long as the CompanyData object was accessible, you could have also written this to access the object directly:

 
 
    // Instantiate the CompanyData object directly without a proxy.
    CompanyData companyData = new CompanyData();

    // Read some data.
    Console.WriteLine("CEOPhoneNumExt: " + companyData.CEOPhoneNumExt);

    // Write some data.
    companyData.AdminPwd = "asdf";
    companyData.AdminUserName = "asdf";

    // Save and refresh this data.
    companyData.SaveNewData();
    companyData.RefreshData();

If these two blocks of code are run, the same fundamental actions occur: data is read, data is written, and data is updated/refreshed. This shows you that your proxy objects are set up correctly and function as they should.

Discussion

The proxy design pattern is useful for several tasks. The most notable, in COM, COM+ and .NET Remoting, is marshaling data across boundaries such as AppDomains or even across a network. To the client, a proxy looks and acts exactly the same as its underlying object; fundamentally, the proxy object is just a wrapper around the object.

 

A proxy can test the security and/or identity permissions of the caller before the underlying object is created or accessed. Proxy objects can also be chained together to form several layers around an underlying object. Each proxy can be added or removed depending on the circumstances.

 

For the proxy object to look and act the same as its underlying object, both should implement the same interface. The implementation in this recipe uses an ICompanyData interface on both the proxies (CompanyDataSecProxy and CompanyDataTraceProxy) and the underlying object (CompanyData). If more proxies are created, they, too, need to implement this interface.

 

The CompanyData class represents an expensive object to create. In addition, this class contains a mixture of sensitive and nonsensitive data that requires permission checks to be made before the data is accessed. For this recipe, the CompanyData class simply contains a group of properties to access company data and two methods for updating and refreshing this data. You can replace this class with one of your own and create a corresponding interface that both the class and its proxies implement.

 

The CompanyDataSecProxy object is the object that a client must interact with. This object is responsible for determining whether the client has the correct privileges to access the method or property that it is calling. The get accessor of the AdminUserName property shows the structure of the code throughout most of this class:

 
 
    public string AdminUserName
    {
        get
        {
            string userName = "";
            try
            {
                admPerm.Demand();
                Startup();
                userName = coData.AdminUserName;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("AdminUserName_get Failed! {0}", e.ToString());
            }
            return (userName);
        }
        set
        {
            try
            {
                admPerm.Demand();
                Startup();
                coData.AdminUserName = value;
            }
            catch(SecurityException e)
            {
                Console.WriteLine("AdminUserName_set Failed! {0}", e.ToString());
            }
        }
    }

Initially, a single permission (admPerm) is demanded. If this demand fails, a SecurityException, which is handled by the catch clause, is thrown. (Other exceptions will be handed back to the caller.) If the Demand succeeds, the Startup method is called. It is in charge of instantiating either the next proxy object in the chain (CompanyDataTraceProxy) or the underlying CompanyData object. The choice depends on whether the DOTRACE preprocessor symbol has been defined. You may use a different technique, such as a registry key, to turn tracing on or off if you wish.

 

This proxy class uses the private field coData to hold a reference to an ICompanyData type, which can be either a CompanyDataTraceProxy or the CompanyData object. This reference allows you to chain several proxies together.


The CompanyDataTraceProxy simply logs any access to the CompanyData object’s information to a text file. Since this proxy will not attempt to prevent a client from accessing the CompanyData object, the CompanyData object is created and explicitly called in each property and method of this object.

How To : Use JavaScript: Error Handling to Build More Efficient Windows Store Apps

Believe it or not, sometimes app developers write code that doesn’t work. Or the code works but is terribly inefficient and hogs memory. Worse yet, inefficient code results in a poor UX, driving users crazy and compelling them to uninstall the app and leave bad reviews.

I’m going to explore common performance and efficiency problems you might encounter while building Windows Store apps with JavaScript. In this article, I take a look at best practices for error handling using the Windows Library for JavaScript (WinJS). In a future article, I’ll discuss techniques for doing work without blocking the UI thread, specifically using Web Workers or the new WinJS.Utilities.Scheduler API in WinJS 2.0, as found in Windows 8.1. I’ll also present the new predictable-object lifecycle model in WinJS 2.0, focusing particularly on when and how to dispose of controls.

For each subject area, I focus on three things:

  • Errors or inefficiencies that might arise in a Windows Store app built using JavaScript.
  • Diagnostic tools for finding those errors and inefficiencies.
  • WinJS APIs, features and best practices that can ameliorate specific problems.

I provide some purposefully buggy code but, rest assured, I indicate in the code that something is or isn’t supposed to work.

I use Visual Studio 2013, Windows 8.1 and WinJS 2.0 for these demonstrations. Many of the diagnostic tools I use are provided in Visual Studio 2013. If you haven’t downloaded the most-recent versions of the tools, you can get them from the Windows Dev Center (bit.ly/K8nkk1). New diagnostic tools are released through Visual Studio updates, so be sure to check for updates periodically.

I assume significant familiarity with building Windows Store apps using JavaScript. If you’re relatively new to the platform, I suggest beginning with the basic “Hello World” example (bit.ly/vVbVHC) or, for more of a challenge, the Hilo sample for JavaScript (bit.ly/SgI0AA).

Setting up the Example

First, I create a new project in Visual Studio 2013 using the Navigation App template, which provides a good starting point for a basic multipage app. I also add a NavBar control (bit.ly/14vfvih) to the default.html page at the root of the solution, replacing the AppBar code the template provided. Because I want to demonstrate multiple concepts, diagnostic tools and programming techniques, I’ll add a new page to the app for each demonstration. This makes it much easier for me to navigate between all the test cases.

The complete HTML markup for the NavBar is shown in Figure 1. Copy and paste this code into your solution if you’re following along with the example.

Figure 1 The NavBar Control

<!-- The global navigation bar for the app. -->
<div id="navBar" data-win-control="WinJS.UI.NavBar">
  <div id="navContainer" 
       data-win-control="WinJS.UI.NavBarContainer">
    <div id="homeNav" 
      data-win-control="WinJS.UI.NavBarCommand"
      data-win-options="{
        location: '/pages/home/home.html',
        icon: 'home',
        label: 'Home page'
    }">
    </div>
    <div id="handlingErrors"
      data-win-control="WinJS.UI.NavBarCommand"
      data-win-options="{
        location: '/pages/handlingErrors/handlingErrors.html',
        icon: 'help',
        label: 'Handling errors'
    }">
    </div>
    <div id="chainedAsync"
      data-win-control="WinJS.UI.NavBarCommand"
      data-win-options="{
        location: '/pages/chainedAsync/chainedAsync.html',
        icon: 'link',
        label: 'Chained asynchronous calls'
    }">
    </div>
  </div>
</div>

For more information about building a navigation bar, check out some of the Modern Apps columns by Rachel Appel, such as the one at msdn.microsoft.com/magazine/dn342878.

You can run this project with just the navigation bar, except that clicking any of the navigation buttons will raise an exception in navigator.js. Later in this article, I’ll discuss how to handle errors that come up in navigator.js. For now, remember the app always starts on the homepage and you need to right-click the app to bring up the navigation bar.

Handling Errors

Obviously, the best way to avoid errors is to release apps that don’t raise errors. In a perfect world, every developer would write perfect code that never crashes and never raises an exception. That perfect world doesn’t exist.

As much as users prefer apps that are completely error-free, they are exceptionally good at finding new and creative ways to break apps—ways you never dreamed of. As a result, you need to incorporate robust error handling into your apps.

Errors in Windows Store apps built with JavaScript and HTML act just like errors in normal Web pages. When an error happens in a Document Object Model (DOM) object that allows for error handling (for example, the <script>, <style> or <img> elements), the onerror event for that element is raised. For errors in the JavaScript call stack, the error travels up the chain of calls until caught (in a try/catch block, for instance) or until it reaches the window object, raising the window.onerror event.
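For instance, here’s a plain-JavaScript sketch of both levels; the element id appLogo is hypothetical:

    // Element-level: react to an image that fails to load.
    document.getElementById("appLogo").onerror = function () {
        console.log("Image failed to load.");
    };

    // Window-level: last chance to catch errors that escape the call stack.
    window.onerror = function (message, url, line) {
        console.log("Unhandled error: " + message + " at " + url + ":" + line);
        return true; // Returning true marks the error as handled
    };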

WinJS provides several layers of error-handling opportunities for your code in addition to what’s already provided to normal Web pages by the Microsoft Web Platform. At a fundamental level, any error not trapped in a try/catch block or the onError handler applied to a WinJS.Promise object (in a call to the then or done methods, for example) raises the WinJS.Application.onerror event. I’ll examine that shortly.

In practice, you can listen for errors at other levels in addition to Application.onerror. With WinJS and the templates provided by Visual Studio, you can also handle errors at the page-control level and at the navigation level. When an error is raised while the app is navigating to and loading a page, the error triggers the navigation-level error handling, then the page-level error handling, and finally the application-level error handling. You can cancel the error at the navigation level, but any event handlers applied to the page error handler will still be raised.

In this article, I’ll take a look at each layer of error handling, starting with the most important: the Application.onerror event.

Application-Level Error Handling

WinJS provides the WinJS.Application.onerror event (bit.ly/1cOctjC), your app’s most basic line of defense against errors. It picks up all errors caught by window.onerror. It also catches promises that error out and any errors that occur in the process of managing app model events. Although you can apply an event handler to the window.onerror event in your app, you’re better off just using Application.onerror for a single queue of events to monitor.

Once the Application.onerror handler catches an error, you need to decide how to address it. There are several options:

  • For critical blocking errors, alert the user with a message dialog. A critical error is one that affects continued operation of the app and might require user input to proceed.
  • For informational and non-blocking errors (such as a failure to sync or obtain online data), alert the user with a flyout or an inline message.
  • For errors that don’t affect the UX, silently swallow the error.
  • In most cases, write the error to a tracelog (especially one that’s hooked up to an analytics engine) so you can acquire customer telemetry. For available analytics SDKs, visit the Windows services directory at services.windowsstore.com and click on Analytics (under “By service type”) in the list on the left.

For this example, I’ll stick with message dialogs. I open up default.js (/js/default.js) and add the code shown in Figure 2 inside the main anonymous function, below the handler for the app.oncheckpoint event.

Figure 2 Adding a Message Dialog

app.onerror = function (err) {
  var message = err.detail.errorMessage ||
    (err.detail.exception && err.detail.exception.message) ||
    "Indeterminate error";
  if (Windows.UI.Popups.MessageDialog) {
    var messageDialog =
      new Windows.UI.Popups.MessageDialog(
        message,
        "Something bad happened ...");
    messageDialog.showAsync();
    return true;
  }
}

In this example, the error event handler shows a message telling the user an error has occurred and what the error is. The event handler returns true to keep the message dialog open until the user dismisses it. (Returning true also informs the WWAHost.exe process that the error has been handled and it can continue.)

Now I’ll create some errors for this code to handle. I’ll create a custom error, throw the error and then catch it in the event handler. For this first example, I add a new folder named handlingErrors to the pages folder. In the folder, I add a new Page Control by right-clicking the project in Solution Explorer and selecting Add | New Item. When I add the handlingErrors Page Control to my project, Visual Studio provides three files in the handlingErrors folder (/pages/handlingErrors): handlingErrors.html, handlingErrors.js and handlingErrors.css.

I open up handlingErrors.html and add this simple markup inside the <section> tag of the body:

<!-- When clicked, this button raises a custom error. -->
<button id="throwError">Throw an error!</button>

Next, I open handlingErrors.js and add an event handler to the button in the ready method of the PageControl object, as shown in Figure 3. I’ve provided the entire PageControl definition in handlingErrors.js for context.

Figure 3 Definition of the handlingErrors PageControl

// For an introduction to the Page Control template, see the following documentation:
// http://go.microsoft.com/fwlink/?LinkId=232511
(function () {
  "use strict";
  WinJS.UI.Pages.define("/pages/handlingErrors/handlingErrors.html", {
    ready: function (element, options) {
      // ERROR: This code raises a custom error.
      throwError.addEventListener("click", function () {
        var newError = new WinJS.ErrorFromName("Custom error", 
          "I'm an error!");
        throw newError;
      });
    },
    unload: function () {
      // Respond to navigations away from this page.
      },
    updateLayout: function (element) {
      // Respond to changes in layout.
    }
  });
})();

Now I press F5 to run the sample, navigate to the handlingErrors page and click the “Throw an error!” button. (If you’re following along, you’ll see a dialog box from Visual Studio informing you an error has been raised. Click Continue to keep the sample running.) A message dialog then pops up with the error, as shown in Figure 4.

Figure 4 The Custom Error Displayed in a Message Dialog

Custom Errors

The Application.onerror event has some expectations about the errors it handles. The best way to create a custom error is to use the WinJS.ErrorFromName object (bit.ly/1gDESJC). The object created exposes a standard interface for error handlers to parse.

To create your own custom error without using the ErrorFromName object, you need to implement a toString method that returns the message of the error.

Otherwise, when your custom error is raised, both the Visual Studio debugger and the message dialog show “[Object object].” They each call the toString method for the object, but because no such method is defined in the immediate object, it goes through the chain of prototype inheritance for a definition of toString. When it reaches the Object primitive type that does have a toString method, it calls that method (which just displays information about the object).

Page-Level Error Handling

The PageControl object in WinJS provides another layer of error handling for an app. WinJS calls the IPageControlMembers.error method when an error occurs while loading the page. After the page has loaded, however, errors are picked up by the Application.onerror event handler, bypassing the page’s error method.

I’ll add an error method to the PageControl that represents the handlingErrors page. The error method writes to the JavaScript console in Visual Studio using WinJS.log. The logging functionality needs to be started up first, so I need to call WinJS.Utilities.startLog before I attempt to use that method. Also note that I check for the existence of the WinJS.log member before I actually call it.

The complete code for handlingErrors.js (/pages/handlingErrors/handlingErrors.js) is shown in Figure 5.

Figure 5 The Complete handlingErrors.js

(function () {
  "use strict";
  WinJS.UI.Pages.define("/pages/handlingErrors/handlingErrors.html", {
    ready: function (element, options) {
      // ERROR: This code raises a custom error.      
      throwError.addEventListener("click", function () {
        var newError = {
          message: "I'm an error!",
          toString: function () {
            return this.message;
          }
        };
        throw newError;
      })
    },
    error: function (err) {
      WinJS.Utilities.startLog({ type: "pageError", tags: "Page" });
      WinJS.log && WinJS.log(err.message, "Page", "pageError");
    },
    unload: function () {
      // TODO: Respond to navigations away from this page.
    },
    updateLayout: function (element) {
      // TODO: Respond to changes in layout.
    }
  });
})();

WinJS.log

The call to WinJS.Utilities.startLog shown in Figure 5 starts the WinJS.log helper function, which writes output to the JavaScript console by default. While this helps greatly during design time for debugging, it doesn’t allow you to capture error data after users have installed the app.

For apps that are ready to be published and deployed, you should consider creating your own implementation of WinJS.log that calls into an analytics engine. This allows you to collect telemetry data about your app’s performance so you can fix unforeseen bugs in future versions of your app. Just make sure customers are aware of the data collection and that you clearly list what data gets collected by the analytics engine in your app’s privacy statement.
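A minimal sketch of such an override; myAnalyticsEngine is a placeholder for whichever analytics SDK you adopt, not a real library:

    WinJS.log = function (message, tags, type) {
        // Forward only genuine errors to the analytics service.
        // myAnalyticsEngine is a placeholder, not a real SDK.
        if (type === "error" || type === "pageError") {
            myAnalyticsEngine.trackError({ message: message, tags: tags, type: type });
        }
    };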

Note that when you overwrite WinJS.log in this way, the WinJS.log function will catch all output that would otherwise go to the JavaScript console, including things like status updates from the Scheduler. This is why you need to pass a meaningful name and type value into the call to WinJS.Utilities.startLog so you can filter out any errors you don’t want.

Now I’ll try running the sample and clicking “Throw an error!” again. This results in the exact same behavior as before: Visual Studio picks up the error and then the Application.onerror event fires. The JavaScript console doesn’t show any messages related to the error because the error was raised after the page loaded. Thus, the error was picked up only by the Application.onerror event handler.

So why use the PageControl error handling? Well, it’s particularly helpful for catching and diagnosing errors in WinJS controls that are created declaratively in the HTML. For example, I’ll add the following HTML markup inside the <section> tags of handlingErrors.html (/pages/handlingErrors/handlingErrors.html), below the button:

<!-- ERROR: AppBarCommands must be button elements by default
  unless specified otherwise by the 'type' property. -->
<div data-win-control="WinJS.UI.AppBarCommand"></div>

Now I press F5 to run the sample and navigate to the handlingErrors page. Again, the message dialog appears until dismissed. However, the following message appears in the JavaScript console (you’ll need to switch back to the desktop to check this):

pageError: Page: Invalid argument: For a button, toggle, or flyout command, the element must be null or a button element

Note that the application-level error handling still fired even though I handled (and logged) the error in the PageControl. So how can I trap an error on a page without having it bubble up to the application?

The best way to trap a page-level error is to add error handling to the navigation code. I’ll demonstrate that next.

Navigation-Level Error Handling

When I ran the previous test where the app.onerror event handler handled the page-level error, the app seemed to stay on the homepage. Yet, for some reason, a Back button control appeared. When I clicked the Back button, it took me to a (disabled) handlingErrors.html page.

This is because the navigation code in navigator.js (/js/navigator.js), which is provided in the Navigation App project template, still attempts to navigate to the page even though the page failed to load. Furthermore, it navigates back to the homepage and adds the error-prone page to the navigation history. That’s why I see the Back button on the homepage after I’ve attempted to navigate to handlingErrors.html.

To cancel the error in navigator.js, I replace the PageControlNavigator._navigating function with the code in Figure 6. You see that the navigating function contains a call to WinJS.UI.Pages.render, which returns a Promise object. The render method attempts to create a new PageControl from the URI passed to it and insert it into a host element. Because the resulting PageControl contains an error, the returned promise errors out. To trap the error raised during navigation, I add an error handler to the onError parameter of the then method exposed by that Promise object. This effectively traps the error, preventing it from raising the Application.onerror event.

Figure 6 The PageControlNavigator._navigating Function in navigator.js

// Other PageControlNavigator code ...
// Responds to navigation by adding new pages to the DOM.
_navigating: function (args) {
  var newElement = this._createPageElement();
  this._element.appendChild(newElement);
  this._lastNavigationPromise.cancel();
  var that = this;
  this._lastNavigationPromise = WinJS.Promise.as().then(function () {
    return WinJS.UI.Pages.render(args.detail.location, newElement,
       args.detail.state);
  }).then(function parentElement(control) {
    var oldElement = that.pageElement;
    // Cleanup and remove previous element
    if (oldElement.winControl) {
      if (oldElement.winControl.unload) {
        oldElement.winControl.unload();
      }
      oldElement.winControl.dispose();
    }
    oldElement.parentNode.removeChild(oldElement);
    oldElement.innerText = "";
  },
  // Display any errors raised by a page control,
  // clear the backstack, and cancel the error.
  function (err) {
    var messageDialog =
      new Windows.UI.Popups.MessageDialog(
        err.message,
        "Sorry, can't navigate to that page.");
    messageDialog.showAsync();
    nav.history.backStack.pop();
    return true;
  });
  args.detail.setPromise(this._lastNavigationPromise);
},
// Other PageControlNavigator code ...

Promises in WinJS

Creating promises and chaining promises—and the best practices for doing so—have been covered in many other places, so I’ll skip that discussion in this article. If you need more information, check out the blog post by Kraig Brockschmidt at bit.ly/1cgMAnu or Appendix A in his free e-book, “Programming Windows Store Apps with HTML, CSS, and JavaScript, Second Edition” (bit.ly/1dZwW1k).

Note that it’s entirely proper to modify navigator.js. Although it’s provided by the Visual Studio project template, it’s part of your app’s code and can be modified however you need.

In the _navigating function, I’ve added an error handler to the final promise.then call. The error handler shows a message dialog—as with the application-level error handling—and then cancels the error by returning true. It also removes the page from the navigation history.

When I run the sample again and navigate to handleErrors.html, I see the message dialog that informs me the navigation attempt has failed. The message dialog from the application-level error handling doesn’t appear.

Tracking Down Errors in Asynchronous Chains

When building apps in JavaScript, I frequently need to follow one asynchronous task with another, which I address by creating promise chains. A chain keeps moving through its tasks even if one of the promises in the chain returns an error; the error simply propagates past the remaining completion handlers. A best practice is to always end a chain of promises with a call to the done method. The done function throws any error that reaches it unhandled, that is, any error that an error handler supplied to a previous then call could have caught. This means I don’t need to define error functions for each promise in a chain.
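As a quick illustration of that rule, consider the following sketch (mine, not from the sample; wrapError is the same helper used in Figure 8). An error raised in the middle of a chain skips the remaining completion handlers and surfaces in the single error handler passed to done:

WinJS.Promise.as()
  .then(function () {
    return WinJS.Promise.wrapError(new Error("midway failure"));
  })
  .then(function () {
    // Skipped: the chain is already in the error state.
    return "never returned";
  })
  .done(
    function (result) {
      // Not called.
    },
    function (err) {
      console.log(err.message); // "midway failure"
    });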

Even so, tracking errors down can be difficult in very long chains once they’re trapped in the call to promise.done. Chained promises can include multiple tasks, and any one of them could fail. I could set a breakpoint in every task to see where the error pops up, but that would be terribly time-consuming.

Here’s where Visual Studio 2013 comes to the rescue. The Tasks window (introduced in Visual Studio 2010) has been upgraded to handle asynchronous JavaScript debugging as well. In the Tasks window you can view all active and completed tasks at any given point in your app code.

For this next example, I’ll add a new page to the solution to demonstrate this awesome tool. In the solution, I create a new folder called chainedAsync in the pages folder and then add a new Page Control named chainedAsync.html (which creates /pages/chainedAsync/chainedAsync.html and the associated .js and .css files).

In chainedAsync.html, I insert the following markup within the <section> tags:

<!-- ERROR: Clicking this button starts a chain reaction with an error. -->
<p><button id="startChain">Start the error chain</button></p>
<p id="output"></p>

In chainedAsync.js, I add the event handler shown in Figure 7 for the click event of the startChain button to the ready method for the page.

Figure 7 The Contents of the PageControl.ready Function in chainedAsync.js

document.getElementById("startChain").addEventListener("click", function () {
  goodPromise()
    .then(function () {
      return goodPromise();
    })
    .then(function () {
      return badPromise();
    })
    .then(function () {
      return goodPromise();
    })
    .done(function () {
      // This *shouldn't* get called.
    }, function (err) {
      document.getElementById("output").innerText = err.toString();
    });
});

Last, I define the functions goodPromise and badPromise, shown in Figure 8, within chainedAsync.js so they’re available inside the PageControl’s methods.

Figure 8 The Definitions of the goodPromise and badPromise Functions in chainedAsync.js

function goodPromise() {
  return new WinJS.Promise(function (comp, err, prog) {
    try {
      comp();
    } catch (ex) {
      err(ex);
    }
  });
}
// ERROR: This returns an errored-out promise.
function badPromise() {
  return WinJS.Promise.wrapError("I broke my promise :(");
}

I run the sample again, navigate to the “Chained asynchronous” page, and then click “Start the error chain.” After a short wait, the message “I broke my promise :(” appears below the button.

Now I need to track down where that error occurred and figure out how to fix it. (Obviously, in a contrived situation like this, I know exactly where the error occurred. For learning purposes, I’ll forget that badPromise injected the error into my chained promises.)

To figure out where the chained promises go awry, I’m going to place a breakpoint on the error handler defined in the call to done in the click handler for the startChain button, as shown in Figure 9.

Figure 9 The Position of the Breakpoint in chainedAsync.js

I run the same test again, and when I return to Visual Studio, the program execution has stopped on the breakpoint. Next, I open the Tasks window (Debug | Windows | Tasks) to see what tasks are currently active. The results are shown in Figure 10.

Figure 10 The Tasks Window in Visual Studio 2013 Showing the Error

At first, nothing in this window really stands out as having caused the error. The window lists five tasks, all of which are marked as active. As I take a closer look, however, I see that one of the active tasks is the Scheduler queuing up promise errors—and that looks promising (please excuse the bad pun).

(If you’re wondering about the Scheduler, I encourage you to read the next article in this series, where I’ll discuss the new Scheduler API in WinJS.)

When I hover my mouse over that row (ID 120 in Figure 10) in the Tasks window, I get a targeted view of the call stack for that task. I see several error handlers and, lo and behold, badPromise is near the beginning of that call stack. When I double-click that row, Visual Studio takes me right to the line of code in badPromise that raised the error. In a real-world scenario, I’d now diagnose why badPromise was raising an error.

WinJS provides several levels of error handling in an app, above and beyond the reliable try-catch-finally block. A well-performing app should use an appropriate degree of error handling to provide a smooth experience for users. In this article, I demonstrated how to incorporate app-level, page-level and navigation-level error handling into an app. I also demonstrated how to use some of the new tools in Visual Studio 2013 to track down errors in chained promises.

How To : Use Azure BizTalk Services to Integrate with an On-Premises SAP Server


Microsoft Azure BizTalk Services provides a rich set of integration capabilities, enabling organizations to create hybrid solutions in which their customer- or partner-facing applications are hosted on Azure, while the data related to customers or partners is stored on-premises in LOB applications.

To demonstrate how to integrate applications with an on-premises LOB application using BizTalk Services, let us consider a scenario involving two business partners, Fabrikam and Contoso.

Business Scenario

Contoso sends a purchase order (PO) message to Fabrikam in the X12 Electronic Data Interchange (EDI) format, using the PO (X12 850) schema. Fabrikam, which uses an SAP Server to manage partner data, accepts POs from its partners as ORDERS05 IDOCs.

To enable Contoso to send a PO directly to Fabrikam’s on-premises SAP Server, Fabrikam decides to use Azure’s integration offering, BizTalk Services, to set up a hybrid integration scenario in which the integration layer is hosted on Azure and the SAP Server sits within the organization’s firewall. Fabrikam uses BizTalk Services in the following ways to enable this hybrid integration scenario:

  1. Fabrikam uses the Microsoft Azure BizTalk Services SDK to create a BizTalk Service project. The project includes an XML One-Way Bridge to send messages to a relay endpoint, which in turn sends the message to the on-premises SAP system.
  2. Fabrikam uses the BizTalk Adapter Service component available with BizTalk Services to expose the Send operation on the ORDERS05 IDOC as an operation on a Service Bus relay endpoint. The XML One-Way Bridge sends messages to this relay endpoint. Fabrikam also creates the schema for the Send operation using BizTalk Adapter Service and includes the schema as part of the BizTalk Service project.
    Note:
    A Send operation on an IDOC is an operation that is exposed by the BizTalk Adapter Pack on any IDOC to send the IDOC to the SAP Server. BizTalk Adapter Service uses the BizTalk Adapter Pack to connect to an SAP Server.
  3. Fabrikam uses the Transform component available with BizTalk Services to create a map to transform the PO message in X12 format into the schema required by the SAP Server to invoke the Send operation on the ORDERS05 IDOC.
  4. Fabrikam uses the Microsoft Azure BizTalk Services Portal available with BizTalk Services to create and deploy an EDI agreement under the BizTalk Services subscription that processes the X12 850 PO message. As part of the message processing, the agreement also does the following:
    1. Receives an X12 850 PO message over FTP.
    2. Transforms the X12 PO message into the schema required by the SAP Server using the transform created earlier.
    3. Routes the transformed message to the XML One-Way Bridge that eventually routes the message to a relay endpoint created for sending a PO message to an SAP Server. Fabrikam earlier exposed (as explained in bullet 2 above) the Send operation on the ORDERS05 IDOC as a relay endpoint, to enable partners to send PO messages using BizTalk Adapter Service.

Once this is set up, Contoso drops an X12 850 PO message to the FTP location. This message is consumed by the EDI receive pipeline, which processes the message, transforms it to an ORDERS05 IDOC, and routes it to the intermediary XML bridge. The bridge then routes the message to the relay endpoint on Service Bus, which is then sent to the on-premises SAP Server. The following illustration represents the same scenario.

SAP Integration scenario

How to Use This Article

 

This tutorial is written around a sample, SAPIntegration, which is available as part of the download (SAPIntegration.zip) from the MSDN Code Gallery. You could either use the SAPIntegration sample and go through this tutorial to understand how the sample was built or just use this tutorial to create your own application.

This tutorial takes the second approach, so that you understand how the application was built. Also, to be consistent with the sample, the names of the artifacts (for example, schemas and transforms) used in this tutorial are the same as those in the sample.

The sample available from the MSDN code gallery contains only half the solution, which can be developed at design-time on your computer. The sample cannot include the configuration that you must do on the BizTalk Services Portal on Azure.

For that, you must follow the steps in this tutorial to set up your EDI bridge. Even though Microsoft recommends that you follow the tutorial to best understand the concepts and procedures, if you really wish to use the sample, this is what you should do:

  • Download the SAPIntegration.zip package, extract the SAPIntegration sample and make relevant changes like providing your service namespace, issuer name, issuer key, SAP Server details, etc. After changing the sample, deploy the application to get the endpoint URL at which the XML One-Way Bridge is deployed.
  • Use the BizTalk Services Portal to configure the Receive settings as described at Step 5: Create and Deploy the EDI Receive Pipeline and follow the procedures to route messages from the EDI Receive bridge to the XML One-Way Bridge you already deployed.
  • Drop a test message at the FTP location configured as part of the agreement and verify that the application works as expected.
    • If the message is successfully processed, it will be routed to the SAP Server and you can verify the ORDERS IDOC using the SAP GUI.
    • If the EDI agreement fails to process the message, the failure/error messages are routed to a relay endpoint on Service Bus. To receive such messages, you must set up a relay receiver service that receives any message that comes to that specific relay endpoint. More details on why you need this service and how to use it are available at Step 6: Test the Solution.

Steps in the Solution:

 

  • Step 1: Set up Your Computer
  • Step 2: Expose a Relay Endpoint to Invoke Operations on ORDERS05 IDOC
  • Step 3: Transform the X12 850 PO Message to the ORDERS05 Message
  • Step 4: Create and Deploy the XML Bridge
  • Step 5: Create and Deploy the EDI Receive Pipeline
  • Step 6: Test the Solution

Step 1: Set up Your Computer


This topic provides instructions and pointers for setting up the computer on which you will perform the steps to set up the hybrid integration scenario described in Tutorial: Using Azure BizTalk Services to Integrate with an On-Premises SAP Server. You must do the following to set up your computer:

  • Install Microsoft Azure BizTalk Services SDK. You can download the installer from http://go.microsoft.com/fwlink/?LinkId=235057. You use this SDK to configure and deploy the XML One-Way Bridge that sits between the EDI agreement and the relay endpoint.
  • Install BizTalk Adapter Service. You use this to expose the Send operation on an IDOC as a relay endpoint on Service Bus. You can download the installer from http://go.microsoft.com/fwlink/?LinkId=235057. Refer to the BizTalk Services installation guide at http://go.microsoft.com/fwlink/?LinkId=237092 to install the software prerequisites for BizTalk Services SDK and BizTalk Adapter Service.
  • Install the WCF LOB Adapter SDK. This is required for installing the Adapter Pack on the computer.
  • Install the Adapter Pack. This contains the SAP adapter that is required to establish connectivity to an SAP Server and for exposing SAP artifacts as operations.
  • Install the SAP client libraries. The SAP adapter needs these libraries to connect to an SAP Server. For information on where to install the SAP client libraries from, refer to the Adapter Pack installation guide (BizTalkAdapterPack_InstallationGuide.htm) at http://go.microsoft.com/fwlink/?LinkId=240161.
  • Download and extract the EDI message schemas (MicrosoftEdiXSDTemplates.zip) available at http://go.microsoft.com/fwlink/?LinkId=235057. This contains the X12 850 Purchase Order message schema that is required for the business scenario we use for this article.

After you have finished installing and downloading these components, you are ready to start setting up the business scenario.

Step 2: Expose a Relay Endpoint to Invoke Operations on ORDERS05 IDOC


There are two main steps required to expose an SAP artifact as an operation that can be invoked by sending a message over Service Bus: create an LOB Target and an LOB Relay.

  • An LOB Target defines how an Azure application communicates with the Line-of-Business (LOB) system. The LOB Target controls the LOB system connection URI, the operation to perform, and the connection credentials.
  • An LOB Relay is a WCF service that runs within an organization’s firewall and listens on a relay endpoint in the Service Bus. As the name suggests, the LOB Relay acts as a relay between the Service Bus relay endpoint and the LOB system. It receives the message at the Service Bus relay endpoint and passes it on to the relevant LOB system using the LOB Target configuration.

For more information, see BizTalk Adapter Service Architecture. In this topic, we will create an LOB Target and an LOB Relay to expose the Send operation on the ORDERS05 IDOC.

To create an LOB Target and LOB Relay

  1. Open Visual Studio (as an administrator), create a new BizTalk Service project, and name it SAPIntegration.
  2. You first start with adding a BizTalk Adapter Service server. This is the server where you installed the Runtime component of BizTalk Adapter Service. To add a BizTalk Adapter Service server, from the Server Explorer in Visual Studio, right-click BizTalk Adapter Services, and select Add BizTalk Adapter Service. In the Add BizTalk Adapter Service dialog box, enter the URL of the WCF service that monitors that Service Bus relay service, and then click OK.

    Add Service Bus Connect Server

    Because you have all the components of BizTalk Adapter Service installed on the same computer, the URL for that service will be http://localhost:8080/BAService/ManagementService.svc/.

    Note:
    If you had installed the BizTalk Adapter Service Runtime component on a separate computer, you would replace ‘localhost’ in the above URL with the name of that computer.
  3. In this tutorial we are creating an application to integrate with SAP, so we must add an SAP target. Expand the newly added server, expand LOB Types, right-click SAP, and select Add SAP Target.

    Add an SAP Target

    The Add a Target wizard starts. Perform the following steps to create an LOB Target.

    1. Read the information on the Before You Begin page, and then click Next.
    2. On the Connection Parameters page, specify the details for the SAP Server to connect to and the credentials to use for the connection. Click Next.
    3. On the Operations page, expand the ORDERS05 IDOC category (under \IDOC\ORDERS\). There are several versions of the IDOC available. For this tutorial, we’ll select ORDERS05.V3(700). Expand this IDOC, select Send, and then click the right arrow to add it to the Selected Operations box.

      Add Send operation for IDOCClick Next.

    4. On the Runtime Security page, specify the security mechanism to be used by the LOB Server to authenticate the target resource when a message arrives from a client. For this tutorial, select Fixed Username and specify the credentials to connect to the SAP server.
    5. On the Deployment page, you create an LOB Relay and an LOB Target to provide connectivity to your on-premises LOB applications from the cloud.

      Select the Create new option to create a new relay and provide the following values:

      • Namespace   Specify the Service Bus namespace on which the LOB relay endpoint is created.
      • Issuer name   Specify the issuer name for the Service Bus namespace.
      • Issuer secret   Specify the issuer secret for the Service Bus namespace.
      • Relay path   Specify a name for the relay. For this tutorial, enter sapintegration01.
      • Target sub-path   Enter a sub-path to make this target unique. For this tutorial, enter orders.

      The Target runtime URL read-only property displays the URL where the relay is deployed on Service Bus. This is the path where you could send a message to be inserted into the on-premises SAP Server. In our scenario, this is where the bridge sends the message.

      Click Next.

    6. On the Summary page, review the values you specified in the previous steps, and then click Create.
    7. When the wizard completes, click Finish.

      In Visual Studio Server Explorer, you now have an entry under the SAP node. This represents the relay endpoint created in Service Bus to relay PO messages coming from the cloud to the on-premises SAP system.

To add schemas

  1. After adding the relay endpoint to an SAP system, you must add the schemas needed to send ORDERS05 PO messages to the SAP server. To add the schemas, right-click the relay endpoint and select Add schemas to SAPIntegration. In the dialog box, do the following:
    • Enter a filename prefix that will be included in the name of each schema file that is generated. For this tutorial, specify this as SAPIntegration_.
    • Enter a folder name that will be added to your solution under which all the schemas will be added. For this tutorial, specify the folder name as LOB Schemas.
    • Enter the credentials to connect to an SAP system.

    Add schemas to the project

    Click OK. The schemas are added to the project under an LOB Schemas folder.

To use the LOB Target

  1. Right-click anywhere on the BizTalk Service project design surface, select Properties and update the BizTalk Service URL property to include your BizTalk Services name. This is the name that you provided in Azure Management Portal while provisioning the BizTalk Services.
  2. Set the security property for the relay endpoint.
    1. Right-click the LOB Target in Server Explorer and select Properties.
    2. In the Properties grid, click the ellipsis (…) against the Runtime Security property.
    3. In the Edit Security dialog box, select Fixed Username and specify username and password to connect to the SAP Server.
    4. Click OK.
  3. Drag and drop the LOB Target onto the design surface. Note the Entity Name property of the LOB Target. The default value is Relay-Path_target-sub-path. If using the examples above, it will be sapintegration01_orders.
  4. Open the .config file for the LOB Target, which typically follows the naming convention YourRelayPath_target-sub-path.config. Specify the Service Bus issuer name and issuer secret, as shown below:
      <sharedSecret issuerName="owner" issuerSecret="issuer_secret" />

    Save changes to the config file.

 

Step 3: Transform the X12 850 PO Message to the ORDERS05 Message


Both the X12 850 schemas and ORDERS05 schemas are pretty complex and require functional expertise in the respective domains to understand and create maps between the two schemas.

While you already generated the schema for the ORDERS05 IDOC, you can get the schema for the X12 PO message (X12_00401_850.xsd) from the MicrosoftEdiXSDTemplates.zip package that you downloaded and extracted in Step 1. You must also add the X12_00401_850.xsd schema to the SAPIntegration project.

Creating a transform between the X12 850 PO and the ORDERS05 PO requires identifying which field in the X12 schema maps to which field in the ORDERS05 schema. In this tutorial, we do not go into such details and instead use an existing transform (AzureTransformations.trfm) between these two schemas. This transform is available as part of the SAPIntegration project that you can download from the MSDN Code Gallery.

To include the transform in the BizTalk Service project, right-click the project name, click Add, click Existing Item, and then navigate to the location where you downloaded the SAPIntegration sample from the MSDN Code Gallery. Select AzureTransformations.trfm and then click Add.

Step 4: Create and Deploy the XML Bridge


In this topic, you create an XML One-Way Bridge that will act as a connector between the EDI Receive bridge and the relay endpoint for the ORDERS05 IDOC in SAP. After configuring the bridge, you connect it to the SAP relay endpoint, and then deploy the solution.

To configure the XML Bridge

  1. In the SAPIntegration project, from the Solution Explorer, double-click the MessageFlowItinerary.bcs file to open the bridge configuration surface.
  2. Right-click anywhere on the BizTalk Service project design surface, select Properties, and update the BizTalk Service URL property to include your BizTalk Services name. This is the name that you provided in Azure Management Portal while provisioning the BizTalk Services.
  3. From the Toolbox, drag and drop the XML One-Way Bridge component to the bridge design surface.
  4. Right-click the XML One-Way Bridge, select Properties, and change the value for Entity Name and Relative Address properties to B2BConnector. As a result, the complete endpoint URL where the bridge is deployed, which is shown in the Runtime Address property, will resemble https://<mybiztalkservicename>.biztalk.windows.net/default/B2BConnector. This is where the EDI Receive bridge sends the ORDERS05 PO message.
  5. Double-click the XML One-Way Bridge to open the Bridge Configuration design surface. Because this bridge only routes the message from the EDI Receive bridge to the relay endpoint, not much configuration is required for each stage in the bridge other than specifying the message types of the messages that this bridge routes. To specify the message type, on the XML One-Way Bridge design surface, within the Message Types box, click the add icon to open the Message Type Picker dialog box.
  6. In the Message Type Picker dialog box, from the Available message types box, select the schema for the request message, click the right arrow icon, and then click OK. For this tutorial, select the Send schema (http://Microsoft.LobServices.Sap/2007/03/Idoc/3/ORDERS05//700/Send). The selected schema should now be listed under the Request Message Type box.
  7. Save the bridge configuration.

To connect the bridge to the relay endpoint

  1. In the SAPIntegration project, from the Toolbox, select the Connection component, and connect the XML One-Way Bridge component with the SAP relay endpoint you already added in Step 2: Expose a Relay Endpoint to Invoke Operations on ORDERS05 IDOC.
  2. Set the filter condition on the connection. The routing condition for this scenario is to route all messages to the LOB Target. To do so, select the connecting line, and from the Properties grid, click the ellipsis (…) against the Filter Condition property, and then select Match All. This ensures that all messages that come to the bridge are routed to the relay endpoint.
  3. Set the Route Action property on the connection. Before you set the route action, you must understand why it is required. The message sent from the EDI receive bridge to the relay endpoint must have the Action SOAP header set on it. This header defines what operation must be performed on the SAP system. The message that comes from the EDI receive pipeline does not have this header set. Hence, in this intermediary XML bridge, you set the route action on the message before it is sent to the relay endpoint. As part of the route action, you add the required header to the message. Perform the following steps to set the route action.
    1. Find out the value that will be set for the Action SOAP header message. To do so, right-click the SAP relay endpoint from the Server explorer, and from the Properties grid, expand Operations, and copy the value. For this tutorial, the value is http://Microsoft.LobServices.Sap/2007/03/Idoc/3/ORDERS05//700/Send.

      Value for SOAP action

    2. Go back to the bridge configuration surface, select the connection between the bridge and the SAP relay, and from the Properties grid, click the ellipsis (…) against the Route Action property. In the Route Actions dialog box, click Add to open the Add Route Action dialog box. In the Add Route Action dialog box, do the following:
      • Under Property (Read From) section, select Expression and specify the value that you copied earlier.
        Important:
        Make sure you specify the value for Expression within single quotes.
      • Under Destination (Write-To) section, set the Type to SOAP and the Identifier to Action.

        Set Route Action

      • Click OK in the Add Route Action dialog box to add the route action. Click OK in the Route Actions dialog box, and then click Save to save changes to the Enterprise Application Integration project.
  4. Save the project. The final bridge configuration resembles the following:

    Completed bridge configuration

To deploy the solution

  1. In Visual Studio, right-click the SAPIntegration solution, and then click Build Solution.
  2. Once the build succeeds, right-click the SAPIntegration solution, and then click Deploy Solution.
  3. In the deployment window, the Deployment Endpoint is a read-only property and the value is derived from the BizTalk Service URL/Namespace set in the message flow surface. However, you must provide the ACS Namespace for BizTalk Services, Issuer Name, and Shared Secret.
  4. Click Deploy. The Visual Studio Output pane displays the deployment progress and result. The URL where the bridge is deployed is also displayed in the Output pane. For this tutorial, the bridge is deployed at https://<mybiztalkservicename>.biztalk.windows.net/default/B2BConnector.

 

 

A Look At : Application Management and Governance in SharePoint 2013

Summary: Learn how to govern applications for SharePoint 2013 by creating a customization policy and understanding the app model, branding, and life-cycle management.


How will you manage the applications that are developed for your environment? What customizations do you allow in your applications, and what are your processes for managing those applications?

 

For effective and manageable applications, your organization should consider the following:

  • Customization policy   SharePoint 2013 includes customizable features and capabilities that span multiple product areas, such as business intelligence, forms, workflow, and content management. Customization can introduce risks to the stability, maintenance, and security of the environment. To support customization while controlling its scope, you should develop a customization policy.
  • Life-cycle management   Follow best practices to manage applications and keep your environments in sync.
  • Branding   If you are designing an information architecture and a set of sites to use across an organization, consider including branding in your governance plan. A formal set of branding policies helps ensure that sites consistently use enterprise imagery, fonts, themes, and other design elements.
  • Solutions or apps for SharePoint?   Decide whether a solution or an app for SharePoint would be the best choice for specific customizations.

Get developer guidance about customizing and branding SharePoint 2013 on MSDN: Build sites for SharePoint 2013.

This article is part of a set of articles about governance.

The What is governance? poster gives a summary of this content. Download the PDF version or Visio version, or Zoom into the model in full detail with Zoom.it from Microsoft.

Determine the types of customizations you want to allow and how to manage them. Your customization policy should include:

  • Service-level descriptions   What are the parameters for supporting and managing customizations in your environments? See Service-level agreements.
  • Guidelines for updating customizations   How do you manage changes to customizations, and how do you roll out those changes to your environments? Consider ways to manage source code, such as a source control system and standards for documenting the code.
  • Processes for analyzing   How do you understand whether a particular customization is working well in your environment, or how do you decide which ones to create, change, or retire?
  • Approved tools for customization   Consider development standards, such as coding best practices and the tools that you will use across your organization. For example, you should decide whether to allow the use of SharePoint Designer 2013 and Design Manager, and specify which site elements can be customized and by whom.
  • Process for piloting and testing customizations   How do you test and deploy customizations? How many people should be in a pilot testing group? What are your standards for testing and validating customizations?
  • Who is responsible for ongoing support   Who will be responsible for supporting customizations in your environments—individual teams or a central group?
  • Guidelines for packaging and deploying customizations   Do you have individual packages for each, or do you include several in a feature or solution? Which customizations should be apps for SharePoint instead of solutions? How do you ensure that customizations in one environment do not affect the rest of your SharePoint implementation?
  • Specific policies regarding each potential type of customization   What types of customizations do you allow?

    For more information about kinds of customizations and their potential risks, see the Customizations table later in this article. For more information about processes for managing customizations, see the white paper SharePoint Products and Technologies customization policy. Most of this content still applies to SharePoint 2013.

  • Policies around using the App Catalog and SharePoint Store   Which apps for SharePoint do you want to make available to your organization? Can users purchase apps directly? See Solutions or apps for SharePoint? later in this article for more information.

The highly customizable design of SharePoint products enables you to provide the look, behavior, or functionality that meets your business needs. Customizations can introduce risk to your environment, whether that risk is to the environment’s performance, availability, or supportability. Conversely, a “no customizations” policy severely restricts your organization’s ability to take advantage of the SharePoint platform.

Not all customizations are the same. You must decide carefully which kinds of customizations to allow in your environment, and ensure that those customizations support the performance, availability, and supportability you want for it. Your governance policy should balance a level of acceptable risk against the business needs of your organization.

What is considered a customization? All of the following are considered kinds of customizations in SharePoint products:

  • Configuration   Using the SharePoint user interface to configure SharePoint products.
  • Branding   Changing logos, styles, colors, master pages and page layouts, and so on to create a custom look for your SharePoint sites. See more about branding.
  • Custom code   Using developer tools to add or change functionality in SharePoint products or to interact with other applications. Risk can vary depending on kind of functionality and level of trust (full trust solutions should be rarely used; consider apps for SharePoint first).
    Tip:
    Sandboxed solutions are deprecated in this release, so they are not the best option for custom code in the long term.

Some customizations have very little risk or impact on your environment. Others have the potential for much higher risk and impact. The following list provides examples of different kinds of customizations, the risk level associated with each kind, and the potential issues that you might face if you allow that kind of customization.

Customizations

  • Unsupported/High   Unsupported customizations, such as direct changes to the database schema or modifying files on the file system. Impact: will not be supported through Microsoft Customer Support, and you will be unable to upgrade. Do not use.
  • Moderate to high   Creating applications that interact with or redirect actions in key pipelines, such as events, claims, and so on. Impact: potential for service outage or performance issues; might require rework at upgrade.
  • Moderate to low   Using a custom Web Part outside a sandbox environment, creating custom actions such as adding a menu item, or creating a custom site provisioning process. Impact: short- or long-term performance issues or page errors; might require rework at upgrade.
  • Low   Using solutions in a sandbox environment. Impact: short-term performance issues; you can avoid some performance issues by using resource throttling and quotas.
  • Very low to no risk   Using apps for SharePoint, or using functionality within the product or configurations, such as associating a workflow with a list or using an instance of a built-in Web Part. Impact: minor configuration or page errors that would have to be addressed; apps can be uninstalled or updated.

Note:
For more information about customizations and upgrade, see Considerations for specific customizations.

 

 

Also, when you think through the customizations to allow in your environment, consider carefully whether a particular customization is necessary. If it recreates functionality that is already available in the product (such as creating a Web Part that does the same thing as the Content Editor Web Part or the Content by Query Web Part), then that might be unnecessary work.

Consider first whether the standard functionality can do what you want, or check the SharePoint Store to see if there is an app for SharePoint available that does what you need.

Follow these best practices to manage applications based on SharePoint 2013 throughout their life cycle:

  • Use separate development, preproduction, and production environments, and keep these environments as synchronized as possible so that you can accurately test your customizations.
  • Test all customizations before the first release, and test again after any updates, before you release them to your production environment.
  • Use source code control and solution and feature versioning to track changes to code.

Development, test, and production environments

Consistent branding with a corporate style guide makes for more cohesive-looking sites and easier development. Store approved themes in the theme gallery for consistency so that users will know when they visit the site that they are in the right place.

SharePoint 2013 includes a new feature to use for branding, Design Manager. By using Design Manager, you can create a visual design for your website with whatever web design tool or HTML editor you prefer and then upload that design into SharePoint. Design Manager is the central hub and interface where you manage all aspects of a custom design.

Creating the visual design of a site often fits into a larger process, in which multiple people or organizations are involved. For a roadmap of the tasks from a larger perspective, see Design and branding in SharePoint 2013.

SharePoint 2013 has a new development model based on apps for SharePoint. Apps for SharePoint are self-contained pieces of functionality that extend the capabilities of a SharePoint website. An app may include SharePoint features such as lists, workflows, and site pages, but it can also use a remote web application and remote data in SharePoint. An app has few or no dependencies on any other software on the device or platform where it is installed, other than what is built into the platform. Apps have no custom code that runs on the SharePoint servers.

The guidance for whether to use apps for SharePoint or SharePoint solutions is to:

  • Design apps for end users

    Apps for SharePoint:

    • Are easy for users (tenant administrators and site owners) to discover and install.
    • Use safe SharePoint extensions.
    • Provide the flexibility to develop future upgrades.
    • Can integrate with cloud-based resources.
    • Are available for both SharePoint Online and on-premises SharePoint sites.
  • Use farm solutions for administrators

    SharePoint solutions:

    • Can access the server-side object-model APIs that are needed to extend SharePoint management, configuration, and security.
    • Can extend Central Administration, Windows PowerShell cmdlets, timer jobs, custom backups, and so on.
    • Are installed by administrators.
    • Can have farm, web application, or site-collection scope.

Go to MSDN to get more information about the new development model, Apps for SharePoint compared with SharePoint solutions, and Deciding between apps for SharePoint and SharePoint solutions.

Set a policy for using apps for SharePoint in your organization. Can users purchase and download apps? How do you make your organization’s apps available? How do you tell if they’re being used?

  • SharePoint Store   Determine whether users can purchase or download apps from the SharePoint Store.
  • App Catalog   Make specific apps for SharePoint available to your users by adding them to the App Catalog.
  • App requests   Configure app requests to control which apps are purchased and how many licenses are available.
  • Monitor apps   Monitor specific apps in SharePoint Server 2013 to check for errors and to track usage.

In the market

BTSG NoS Addin beta version is now available on Visual Studio Gallery

Sandro Pereira BizTalk Blog

Exciting news for the BizTalk community, especially the 200 people who attended BizTalk Summit 2014 London and already know the potential of the BizTalk NoS add-in!

Nino Crudele has just publicly announced on his blog, in BizTalk NoS Add-in Beta version has been officially released through Visual Studio Gallery, that the BizTalk NoS add-in is now available for you to download and install from the Visual Studio Gallery: BizTalk NoS Addin!

What is BTSG NoS Addin purpose?

The purpose of the BTSG NoS add-in is to help all BizTalk developers, and BizTalk administrators too, in a lot of different situations by improving the developer experience and reducing development time on new or existing BizTalk projects.

It will provide several functionalities like quick search inside artifacts, fast register/unregister in the GAC, finding critical, internal or external dependencies… and many more functionalities like JackHammering, which will compare your…


SharePoint Samurai