
How To : Understand and Edit the Onet.xml File


When Microsoft SharePoint Foundation is installed, several Onet.xml files are installed—one in %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\GLOBAL\XML that applies globally to the deployment, and several in different folders within %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\SiteTemplates. Each file in the latter group corresponds to a site definition that is included with SharePoint Foundation. They include, for example, Blog sites, the Central Administration site, Meeting Workspace sites, and team SharePoint sites. Only the last two of these families contain more than one site definition configuration.

The global Onet.xml file defines list templates for hidden lists, list base types, a default definition configuration, and modules that apply globally to the deployment. Each Onet.xml file in a subdirectory of the %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\SiteTemplates directory can define navigational areas, list templates, document templates, configurations, modules, components, and server email footers that are used in the site definition to which it corresponds.

Note:
An Onet.xml is also part of a web template. Some Collaborative Application Markup Language (CAML) elements that are possible in the Onet.xml files of site definitions cannot be in the Onet.xml files that are part of web templates—for example, the DocumentTemplates element.
Depending on where an Onet.xml file is located and whether it is part of a site definition or a web template, the markup in the file does some or all of the following:

  • Specifies the web-scoped and site collection-scoped Features that are built-in to websites that are created from the site definition or web template.
  • Specifies the list types, pages, files, and Web Parts that are built-in to websites that are created from the site definition or web template.
  • Defines the top and side navigation areas that appear on the home page and in list views for a site definition.
  • Specifies the list definitions that are used in each site definition and whether they are available for creating lists in the user interface (UI).
  • Specifies document templates that are available in the site definition for creating document library lists in the UI, and specifies the files that are used in the document templates.
  • Defines the base list types from which default SharePoint Foundation lists are derived. (Only the global Onet.xml file serves this function. You cannot define new base list types.)
  • Specifies SharePoint Foundation components.
  • Defines the footer section used in server email.

 

You can perform the following kinds of tasks in a custom Onet.xml file that is used for either a custom site definition or a custom web template:

  • Specify an alternative cascading style sheet (CSS) file, JavaScript file, or ASPX header file for a site definition.
  • Modify navigation areas for the home page and list pages.
  • Add a new list definition as an option in the UI.
  • Define one configuration for the site definition or web template, specifying the lists, modules, files, and Web Parts that are included when the configuration is instantiated.
  • Specify Features to be included automatically with websites that are created from the site definition or web template.

You can perform the following kinds of tasks in a custom Onet.xml file that is used for a custom site definition, but not in one that is used for a custom web template:

  1. Add a document template for creating document libraries.
  2. Define more than one configuration for a site definition, specifying the lists, modules, files, and Web Parts that are included when the configuration is instantiated.
  3. Define a custom footer for email messages that are sent from websites that are based on the site definition.
  4. Define custom components, such as a file dialog box post processor, for websites that are based on the site definition.
Caution:
You cannot create new base list types in either a site definition or a web template. The base types that are defined in the global Onet.xml file are the only base types that are supported.
Caution:
We do not support making changes to an originally installed Onet.xml file. Changing this file can break existing sites. Also, when you install updates or service packs for SharePoint Foundation, or when you upgrade an installation to the next product version, there may be a new version of the Microsoft-supplied file, and installation cannot merge your changes with the new version. If you want a site type that is similar to a built-in site type, and you cannot use a web template, create a new site definition with its own Onet.xml file; do not modify the original file. For more information, see How to: Create a Custom Site Definition and Configuration. For more information about when you cannot use a web template, see Deciding Between Custom Web Templates and Custom Site Definitions.
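If you need a starting point for such a new site definition, one common approach is to copy the folder of the built-in definition that is closest to what you want and register the copy under a new name. The following Windows PowerShell lines are only a rough sketch of that idea; the name STSCUSTOM, the English (1033) language folder, and the paths are assumptions, and you still have to edit the copied Onet.xml and the new WebTemp file as described in How to: Create a Custom Site Definition and Configuration.

# Rough sketch only: clone the STS site definition as a starting point (names and paths are illustrative).
$template = "${env:ProgramFiles}\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE"

# Copy the site definition folder, which contains XML\Onet.xml.
Copy-Item -Path "$template\SiteTemplates\STS" -Destination "$template\SiteTemplates\STSCUSTOM" -Recurse

# Copy a WebTemp file to register the new definition for the English (1033) language, then edit it by hand.
Copy-Item -Path "$template\1033\XML\WEBTEMP.XML" -Destination "$template\1033\XML\WebTempSTSCUSTOM.xml"

# Restart IIS so SharePoint picks up the new site definition.
iisreset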
The following sections define the various elements of the Onet.xml file.

Project Element

The top-level Project element specifies a default name for sites that are created through any of the site configurations in the site definition. It also specifies the directory that contains subfolders in which the files for each list definition reside.

Note:
Unless indicated otherwise, excerpts used in the following examples are taken from the Onet.xml file for the STS site definition.
<Project 
  Title="$Resources:core,onet_TeamWebSite;" 
  Revision="2" 
  ListDir="$Resources:core,lists_Folder;" 
  xmlns:ows="Microsoft SharePoint" 
  UIVersion="4">

Note:
In all the examples in this topic, the strings that begin with “$Resources” are constants that are defined in a .resx file. For example, “$Resources:onet_TeamWebSite” is defined in the core.resx file as “Team Site”. When you create a custom Onet.xml file, you can use literal strings.

This element can also have several other attributes. For more information, see Project Element (Site).

The Project element does not contain any attribute that identifies the site definition that it defines. Each Onet.xml file is associated with a site definition by virtue of the directory path where it resides, which (except for the global Onet.xml) is %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\SiteTemplates\site_type\XML\, where site_type is the name of the site definition, such as STS or MPS. The Onet.xml file for a web template is associated with the template by virtue of being in the .wsp package for the web template.

 

NavBars Element

The NavBars element contains definitions for the top navigation area that is displayed on the home page or in list views, and definitions for the side navigation area that is displayed on the home page.

Note:
A NavBar is not necessarily a toolbar. For example, it can be a tree of links.
<NavBars>
  <NavBar 
    Name="$Resources:core,category_Top;" 
    Separator="&amp;nbsp;&amp;nbsp;&amp;nbsp;" 
    Body="&lt;a ID='onettopnavbar#LABEL_ID#' href='#URL#' accesskey='J'&gt;#LABEL#&lt;/a&gt;" 
    ID="1002" />
  <NavBar 
    Name="$Resources:core,category_Documents;" 
    Prefix="&lt;table border='0' cellpadding='4' cellspacing='0'&gt;" 
    Body="&lt;tr&gt;&lt;td&gt;&lt;table border='0' cellpadding='0' cellspacing='0'&gt;&lt;tr&gt;&lt;td&gt;&lt;img src='/_layouts/images/blank.gif' id='100' alt='' border='0'&gt;&amp;nbsp;&lt;/td&gt;&lt;td valign='top'&gt;&lt;a id='onetleftnavbar#LABEL_ID#' href='#URL#'&gt;#LABEL#&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;&lt;/td&gt;&lt;/tr&gt;" 
    Suffix="&lt;/table&gt;" 
    ID="1004" />
    ...
</NavBars>

A NavBarLink element defines links for the top or side navigational area, and an entire NavBar section groups new links in the side area. Each NavBar element specifies a display name and a unique ID for the navigation bar, and it defines how to display the navigation bar.

For information about customizing the navigation areas on SharePoint Foundation pages, see Website Navigation.

ListTemplates Element

The ListTemplates section specifies the list definitions that are part of a site definition. This markup is still supported only for backward compatibility. New custom list types should be defined as Features. The following example is taken from the Onet.xml file for the Meetings Workspace site definition.

<ListTemplates>
  <ListTemplate 
    Name="meetings" 
    DisplayName="$Resources:xml_onet_mwsidmeetingDisp;" 
    Type="200" 
    BaseType="0" 
    Unique="TRUE" 
    Hidden="TRUE" 
    HiddenList="TRUE" 
    DontSaveInTemplate="TRUE" 
    SecurityBits="11" 
    Description="$Resources:xml_onet_mwsidmeetingDesc;"
    Image="/_layouts/images/itevent.gif">
  </ListTemplate>
  <ListTemplate 
    Name="agenda" 
    DisplayName="$Resources:xml_onet_mwsidagendaDisp;" 
    Type="201" 
    BaseType="0" 
    FolderCreation="FALSE" 
    DisallowContentTypes="TRUE" 
    SecurityBits="11" 
    Description="$Resources:xml_onet_mwsidagendaDesc" 
    Image="/_layouts/images/itagnda.gif">
  </ListTemplate>
    ...
</ListTemplates>

Each ListTemplate element specifies an internal name that identifies the list definition. The ListTemplate element also specifies a display name for the list definition and whether the option to add a link on the Quick Launch bar appears selected by default in the list-creation UI. In addition, this element specifies the description of the list definition and the path to the image that represents the list definition, both of which are displayed in the list-creation UI. If Hidden="TRUE" is specified, the list definition does not appear as an option in the list-creation UI.

The ListTemplate element has two attributes for type: Type and BaseType. The Type attribute specifies a unique identifier for the list definition, and the BaseType attribute identifies the base list type for the list definition and corresponds to the Type value that is specified for one of the base list types that are defined in the global Onet.xml file.
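To see which Type and BaseType values the list templates on a live site expose, you can inspect them from Windows PowerShell. This is a minimal sketch, assuming an on-premises SharePoint farm and an existing site; the URL is a placeholder.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Placeholder URL; point this at any existing site in your farm.
$web = Get-SPWeb "http://intranet"

# Type corresponds to the ListTemplate Type attribute; BaseType to a base list type from the global Onet.xml.
$web.ListTemplates | Sort-Object Type | Format-Table Name, Type, BaseType, Hidden -AutoSize

$web.Dispose()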

For more information about creating new list types, see How to: Create a Custom List Definition.

DocumentTemplates Element

The DocumentTemplates section defines the document templates that are listed in the UI for creating a document library. This markup is still supported only for backward compatibility. You should define new document types as content types. For more information, see the Content Types section of this SDK.

<DocumentTemplates>
  ...
  <DocumentTemplate 
    Path="STS" 
    DisplayName="$Resources:core,doctemp_Word;" 
    Type="121" 
    Default="TRUE" 
    Description="$Resources:core,doctemp_Word_Desc;">
    <DocumentTemplateFiles>
      <DocumentTemplateFile 
        Name="doctemp\word\wdtmpl.dotx" 
        TargetName="Forms/template.dotx" 
        Default="TRUE" />
    </DocumentTemplateFiles>
  </DocumentTemplate>
  ...
</DocumentTemplates>

Each DocumentTemplate element specifies a display name, a unique identifier, and a description for the document template. If Default is set to TRUE, the template is the default template selected for document libraries that are created in sites based on one of the configurations in the site definition. Despite its singular name, a DocumentTemplate element actually can contain a collection of DocumentTemplateFile elements. The Name attribute of each DocumentTemplateFile element specifies the relative path to a local file that serves as the template. The TargetName attribute specifies the destination URL of the template file when a document library is created. The Default attribute specifies whether the file is the default template file.

Note:
An Onet.xml file in a web template cannot have a DocumentTemplate element.

For a development task that involves document templates, see How to: Add a Document Template, File Type, and Editing Application to a Site Definition.

BaseTypes Element

The BaseTypes element of the global Onet.xml file is used during site or list creation to define the basic list types on which all list definitions in SharePoint Foundation are based. Each list template that is specified in the list templates section is identified with one of the base types: Generic List, Document Library, Discussion Forum, Vote or Survey, or Issues List.

Note:
In SharePoint Foundation the BaseTypes section is implemented only in the global Onet.xml file, from which the following example is taken.
<BaseTypes>
  <BaseType 
    Title="Generic List" 
    Image="/_layouts/images/itgen.gif" 
    Type="0">
      <MetaData>
        <Fields>
          <Field 
            ID="{1d22ea11-1e32-424e-89ab-9fedbadb6ce1}" 
            ColName="tp_ID" 
            RowOrdinal="0" 
            ReadOnly="TRUE" 
            Type="Counter" 
            Name="ID" 
            PrimaryKey="TRUE" 
            DisplayName="$Resources:core,ID" 
            SourceID="http://schemas.microsoft.com/sharepoint/v3" 
            StaticName="ID">
          </Field>
          <Field 
            ID="{03e45e84-1992-4d42-9116-26f756012634}" 
            RowOrdinal="0" 
            Type="ContentTypeId" 
            Sealed="TRUE" 
            ReadOnly="TRUE" 
            Hidden="TRUE" 
            DisplayName="$Resources:core,Content_Type_ID;"
            Name="ContentTypeId" 
            DisplaceOnUpgrade="TRUE"
            SourceID="http://schemas.microsoft.com/sharepoint/v3" 
            StaticName="ContentTypeId" 
            ColName="tp_ContentTypeId">
          </Field>
          ...
      </Fields>
    </MetaData>
  </BaseType>
  ...
</BaseTypes>

Each BaseType element specifies the fields used in lists that are derived from the base type. The Type attribute of each Field element identifies the field with a field type that is defined in FldTypes.xml.

Caution:
Do not modify the contents of the global Onet.xml; doing so can break the installation. Base list types cannot be added. For information about how to add a list definition, see How to: Create a Custom List Definition.

Configurations Element

Each Configuration element in the Configurations section specifies the lists, modules, and Features that are created by default when the site definition configuration or web template is instantiated.

<Configurations>
  ...
  <Configuration 
    ID="0" 
    Name="Default">
    <Lists>
      <List 
        FeatureId="00BFEA71-E717-4E80-AA17-D0C71B360101" 
        Type="101" 
        Title="$Resources:core,shareddocuments_Title;" 
        Url="$Resources:core,shareddocuments_Folder;" 
        QuickLaunchUrl="$Resources:core,shareddocuments_Folder;/Forms/AllItems.aspx" />
      ...
    </Lists>
    <Modules>
      <Module 
        Name="Default" />
    </Modules>
    <SiteFeatures>
      <Feature 
        ID="00BFEA71-1C5E-4A24-B310-BA51C3EB7A57" />
      <Feature 
        ID="FDE5D850-671E-4143-950A-87B473922DC7" />
    </SiteFeatures>
    <WebFeatures>
      <Feature 
        ID="00BFEA71-4EA5-48D4-A4AD-7EA5C011ABE5" />
      <Feature 
        ID="F41CC668-37E5-4743-B4A8-74D1DB3FD8A4" />
    </WebFeatures>
  </Configuration>
  ...
</Configurations>

The ID attribute identifies the configuration (uniquely, relative to the other configurations, if any, within the Configurations element). If the Onet.xml file is part of a site definition, the ID value corresponds to the ID attribute of a Configuration element in a WebTemp*.xml file. (Web templates do not have WebTemp*.xml files.)
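This Name#ID pairing is what surfaces as a web template name in the object model and in Windows PowerShell, for example STS#0 for the Default configuration shown above. The following sketch assumes an on-premises farm where the STS definition is installed; the URL and owner account are placeholders.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Templates exposed by installed site definitions are named <SiteDefinition>#<ConfigurationID>.
Get-SPWebTemplate | Sort-Object Name | Format-Table Name, Title -AutoSize

# Create a site collection from the Default (ID="0") configuration of the STS site definition.
New-SPSite -Url "http://intranet/sites/teamdemo" -OwnerAlias "CONTOSO\admin" -Template "STS#0"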

Each List element specifies the title of the list definition and the URL where the list is created. You can use the QuickLaunchUrl attribute to set the URL of the view page to use when adding a link in the Quick Launch to a list that is created from the list definition. The value of the Type attribute corresponds to the Type attribute of a template in the list templates section. Each Module element specifies the name of a module that is defined in the modules section.

The SiteFeatures element and the WebFeatures element contain references to the site collection-scoped and web-scoped Features to include in the site definition.
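The GUIDs in SiteFeatures and WebFeatures are ordinary Feature IDs; the configuration simply activates them when a site is created. As an illustration only, the same activation could be done after the fact with Windows PowerShell. The URL below is a placeholder, and the GUID is the first one referenced in the WebFeatures element above.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Activate a web-scoped Feature on an existing site by the same ID the WebFeatures element references.
Enable-SPFeature -Identity "00BFEA71-4EA5-48D4-A4AD-7EA5C011ABE5" -Url "http://intranet/sites/teamdemo"

# Site collection-scoped Features from SiteFeatures are activated the same way against the site collection URL.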

For post-processing capabilities, use an ExecuteUrl element within a Configuration element to specify the URL that is called following instantiation of the site.

For more information about definition configurations, see How to: Create a Custom Site Definition and Configuration.

Modules Element

The Modules collection specifies a pool of modules. Any module in the pool can be referenced by a configuration if the module should be included in websites that are created from the configuration. Each Module element in turn specifies one or more files to include, often for Web Parts, which are cached in memory on the front-end web server along with the schema files. You can use the Url attribute of the Module element to provision a folder as part of the site definition. This markup is supported only for backward compatibility. New modules should be incorporated into Features.

<Modules>
    <Module 
      Name="Default" 
      Url="" 
      Path="">
      <File 
        Url="default.aspx" 
        NavBarHome="True">
        <View 
          List="$Resources:core,lists_Folder;
          /$Resources:core,announce_Folder;" 
          BaseViewID="0" 
          WebPartZoneID="Left" />
        <View 
          List="$Resources:core,lists_Folder;
          /$Resources:core,calendar_Folder;" 
          BaseViewID="0" 
          RecurrenceRowset="TRUE" 
          WebPartZoneID="Left" 
          WebPartOrder="2" />
        <AllUsersWebPart 
          WebPartZoneID="Right" 
          WebPartOrder="1"><![CDATA[<WebPart 
            xmlns="http://schemas.microsoft.com/WebPart/v2"
            xmlns:iwp="http://schemas.microsoft.com
            /WebPart/v2/Image">
            <Assembly>Microsoft.SharePoint, Version=12.0.0.0, 
              Culture=neutral, 
              PublicKeyToken=71e9bce111e9429c</Assembly>
            <TypeName>Microsoft.SharePoint.WebPartPages.ImageWebPart
            </TypeName>
            <FrameType>None</FrameType>
            <Title>$Resources:wp_SiteImage;</Title>
            <iwp:ImageLink>/_layouts/images/homepage.gif
            </iwp:ImageLink>
            <iwp:AlternativeText>$Resources:core,sitelogo_wss;
            </iwp:AlternativeText>
            </WebPart>]]>
        </AllUsersWebPart>
        <View 
          List="$Resources:core,lists_Folder;
          /$Resources:core,links_Folder;" 
          BaseViewID="0" 
          WebPartZoneID="Right" 
          WebPartOrder="2" />
          <NavBarPage 
            Name="$Resources:core,nav_Home;" 
            ID="1002" 
            Position="Start" />
          <NavBarPage 
            Name="$Resources:core,nav_Home;" 
            ID="0" 
            Position="Start" />
      </File>
    </Module>
  ...
</Modules>

The Module element specifies a name for the module, which corresponds to a module name that is specified within a configuration in Onet.xml.

The Url attribute of each File element in a module specifies the name of a file to create when a site is created. When the module includes a single file, such as default.aspx, NavBarHome="TRUE" specifies that the file will serve as the destination page for the Home link in navigation bars. The File element for default.aspx also specifies the Web Parts to include on the home page and information about the home page for other pages that link to it.

A Module element can only be in an Onet.xml file that is part of a site definition, not in an Onet.xml file that is part of a web template.

For more information about using modules in SharePoint Foundation, see How to: Provision a File.

Components Element

The Components element specifies components to include in sites that are created through the definition.

<Components>
  <FileDialogPostProcessor ID="BDEADEE4-C265-11d0-BCED-00A0C90AB50F" />
</Components>

A Components element can only be included in an Onet.xml file that is part of a site definition, not in an Onet.xml file that is part of a web template.

ServerEmailFooter Element

The ServerEmailFooter element specifies the footer section used in email that is sent from the server.

<ServerEmailFooter>$Resources:ServerEmailFooter;</ServerEmailFooter>

A ServerEmailFooter element can only be included in an Onet.xml file that is part of a site definition, not in an Onet.xml file that is part of a web template.

A Look At : Application Management and Governance in SharePoint 2013

Summary: Learn how to govern applications for SharePoint 2013 by creating a customization policy and understanding the app model, branding, and life-cycle management.


How will you manage the applications that are developed for your environment? What customizations do you allow in your applications, and what are your processes for managing those applications?

 

For effective and manageable applications, your organization should consider the following:

  • Customization policy   SharePoint 2013 includes customizable features and capabilities that span multiple product areas, such as business intelligence, forms, workflow, and content management. Customization can introduce risks to the stability, maintenance, and security of the environment. To support customization while controlling its scope, you should develop a customization policy.
  • Life-cycle management   Follow best practices to manage applications and keep your environments in sync.
  • Branding   If you are designing an information architecture and a set of sites to use across an organization, consider including branding in your governance plan. A formal set of branding policies helps ensure that sites consistently use enterprise imagery, fonts, themes, and other design elements.
  • Solutions or apps for SharePoint?   Decide whether a solution or an app for SharePoint would be the best choice for specific customizations.

Get developer guidance about customizing and branding SharePoint 2013 on MSDN: Build sites for SharePoint 2013.

This article is part of a set of articles about governance.

The What is governance? poster gives a summary of this content. Download the PDF version or Visio version, or Zoom into the model in full detail with Zoom.it from Microsoft.

Determine the types of customizations you want to allow and how to manage them. Your customization policy should include:

  • Service-level descriptions   What are the parameters for supporting and managing customizations in your environments? See Service-level agreements.
  • Guidelines for updating customizations   How do you manage changes to customizations, and how do you roll out those changes to your environments? Consider ways to manage source code, such as a source control system and standards for documenting the code.
  • Processes for analyzing   How do you understand whether a particular customization is working well in your environment, or how do you decide which ones to create, change, or retire?
  • Approved tools for customization   Consider development standards, such as coding best practices and the tools that you will use across your organization. For example, you should decide whether to allow the use of SharePoint Designer 2013 and Design Manager, and specify which site elements can be customized and by whom.
  • Process for piloting and testing customizations   How do you test and deploy customizations? How many people should be in a pilot testing group? What are your standards for testing and validating customizations?
  • Who is responsible for ongoing support   Who will be responsible for supporting customizations in your environments—individual teams or a central group?
  • Guidelines for packaging and deploying customizations   Do you have individual packages for each, or do you include several in a feature or solution? Which customizations should be apps for SharePoint instead of solutions? How do you ensure that customizations in one environment do not affect the rest of your SharePoint implementation?
  • Specific policies regarding each potential type of customization   What types of customizations do you allow?

    For more information about kinds of customizations and their potential risks, see the Customizations table later in this article. For more information about processes for managing customizations, see the white paper SharePoint Products and Technologies customization policy. Most of this content still applies to SharePoint 2013.

  • Policies around using the App Catalog and SharePoint Store   Which apps for SharePoint do you want to make available to your organization? Can users purchase apps directly? See Solutions or apps for SharePoint? later in this article for more information.

The highly customizable design of SharePoint products enables you to provide the look, behavior, or functionality that meets your business needs. Customizations can introduce risk to your environment, whether that risk is to the environment’s performance, availability, or supportability. Conversely, a “no customizations” policy severely restricts your organization’s ability to take advantage of the SharePoint platform.

Not all customizations are the same. You must decide carefully which kinds of customizations to allow in your environment. You must ensure the customizations support the performance, availability, and supportability you want for your environment. Your governance policy should balance a level of acceptable risk against the business needs for your organization.

What is considered a customization? All of the following are considered kinds of customizations in SharePoint products:

  • Configuration   Using the SharePoint user interface to configure SharePoint products.
  • Branding   Changing logos, styles, colors, master pages and page layouts, and so on to create a custom look for your SharePoint sites. See more about branding.
  • Custom code   Using developer tools to add or change functionality in SharePoint products or to interact with other applications. Risk can vary depending on the kind of functionality and the level of trust (full-trust solutions should rarely be used; consider apps for SharePoint first).
    Tip:
    Sandboxed solutions are deprecated in this release, so they are not the best option for custom code in the long term.

Some customizations have very little risk or impact on your environment. Others have the potential for much higher risk and impact. The following table provides examples of different kinds of customizations, the risk level associated with that kind of customization, and potential issues that you might face if you allow that kind of customization.

Customizations

Risk level | Types of customizations and examples | Considerations or impact
Unsupported/High | Unsupported customizations such as direct changes to the database schema or modifying files on the file system. | Will not be supported through Microsoft Customer Support; will be unable to upgrade. Do not use.
Moderate to high | Creating applications that interact with or redirect actions in key pipelines, such as events, claims, and so on. | Potential for service outage or performance issues; might require rework at upgrade.
Moderate to low | Using a custom Web Part outside a sandbox environment, creating custom actions such as adding a menu item, or creating a custom site provisioning process. | Short or long-term performance issues or page errors; might require rework at upgrade.
Low | Using solutions in a sandbox environment. | Short-term performance issues; you can avoid some performance issues by using resource throttling and quotas.
Very low to no risk | Using apps for SharePoint or using functionality within the product or configurations, such as associating a workflow with a list or using an instance of a built-in Web Part. | Minor configuration or page errors that would have to be addressed. Apps can be uninstalled or updated.
Note:
For more information about customizations and upgrade, see Considerations for specific customizations.

 

 

Also, when you think through the customizations to allow in your environment, consider carefully whether a particular customization is necessary. If it recreates functionality that is already available in the product (such as creating a Web Part that does the same thing as the Content Editor Web Part or the Content by Query Web Part), then that might be unnecessary work.

Consider first whether the standard functionality can do what you want, or check the SharePoint Store to see if there is an app for SharePoint available that does what you need.

Follow these best practices to manage applications based on SharePoint 2013 throughout their life cycle:

  • Use separate development, preproduction, and production environments, and keep these environments as synchronized as possible so that you can accurately test your customizations.
  • Test all customizations before the first release, and test again after any updates, before you release them to your production environment.
  • Use source code control and solution and feature versioning to track changes to code (see the deployment sketch after this list).
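The deployment sketch below shows one way the versioning point can look in practice for a farm solution. It is only a hedged example; the package name, file paths, and web application URL are placeholders.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Initial deployment of a versioned farm solution package.
Add-SPSolution -LiteralPath "C:\Builds\contoso.intranet.wsp"
Install-SPSolution -Identity "contoso.intranet.wsp" -GACDeployment -WebApplication "http://intranet"

# Rolling out a new build of the same solution; existing Feature activations stay in place.
Update-SPSolution -Identity "contoso.intranet.wsp" -LiteralPath "C:\Builds\contoso.intranet.v2.wsp" -GACDeployment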

Development, test, and production environments

Consistent branding with a corporate style guide makes for more cohesive-looking sites and easier development. Store approved themes in the theme gallery for consistency so that users will know when they visit the site that they are in the right place.

SharePoint 2013 includes a new feature to use for branding, Design Manager. By using Design Manager, you can create a visual design for your website with whatever web design tool or HTML editor you prefer and then upload that design into SharePoint. Design Manager is the central hub and interface where you manage all aspects of a custom design.

Creating the visual design of a site often fits into a larger process, in which multiple people or organizations are involved. For a roadmap of the tasks from a larger perspective, see Design and branding in SharePoint 2013.

SharePoint 2013 has a new development model based on apps for SharePoint. Apps for SharePoint are self-contained pieces of functionality that extend the capabilities of a SharePoint website. An app may include SharePoint features such as lists, workflows, and site pages, but it can also use a remote web application and remote data in SharePoint. An app has few or no dependencies on any other software on the device or platform where it is installed, other than what is built into the platform. Apps have no custom code that runs on the SharePoint servers.

The guidance for whether to use apps for SharePoint or SharePoint solutions is to:

  • Design apps for end users

    Apps for SharePoint:

    • Are easy for users (tenant administrators and site owners) to discover and install.
    • Use safe SharePoint extensions.
    • Provide the flexibility to develop future upgrades.
    • Can integrate with cloud-based resources.
    • Are available for both SharePoint Online and on-premises SharePoint sites.
  • Use farm solutions for administrators

    SharePoint solutions:

    • Can access the server-side object-model APIs that are needed to extend SharePoint management, configuration, and security.
    • Can extend Central Administration, Windows PowerShell cmdlets, timer jobs, custom backups, and so on.
    • Are installed by administrators.
    • Can have farm, web application, or site-collection scope.

Go to MSDN to get more information about the new development model, Apps for SharePoint compared with SharePoint solutions, and Deciding between apps for SharePoint and SharePoint solutions.
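In contrast to the farm solution deployment shown earlier, an app for SharePoint is uploaded and installed per site. The following is a rough sketch under the assumption that the farm is already configured for apps; the package path and site URL are placeholders.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$web = Get-SPWeb "http://intranet/sites/teamdemo"

# Upload the .app package to the site; ObjectModel marks it as installed outside the store or catalog.
$app = Import-SPAppPackage -Path "C:\Builds\ContosoApp.app" -Site $web.Site -Source ([Microsoft.SharePoint.Administration.SPAppSource]::ObjectModel)

# Install the uploaded app into the web; installation completes asynchronously.
$instance = Install-SPApp -Web $web -Identity $app
$instance.Status

$web.Dispose()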

Set a policy for using apps for SharePoint in your organization. Can users purchase and download apps? How do you make your organization’s apps available? How do you tell if they’re being used?

  • SharePoint Store   Determine whether users can purchase or download apps from the SharePoint Store.
  • App Catalog   Make specific apps for SharePoint available to your users by adding them to the App Catalog (see the provisioning sketch after this list).
  • App requests   Configure app requests to control which apps are purchased and how many licenses are available.
  • Monitor apps   Monitor specific apps in SharePoint Server 2013 to check for errors and to track usage.
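As a rough illustration of the App Catalog point above, an on-premises web application gets its catalog by creating a site collection from the App Catalog template and registering it. The URLs, accounts, and app domain below are placeholders, and the app domain and prefix only have to be set once per farm.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# One-time farm configuration for hosting apps (placeholder domain and prefix).
Set-SPAppDomain "contosoapps.com"
Set-SPAppSiteSubscriptionName -Name "app" -Confirm:$false

# Create the App Catalog site collection and register it for its web application.
$catalog = New-SPSite -Url "http://intranet/sites/appcatalog" -OwnerAlias "CONTOSO\admin" -Template "APPCATALOG#0" -Name "App Catalog"
Update-SPAppCatalogConfiguration -Site $catalog

# Check which apps are installed on a given site (useful for the monitoring point above).
Get-SPAppInstance -Web "http://intranet/sites/teamdemo" | Select-Object Title, Status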


Latest SharePoint 2013 Resources

Introduction


Best practices are, and rightfully so, always a much sought-after topic. There are various kinds of best practices:

 

•Microsoft best practices. In real life, these are the most important ones to know, as most companies implementing SharePoint have a tendency to follow as many of these as they possibly can. Independent consultants doing architecture and code reviews will certainly take a look at these as well. In general, you can safely say that best practices endorsed by Microsoft have an added bonus, and it will be mentioned whenever this is the case.

 
•Best practices. These practices are patterns that have proven themselves over and over again as a way to achieve high-quality solutions, and it’s completely irrelevant who proposed them. Often MS best practices will also fall into this category. In real life, these practices should be the most important ones to follow.

 
•Practices. These are just approaches that are reused over and over again, but not necessarily the best ones. Wikis are a great way to discern best practices from practices. It’s certainly possible that this page refers to these “Practices of the 3rd kind”, but hopefully, the SharePoint community will eventually filter them out. Therefore, everybody is invited and encouraged to actively participate in the various best practices discussions.
This Wiki page contains an overview of SharePoint 2013 Best Practices of all kinds, divided by categories.

Performance

This section discusses best practices regarding performance issues.
•http://gallery.technet.microsoft.com/The-SharePoint-Flavored-5b03f323     , the SharePoint Flavored Weblog Reader (SFWR) helps troubleshoot performance problems by analyzing the IIS log files of SharePoint WFEs.
•http://gallery.technet.microsoft.com/office/PressurePoint-Dragon-for-87572ee1   , PressurePoint Dragon for SharePoint 2013 helps execute performance tests.
•http://gallery.technet.microsoft.com/Maxer-for-SharePoint-2013-52208636     , a tool for checking capacity planning limits.
•http://gallery.technet.microsoft.com/Ping-Dragon-for-SharePoint-70fb299e   , a command line tool for pinging SharePoint and getting the response time of a SharePoint page.
•http://gallery.technet.microsoft.com/WinPing-Dragon-for-eefb6dd3   , a WPF client for pinging SharePoint and getting the response time of a SharePoint page.
•http://social.technet.microsoft.com/wiki/contents/articles/16218.sharepoint-2013-best-practices-in-depth-performance-counters.aspx , in depth info about performance counters relevant to SharePoint 2013.
•http://technet.microsoft.com/en-us/library/ff758658.aspx   , TechNet performance monitoring tips.
•http://www.iis.net/downloads/community/2007/05/wcat-63-(x64)   , the Web Capacity Analysis Tool (WCAT) is a lightweight HTTP load generation tool to measure the performance of a web server. Used by MS support in various capacity analysis plans.
•Improve SharePoint Speed by fixing an SSL Trust Issue,  http://sharepoint-community.net/profiles/blogs/how-to-improve-speed-on-sharepoint-2013
•http://technet.microsoft.com/en-us/library/cc262813.aspx   , Large Lists.
•http://technet.microsoft.com/en-us/library/hh395916.aspx   , Estimating performance and capacity.

SharePoint Server 2013 Build Numbers

 

Version | Build # | Type | Server package (KB) | Foundation package (KB) | Language specific | Notes
Public Beta Preview | 15.0.4128.1014 | Beta | n/a | n/a | yes | Known issues
SPS 2013 RTM | 15.0.4420.1017 | RTM | n/a | n/a | yes | Setup, Install
Dec. 2012 Fix | 15.0.4433.1506 | update | 2752058, 2752001 | n/a | yes | Known Issue
March 2013 | 15.0.4481.1005 | PU | 2767999 | 2768000 | global | New Baseline
April 2013 | 15.0.4505.1002 | CU | – | 2751999 | global | Known Issue
April 2013 | 15.0.4505.1005 | CU | 2726992 | – | global | Known Issue
June 2013 | 15.0.4517.1003 | CU | – | 2817346 | global | Known Issue 1, Known Issue 2
June 2013 | 15.0.4517.1005 | CU | 2817414 | – | global | Known Issue 1, Known Issue 2
August 2013 | 15.0.4535.1000 | CU | 2817616 | 2817517 | global | –
October 2013 | 15.0.4551.1001 | CU | – | 2825674 | global | –
October 2013 | 15.0.4551.1005 | CU | 2825647 | – | global | –
December 2013 | 15.0.4551.1508 | CU | – | 2849961 | global | –
December 2013 | 15.0.4551.1511 | CU | 2850024 | – | global | see KB
Feb. 2014 – skipped! | n/a | – | – | – | – | –
SP1, released Apr. 2014 | 15.0.4569.1000 (15.0.4569.1506) | SP | 2817429 | 2880552 | yes | Re-released SP
SP1, re-released Apr. 2014 | 15.0.4569.1509; fixed build: 15.0.4571.1502 | SP | 2817439 | 2760625 (Fix), 2880551 (Current) | yes | Known Issue; Re-released SP
April 2014 | 15.0.4605.1004 | CU | 2878240 | 2863892 | global | Known Issue
MS14-022 | 15.0.4615.1001 | PU | 2952166 | 2952166 | n/a | Security fix
June 2014 | 15.0.4623.1001 | CU | 2881061 | 2881063 | global | n/a

reference: http://blogs.technet.com/b/steve_chen/archive/2013/03/26/3561010.aspx

Feature Overview

This section discusses best places to get SharePoint feature overviews.
•http://www.apps4rent.com/sharepoint-2013-features-comparison.html   , nice feature comparison.
•http://technet.microsoft.com/en-us/library/jj819267.aspx   , extensive SharePoint Online overview.
•http://technet.microsoft.com/en-us/library/ff607742(v=office.15).aspx   , deprecated features.
•http://www.andrewconnell.com/blog/archive/2013/01/11/sharepoint-2013-amp-office-365-feature-matrixndashan-easier-way-to.aspx   , matrix overview.
•http://www.rharbridge.com/www.rharbridge.com/?page_id=966   , nice overview including SharePoint 2013, 2010, 2007, and Office 365.
•http://www.fpweb.net/sharepoint-hosting/2013/compare-sharepoint-server-standard-enterprise/   , 2013 standard vs enterprise.
•http://www.khamis.net/Blog/Post/275/SharePoint-2013-Standard-vs–Enterprise-vs–Foundation-Feature-Comparison-Matrix  , 2013 standard vs enterprise vs foundation.
•http://blog.blksthl.com/2013/01/14/sharepoint-2013-feature-comparison-chart-all-editions/#SIT   , overview of all 2013 versions.

Capacity Planning
•http://technet.microsoft.com/en-us/library/cc261834.aspx   , excellent planning resource.
•http://technet.microsoft.com/en-us/library/cc263199.aspx   , overview of various technical diagrams.
•http://technet.microsoft.com/en-us/library/jj219628.aspx#HW_Enterprise   , info about scaling search.
•http://technet.microsoft.com/en-us/library/cc262787.aspx   , capacity boundaries.

Installation

This section discusses installation best practices.
•http://social.technet.microsoft.com/wiki/contents/articles/15289.sharepoint-2013-best-practices-creating-a-development-environment.aspx , provides a detailed explanation how to create a SharePoint 2013 development environment.
•http://technet.microsoft.com/en-us/library/cc262749.aspx   , system requirements overview.
•http://technet.microsoft.com/en-us/library/ee662513.aspx   , provides an overview of the administrative and service accounts you need for a SharePoint 2013 installation.
•http://technet.microsoft.com/en-us/library/cc678863.aspx   , describes SharePoint 2013 administrative and service account permissions for SQL Server, the File System, File Shares, and Registry entries.
•http://social.technet.microsoft.com/wiki/contents/articles/14500.sharepoint-2013-best-practices-service-accounts.aspx , naming conventions and permission overview for service accounts.
•http://www.slideshare.net/michaeltnoel/spcsea-2013-upgrading-to-sharepoint-2013  , a methodical approach to upgrading to SharePoint 2013.
•http://autospinstaller.codeplex.com/   , Automated SharePoint 2010/2013 installation using PowerShell and XML configuration.
•http://autospinstallergui.codeplex.com/   , GUI tool for configuring the AutoSPInstaller configuration XML.
•http://social.technet.microsoft.com/wiki/contents/articles/16343.sharepoint-2013-best-practices-setting-up-a-dev-environment-for-windows-apps-and-sharepoint.aspx , describes how to set up a dev environment needed for creating Windows Apps that leverage SharePoint.
•http://technet.microsoft.com/en-us/library/jj658588.aspx   , installing workflows.
•Install SharePoint 2013 on a single server with SQL Server
•Install SharePoint 2013 on a single server with a built-in database
•Install SharePoint 2013 across multiple servers for a three-tier farm
•Install and configure a virtual environment for SharePoint 2013
•Install or uninstall language packs for SharePoint 2013
•Add web or application servers to farms in SharePoint 2013
•Add a database server to an existing farm in SharePoint 2013
•Remove a server from a farm in SharePoint 2013
•Uninstall SharePoint 2013

Upgrade and Migration

This section discusses how to upgrade to SharePoint 2013 from a previous version.
•http://blogs.msdn.com/b/russmax/archive/2013/04/01/why-sharepoint-2013-cumulative-update-takes-5-hours-to-install.aspx?CommentPosted=true#commentmessage   Why SharePoint 2013 Cumulative Update takes 5 hours to install, improve CU (patch) Installation times from 5 hours to 30 mins.
•http://social.technet.microsoft.com/wiki/contents/articles/15743.sharepoint-2013-best-practices-upgrading-from-sharepoint-2007.aspx discusses best practices for upgrading from SharePoint 2007 to 2013.
•http://social.technet.microsoft.com/wiki/contents/articles/16033.sharepoint-2013-best-practices-migrate-from-sharepoint-foundation-2013-to-sharepoint-server-2013.aspx , upgrade SharePoint Foundation 2013 to SharePoint Server 2013.
•http://technet.microsoft.com/en-us/library/cc262483.aspx   , SharePoint 2010 to 2013.
•http://technet.microsoft.com/en-us/library/cc303436.aspx   , upgrade databases from SharePoint 2010 to 2013.
•http://www.google.nl/url?sa=t&rct=j&q=download%20proven%20practices%20for%20upgrading%20or%20migrating%20to%20sharepoint%202013&source=web&cd=1&ved=0CEgQFjAA&url=http%3A%2F%2Feu.avepoint.com%2Fassets%2Fpdf%2Fwhite-papers%2Femea%2FSharePoint-2013-Migration-White-Paper.pdf&ei=L2FRUdPHJoqX1AWy44CgBw&usg=AFQjCNHA6Iuoigex0xyHb-EuPdBDIiLrhw&bvm=bv.44158598,d.d2k   , PDF document containing extensive info about Proven Practices for Upgrading or Migrating to SharePoint 2013.
•http://technet.microsoft.com/en-us/library/ee947141.aspx   , upgrade from sharepoint 2007 or wss 3 to sharepoint 2013.

Infrastructure

This section discusses infrastructure best practices.
•http://technet.microsoft.com/en-us/library/cc263199(v=office.15)   , infrastructure diagrams.
•http://social.technet.microsoft.com/wiki/contents/articles/16180.sharepoint-2013-best-practices-dealing-with-geographically-dispersed-locations.aspx , dealing with geographically dispersed locations.

Backup and Recovery
This section deals with best practices about the backup and restore of SharePoint environments.
•http://technet.microsoft.com/en-us/library/ee663490.aspx   , general overview of backup and recovery.
•http://technet.microsoft.com/en-us/library/ee428315.aspx   , back-up solutions for specific parts of SharePoint.
•http://www.slideshare.net/thomasvochten/sharepoint-high-availability-disaster-recovery   , good info about disaster recovery.
•http://technet.microsoft.com/en-us/library/cc748824.aspx   , high availability architectures.
•http://social.technet.microsoft.com/wiki/contents/articles/17195.sharepoint-2013-best-practices-back-up-sharepoint-online.aspx , how to back up SharePoint online?

Database
•http://technet.microsoft.com/en-us/library/cc678868.aspx   , great resource about SharePoint databases.
•http://technet.microsoft.com/en-us/library/ff851878.aspx   , removing ugly GUIDs from SharePoint database names.

Implementation and Maintenance

This section deals with best practices about implementing SharePoint.
•http://social.technet.microsoft.com/wiki/contents/articles/6575.ten-steps-to-a-successful-sharepoint-implementation-en-us.aspx explains how to implement SharePoint.
•http://technet.microsoft.com/en-us/library/ff851878.aspx   , rename service applications.

Apps

This section deals with best practices regarding SharePoint Apps.
•http://technet.microsoft.com/en-us/library/fp161237(v=office.15).aspx   , great resource for planning Apps.
•http://msdn.microsoft.com/en-us/library/jj163230.aspx  ,  a resource for building apps for SharePoint.
•http://msdn.microsoft.com/en-us/library/jj163264.aspx   , Best practices and design patterns for app license checking.

Every day use
•http://social.technet.microsoft.com/wiki/contents/articles/16166.sharepoint-2013-best-practices-using-folders.aspx , using folders
•http://social.technet.microsoft.com/wiki/contents/articles/17829.sharepoint-2013-going-up-in-the-navigation.aspx , discusses options for navigating up
•http://social.technet.microsoft.com/wiki/contents/articles/17997.sharepoint-2013-best-practice-choosing-between-a-choice-lookup-or-taxonomy-managed-metadata-column.aspx , discusses best practices for choosing between choice, lookup or taxonomy column

Add-ons

This section deals with useful SharePoint add-ons.
•http://www.infragistics.com/products/sharepoint/  , a collection of web parts for an enterprise dashboard.
•http://harmon.ie/Products/Mobile  , an app for iPhone/iPad that enhances mobile access to SharePoint documents.

Development
This section covers best practices targeted towards software developers.
•http://social.technet.microsoft.com/wiki/contents/articles/13373.sharepoint-2013-what-to-do-farm-solution-vs-sandbox-vs-app.aspx , discusses when to use farm solutions, sandbox solutions, or SharePoint apps.
•http://social.technet.microsoft.com/wiki/contents/articles/13637.sharepoint-2013-best-practices-what-client-api-should-you-choose-when-building-apps.aspx , guidelines to help you pick the correct client API to use with your app.
•http://msdn.microsoft.com/en-us/library/jj164060(v=office.15).aspx   , guidelines to help you pick the correct client API for your SharePoint solution.
•http://social.technet.microsoft.com/wiki/contents/articles/16343.sharepoint-2013-best-practices-setting-up-a-dev-environment-for-windows-apps-and-sharepoint.aspx , describes how to set up a dev environment needed for creating Windows Apps that leverage SharePoint.
•http://social.technet.microsoft.com/wiki/contents/articles/16353.sharepoint-2013-best-practices-working-with-connection-strings-in-auto-hosted-sharepoint-apps.aspx , discusses how to deal with connection strings in auto-hosted apps.

Debugging

This section contains debugging tips for SharePoint.
•Use WireShark to capture traffic on the SharePoint server.
•Use a Text Differencing tool to compare if web.config files on WFEs are identical.
•Use Fiddler to monitor web traffic using the People Picker. This will provide insight into how to use the People Picker for custom development. Please note: the client People Picker web service interface is located in SP.UI.ApplicationPages.ClientPeoplePickerWebServiceInterface.

Troubleshooting
•Troubleshooting Office Web Apps
•http://social.technet.microsoft.com/wiki/contents/articles/16640.sharepoint-2013-tips-for-troubleshooting-search-suggestions.aspx , troubleshooting search suggestions.
•http://technet.microsoft.com/en-us/library/jj906556.aspx   , troubleshooting claims authentication.
•http://technet.microsoft.com/en-us/library/dn169566.aspx   , troubleshooting fine grained permissions.
•http://social.technet.microsoft.com/Forums/sharepoint/en-US/02b78299-bc7f-448b-b233-f9cae0da8466/sharepoint-2013-alerts-are-not-firing-any-mails-for-the-normal-alerts-and-search-alerts-can-someone , troubleshooting email alerts.

Farms

This section discusses best practices regarding SharePoint 2013 farm topologies.
•Office Web Apps topologies
•How to configure SharePoint Farm
•How to install SharePoint Farm
•Overview of farm virtualization and architectures

Accessibility

This section discusses SharePoint accessibility topics.
•http://office.microsoft.com/en-us/sharepoint-foundation-help/keyboard-shortcuts-for-sharepoint-products-HA102772894.aspx   , shortcuts for SharePoint.
•http://technet.microsoft.com/en-us/library/ff852108.aspx   , conformance statement A-level (WCAG 2.0).
•http://technet.microsoft.com/en-us/library/ff852107.aspx   , conformance statement AA-level (WCAG 2.0).

Top 10 Blogs to Follow
It’s certainly a best practice to keep up to date with the latest SharePoint news. Therefore, a top 10 of blog suggestions to follow is included.
1.Corey Roth at http://www.dotnetmafia.com/blogs/dotnettipoftheday/
2.Jeremy Thake at http://jeremythake.com
3.Nik Patel at http://nikspatel.wordpress.com/
4.Yaroslav Pentsarskyy at http://www.sharemuch.com/
5.Giles Hamson at http://spandps.com/author/ghamson/
6.Danny Jessee at http://www.dannyjessee.com/blog/
7.Marc D Anderson at http://sympmarc.com/
8.Andrew Connell at http://www.andrewconnell.com/blog
9.Geoff Evelyn at http://www.sharepointgeoff.com/
10.http://sharepointdragons.com/   , Nikander & Margriet on SharePoint.

Recommended SharePoint Related Tools

What to put in your bag of tools?
1.http://gallery.technet.microsoft.com/The-SharePoint-Flavored-5b03f323    , the SharePoint Flavored Weblog Reader (SFWR) helps troubleshoot performance problems by analyzing the IIS log files of SharePoint WFEs.
2.http://gallery.technet.microsoft.com/PressurePoint-Dragon-for-87572ee1   , PressurePoint Dragon for SharePoint 2013 helps execute performance tests.
3.http://gallery.technet.microsoft.com/Maxer-for-SharePoint-2013-52208636   , a tool for checking capacity planning limits.
4.http://visualstudiogallery.msdn.microsoft.com/36a6eb45-a7b1-47c3-9e85-09f0aef6e879    , Muse.VSExtensions, a great tool for referencing assemblies located in the GAC.
5.http://www.quest.com/powergui-freeware/   , helps with all your PowerShell development. In a SharePoint environment, there usually will be some.
6.http://powerguivsx.codeplex.com/   , Visual Studio extension based on PowerGUI that adds PowerShell IntelliSense support to Visual Studio.
7.http://visualstudiogallery.msdn.microsoft.com/4784e790-32f4-455f-9228-53f537c03787   , FishBurn Systems provides some sort of CKSDev lite for VS.NET 2012/SharePoint 2013. Very useful.
8.http://visualstudiogallery.msdn.microsoft.com/6ed4c78f-a23e-49ad-b5fd-369af0c2107f   , web extensions make creating CSS in VS.NET a lot easier and supports CSS generation for multiple platforms.
9.http://technet.microsoft.com/en-us/library/cc508851  , the SharePoint 2010 Administration Toolkit (works on 2013).
10.http://clumsyleaf.com/products/cloudxplorer   , a great tool when you’ve installed your SharePoint farm on Azure.

Training

If you want to learn about SharePoint 2013, there are valuable resources out there to get started.
•http://technet.microsoft.com/en-us/sharepoint/fp123606.aspx   , basic training for IT Pros.
•http://www.microsoft.com/en-us/download/details.aspx?id=35396   , free eBook.
•www.MicrosoftVirtualAcademy.com   , great resource with advanced online and interactive sessions.
•http://technet.microsoft.com/en-us/library/gg609831.aspx   , at the end there’s a nice overview of training resources.

See Also
•SharePoint 2013 Portal
•SharePoint 2013 – Service Applications
•SharePoint 2013 – Resources for Developers
•SharePoint 2013 – Resources for IT Pros

 

How To : TFS Template Migration


Did you know that you can quite easily do a TFS process template migration? Did you notice I used the “quite” in there? If you think of the Process Template as the blueprint, then the Team Project that you create is the concrete instance of that blueprint.

Warning: naked ALM Consulting provides no warranties of any type, nor accepts any blame for things you do to your servers in your environments. We will, however, at our standard consulting rates, provide best efforts to help you resolve any issues that you encounter.

I have written on this topic before, however it is always worth refreshing it as I discover more every time I do an update. My current customer is wanting to move from a frankintemplate (a mishmash of MSF for Agile Software Development and MSF for CMMI Process Improvement) to a more vanilla Visual Studio Scrum template. In this case it is a 2010 server with 4.x templates moving to the 2013.3 Scrum template (downloaded from VSO).

There are five simple steps that we need to follow:

  1. Select – Pick the process template that is closest to where you want to be (I recommend the Scrum template in all scenarios)
  2. Customise – Re-implement any customisations that you made to your old template in the new one, taking into account advances in design, new features, and implementation. You may need to have duplicate fields to access old data.
  3. Import – simply overwrite the existing work item types, categories, and process with your new one.
    note: if you are changing the names of Work Items (for example User Story or Requirement to Product Backlog Item) then you should do this before you do the import.
    note: Make sure that you backup your existing work item types by exporting them from your project.
  4. Migrate data – Push some data around… for example, the Stack Rank field is now Backlog Priority and the Story Points field is now Effort. You may also have that DescriptionHTML field from 2010 that you will want to get rid of.
  5. Housekeeping – if you had to keep some old fields to migrate data you can now remove them

While it is simple, depending on the complexity and customisation of your process, you want to get #2 right to move forward easily. Indeed you are effectively committed when you hit #3. If it is so easy why can’t it be scripted, I hear you shout? Well you can and I have, however I always run the script carefully block by block so that there are no mistakes. Indeed I have configured the script so that I can tweak the xml of the template and only re-import the bits that have changed. This is the script I use for #3.

# Connection and template location. These values are placeholders for your environment.
$CollectionUrl = "http://myserver:8080/tfs/DefaultCollection"
$TeamProjectName = "myTeamProject"
$ProcessTemplateRoot = "C:\MyProcessTemplate"

The first part is to get the variables in there. There are a bunch of things that we need in place, such as the Collection URL and the name of your Team Project, that we will use over and over again.

# Make sure we only run what we need
[datetime]$lastImport = [datetime]::MinValue
$UpdateFilePath = ".\UpdateTemplate.txt"
if (Test-Path $UpdateFilePath) {
    $UpdateFile = Get-Item $UpdateFilePath
    $lastImport = $UpdateFile.LastWriteTime
}
Write-Output "Last Import was $lastImport"

Then I do a little trick with the date. I try to load the last date and time that the script was run from a file and set a default if it does not exist. This will allow me to test to see if we have been tweaking the template and only update the latest tweaks. I generally use this heavily in my dev/test cycle when I am building out the template. I tend to create an empty project to hold my process template definition within Visual Studio so that I get access to easy source control and can hook this script up to the build button. If I was doing this for a large company I would also hook up to Release Management and create a pipeline that I can push my changes through and get approvals from the right people in there.

$WitAdmin = "${env:ProgramFiles(x86)}\Microsoft Visual Studio 12.0\Common7\IDE\witadmin.exe"
$Tfpt = "${env:ProgramFiles(x86)}\Microsoft Visual Studio 2013 Power Tools\tfpt.exe"  # variable name assumed; not used below

Next I configure the tools that I am going to use. This is very version specific, with the above only working on a computer with 2013 editions of the product installed. Although I am only using the $WitAdmin variable, I keep the rest around so that I remember where they are.

& $WitAdmin renamewitd /collection:$CollectionUrl /p:$TeamProjectName /n:"User Story" /new:"Product Backlog Item"
& $WitAdmin renamewitd /collection:$CollectionUrl /p:$TeamProjectName /n:"Issue" /new:"Impediment"

Once, and only once, I will run the rename command for data stored in a work item type that I want to keep. For example, if I am moving from the Agile to Scrum templates I will rename “User Story” to “Product Backlog Item” and “Issue” to “Impediment”. The only hard part here is if you have ended up with more than one work item type that means the same thing, as you can’t merge types easily or gracefully.

Note: If you do need to merge data you have a couple of options; a) ‘copy’ each work item to the new type. This is time consuming and manual. Suitable for less than fifty work items; b) export to Excel and then import as the new type. This leaves everything in the new state and you manually have to walk the workflow. Suitable for less than two hundred work items; c) Spin up the TFS Integration Tools. Pain and suffering this way lies. Greater than a thousand work items only.

$lts = Get-ChildItem "$ProcessTemplateRoot\WorkItem Tracking\LinkTypes" -Filter "*.xml"
foreach ($lt in $lts) {
    if ($lt.LastWriteTime -gt $lastImport) {
        Write-Output "+Importing $lt"
        & $WitAdmin importlinktype /collection:$CollectionUrl /f:"$($lt.FullName)"
    } else {
        Write-Output "-Skipping $lt"
    }
}

Importing the link types tends to be unnecessary, but I always do it as I have been caught out a couple of times. It’s mostly like for like and has no effect. If you have custom relationships, like “Releases \ Released By” for a “Release” work item type linked to Backlog Items, you may need this.

$witds = Get-ChildItem "$ProcessTemplateRoot\WorkItem Tracking\TypeDefinitions" -Filter "*.xml"
foreach ($witd in $witds) {
    if ($witd.LastWriteTime -gt $lastImport) {
        Write-Output "+Importing $witd"
        & $WitAdmin importwitd /collection:$CollectionUrl /p:$TeamProjectName /f:"$($witd.FullName)"
    } else {
        Write-Output "-Skipping $witd"
    }
}

Now I want to update the physical work item types in your Team Project. This will overwrite the existing definitions, so make really sure that you have a backup. No really, go take a backup now by using “witadmin exportwitd” and running it for each of your existing types (a sketch of that export loop follows below). Yes… all of them… now you can run this part of the script.
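For completeness, here is a minimal sketch of that backup step, assuming the $WitAdmin, $CollectionUrl and $TeamProjectName variables defined at the top of the script; the backup folder name is just an illustration.

$backupFolder = ".\Backup\WorkItemTypes"
New-Item -ItemType Directory -Path $backupFolder -Force | Out-Null

# listwitd returns one work item type name per line for the Team Project
$types = & $WitAdmin listwitd /collection:$CollectionUrl /p:$TeamProjectName
foreach ($type in $types) {
    Write-Output "Backing up $type"
    & $WitAdmin exportwitd /collection:$CollectionUrl /p:$TeamProjectName /n:"$type" /f:"$backupFolder\$type.xml"
}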

After this you will have the correct work item types; however, we have not yet updated the categories or the process configuration, so things may be a little weird in TFS until we finish up. The work item type provides the list of fields contained within the work item, the form layout, and the workflow of the state changes. All of these will now have been upgraded to the new version. Features will be broken at this point until we get a little further.

$Cats = Get-Item "$ProcessTemplateRoot\WorkItem Tracking\Categories.xml"
if ($Cats.LastWriteTime -gt $lastImport) {
    Write-Output "+Importing $($Cats.name)"
    & $WitAdmin importcategories /collection:$CollectionUrl /p:$TeamProjectName /f:"$($Cats.FullName)"
} else {
    Write-Output "-Skipping $($Cats.name)"
}

The categories file determines which work items are viable and what they are used for. After TFS 2010 the TFS team moved to categorising work item types so that reporting and feature implementation became both easier and less error prone. This is a simple import of a single file. Not much will change in the UI.

$ProcessConfig = Get-Item "$ProcessTemplateRoot\WorkItem Tracking\Process\ProcessConfiguration.xml"
if ($ProcessConfig.LastWriteTime -gt $lastImport) {
    Write-Output "+Importing $($ProcessConfig.name)"
    & $WitAdmin importprocessconfig /collection:$CollectionUrl /p:$TeamProjectName /f:"$($ProcessConfig.FullName)"
} else {
    Write-Output "-Skipping $($ProcessConfig.name)"
}

If you have TFS 2013 there is only one process configuration file. This controls how all of the Agile Planning tools interact with your work items and many other settings, even the colour of the work items. This is the glue that holds everything together and makes it work. Once this is updated you are effectively upgraded. If you still have errors then you have done something wrong.

Note: You may need to do a full refresh in Web Access and in the client APIs (VS and Eclipse) to see these changes.

$AgileConfig = Get-Item "$ProcessTemplateRoot\WorkItem Tracking\Process\AgileConfiguration.xml"
if ($AgileConfig.LastWriteTime -gt $lastImport) {
    Write-Output "+Importing $($AgileConfig.name)"
    & $WitAdmin importagileprocessconfig /collection:$CollectionUrl /p:$TeamProjectName /f:"$($AgileConfig.FullName)"
} else {
    Write-Output "-Skipping $($AgileConfig.name)"
}

$CommonConfig = Get-Item "$ProcessTemplateRoot\WorkItem Tracking\Process\CommonConfiguration.xml"
if ($CommonConfig.LastWriteTime -gt $lastImport) {
    Write-Output "+Importing $($CommonConfig.name)"
    & $WitAdmin importcommonprocessconfig /collection:$CollectionUrl /p:$TeamProjectName /f:"$($CommonConfig.FullName)"
} else {
    Write-Output "-Skipping $($CommonConfig.name)"
}

If you are on TFS 2012 then you have the same thing, but instead of one consolidated file there are two files… for no reason whatsoever that I can determine… which is why it’s one file in 2013. It’s the same configuration, just without the colours.


$lastImport = [datetime]::Now

Out-File -filepath ".\UpdateTemplate.txt" -InputObject $lastImport

The final piece of the puzzle is to update the datetime file we tried to load at the start. This will allow us to update a single xml file that we imported above and the script, when re-run in part or in its entirety, will only update what it needs. It just makes things a little quicker.

And there you have it. Contrary to popular belief, you can upgrade or migrate from one process template to another in TFS. It may be because you want to use the new features, or it may be because you are radically changing your process; either way, it can be done.

How to: Customize the SharePoint HTML Editor Field Control using ECM

You can use the HTML Editor field control to insert HTML content into a publishing page. Page templates that include a Publishing HTML column type also include the HTML Editor field control.

This editor has special capabilities, such as customized styles, editing constraints, reusable content support, a spelling checker, and use of asset pickers to select documents and images to insert into a page’s content. This topic describes how to modify some features and attributes of the HTML Editor field control.

Image

If the content type of a page layout supports the Page Content column, you can add a Rich HTML field control to your page layout by using markup such as the following.

<PublishingWebControls:RichHtmlField id="ArticleAbstract" FieldName="ArticleAbstract" 
          AllowExternalUrls="false" 
          AllowFonts="true" 
          AllowReusableContent="false" 
          AllowHeadings="false"
          AllowHyperlinks="false"
          AllowImages="false"
          AllowLists="false"
          AllowTables="false"
          AllowTextMarkup="false" 
          AllowHTMLSourceEditing="false"
          DisableBasicFormattingButtons="false"
          runat="server"/>

In the example above, RichHtmlField is the name of the field control that provides the richer HTML editing experience. Attributes such as AllowFonts and AllowTables specify restrictions on the field.

The HTML field control allows font tags, but the control does not allow URLs that are external to the current site collection, reusable content stored in a centralized list, standard HTML heading tags, hyperlinks, images, numbered or bulleted lists, tables, or text markup.

Table 1. HTML editor field control properties

  • AllowExternalUrls: Only URLs internal to the current site collection may be referenced in a link or an image.
  • AllowFonts: Content may contain Font tags.
  • AllowHeadings: Content may contain HTML heading tags (H1, H2, and so on).
  • AllowHtmlSourceEditing: The HTML Editor can be switched into a mode that allows the HTML to be edited directly. When set to false, source editing mode is disabled.
  • AllowHyperlinks: Gets or sets the constraint that allows hyperlinks to be added to the HTML. If this flag is set to false, <A>, <AREA>, and <MAP> tags are removed from the HTML. Default is true. This property also determines whether the editing user interface (UI) enables these operations.
  • AllowImageFormatting: Gets or sets image formatting items. This restriction disables only menus and does not force the content to adhere to this restriction.
  • AllowImagePositioning: Gets or sets the position of the image. This restriction disables only menus and does not force the content to adhere to this restriction.
  • AllowImages: Content may contain images.
  • AllowImageStyles: Gets or sets whether the Image Styles menu is enabled. This restriction disables only the menu and does not force the content to adhere to this restriction.
  • AllowInsert: Gets or sets whether Insert options are shown. This restriction disables only the menu and does not force the content to adhere to this restriction.
  • AllowLists: Gets or sets the constraint that allows list tags to be added to the HTML. If this flag is set to false, <LI>, <OL>, <UL>, <DD>, <DL>, <DT>, and <MENU> tags are removed from the HTML. Default is true. This also determines whether the editing UI enables these operations.
  • AllowParagraphFormatting: Gets or sets whether paragraph formatting items are enabled. This restriction disables only menus and does not force the content to adhere to this restriction.
  • AllowReusableContent: Content may contain reusable content fragments stored in a centralized list.
  • AllowStandardFonts: Gets or sets whether standard fonts are enabled. This restriction disables only menus and does not force the content to adhere to this restriction.
  • AllowStyles: Gets or sets whether the Styles menu is enabled. This restriction disables only the menu and does not force the content to adhere to this restriction.
  • AllowTables: Gets or sets the constraint that allows tables (tags such as <table>, <tr>, and <td>) to be added when editing this field.
  • AllowTableStyles: Gets or sets whether the Table Styles menu is enabled. This restriction disables only the menu and does not force the content to adhere to this restriction.
  • AllowTextMarkup: Gets or sets the constraint that allows text markup (bold, italic, and underlined text) to be added when editing this field.
  • AllowThemeFonts: Gets or sets whether theme fonts are enabled. This restriction disables only menus and does not force the content to adhere to this restriction.
Predefined Table Formats

The HTML editor includes a set of predefined table formats, but it can be customized to fit the styling of an individual page. Each table format is a collection of cascading style sheet (CSS) classes for each table tag. You can define styling for the first and last row, odd and even rows, first and last column, and so on.

The HTML Editor dynamically applies certain styles from the referenced style sheets on the page and makes them available to users when formatting a table. For a custom style to be available when formatting a table, the relevant class names must follow the PREFIXTableXXX-NNN format, where:

  • PREFIX is ms-rte by default, but you can override the default by using the PrefixStyleSheet property of the RichHtmlField control.
  • XXX is the specific table section, such as EvenRow or OddRow.
  • NNN is the name to identify the table styling.

The following example presents a complete set of classes for a table styling format.

.ms-rteTable-1 {border-collapse:collapse;border-top:gray 1.5pt;
    border-left:gray 1.5pt;border-bottom:gray 1.5pt;
    border-right:gray 1.5pt;border-style:solid;}
.ms-rteTableHeaderRow-1 {color:Green;background:yellow;text-align:left}
.ms-rteTableHeaderFirstCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableHeaderLastCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableHeaderOddCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableHeaderEvenCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableOddRow-1 {color:black;background:#FFFFDD;}
.ms-rteTableEvenRow-1 {color:black;background:#FFB4B4;}
.ms-rteTableFirstCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableLastCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableOddCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableEvenCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableFooterRow-1 {color:blue;font-style:bold;
    font-weight:bold;background:white;border-top:solid gray 1.0pt;
    border-bottom:solid gray 1.0pt;border-right:solid silver 1.0pt; 
    border-style:solid;}
.ms-rteTableFooterFirstCol-1 {padding:0in 5.4pt 0in 5.4pt;
    border-top:solid gray 1.0pt;text-align:left}
.ms-rteTableFooterLastCol-1 {padding:0in 5.4pt 0in 5.4pt;
    border-top:solid gray 1.0pt;text-align:left}
.ms-rteTableFooterOddCol-1 {padding:0in 5.4pt 0in 5.4pt;
    text-align:left;border-top:solid gray 1.0pt;}
.ms-rteTableFooterEvenCol-1 {padding:0in 5.4pt 0in 5.4pt;
    text-align:left;border-top:solid gray 1.0pt;}

Microsoft SharePoint Server 2010 includes a set of default table styles. However, if the system detects new styles that did not originate in the default .css file, it removes the default set and presents only those newly defined styles in the HTML editor dialog box.

Spelling Checker

In SharePoint Server 2010, the HTML editor includes a spelling checker, which can be customized by developers by using the SpellCheckV4Action Web control and the SpellCheckToolbarButton Web control. The spelling checker action registers client files and data during a spelling check.

It also includes a method to get the console tab, and it checks user rights to verify that the current user has permission to perform a spelling check operation on the selected item. The spelling checker action calls the appropriate ECMAScript (JavaScript, JScript) code, and sends information to the client about available spellings and the default language to use for the request.

Anticipating More from Cortana – A Look At : The Future of The Windows Phone

Microsoft Research – April 17, 2014 

 

Most of us can only dream of having the perfect personal assistant, one who is always there when needed, anticipating our every request and unobtrusively organizing our lives. Cortana, the new digital personal assistant powered by Bing that comes with Windows Phone 8.1, brings users closer to that dream.

Image

 

For Larry Heck, a distinguished engineer in Microsoft Research, this first release offers a taste of what he has in mind. Over time, Heck wants Cortana to interact in an increasingly anticipatory, natural manner.

Cortana already offers some of this behavior. Rather than just performing voice-activated commands, Cortana continually learns about its user and becomes increasingly personalized, with the goal of proactively carrying out the right tasks at the right time. If its user asks about outside temperatures every afternoon before leaving the office, Cortana will learn to offer that information without being asked.

Furthermore, if given permission to access phone data, Cortana can read calendars, contacts, and email to improve its knowledge of context and connections. Heck, who plays classical trumpet in a local orchestra, might receive a calendar update about a change in rehearsal time. Cortana would let him know about the change and alert him if the new time conflicts with another appointment.

Research Depth and Breadth an Advantage

While many people would categorize such logical associations and humanlike behaviors under the term ”artificial intelligence” (AI), Heck points to the diversity of research areas that have contributed to Cortana’s underlying technologies. He views Cortana as a specific expression of Microsoft Research’s work on different areas of personal-assistant technology.

“The base technologies for a virtual personal assistant include speech recognition, semantic/natural language processing, dialogue modeling between human and machines, and spoken-language generation,” he says. “Each area has in it a number of research problems that Microsoft Research has addressed over the years. In fact, we’ve pioneered efforts in each of those areas.”

The Cortana user interface
The Cortana user interface.

Cortana’s design philosophy is therefore entrenched in state-of-the-art machine-learning and data-mining algorithms. Furthermore both developers and researchers are able to use Microsoft’s broad assets across commercial and enterprise products, including strong ties to Bing web search and Microsoft speech algorithms and data.

If Heck has set the bar high for Cortana’s future, it’s because of the deep, varied expertise within Microsoft Research.

“Microsoft Research has a long and broad history in AI,” he says. “There are leading scientists and pioneers in the AI field who work here. The underlying vision for this work and where it can go was derived from Eric Horvitz’s work on conversational interactions and understanding, which go as far back as the early ’90s. Speech and natural language processing are research areas of long standing, and so is machine learning. Plus, Microsoft Research is a leader in deep-learning and deep-neural-network research.”

From Foundational Technology to Overall Experience

In 2009, Heck started what was then called the conversational-understanding (CU) personal-assistant effort at Microsoft.

“I was in the Bing research-and-development team reporting to Satya Nadella,” Heck says, “working on a technology vision for virtual personal assistants. Steve Ballmer had recently tapped Zig Serafin to unify Microsoft’s various speech efforts across the company, and Zig reached out to me to join the team as chief scientist. In this role and working with Zig, we began to detail out a plan to build what is now called Cortana.”

Researchers who made contributions to Cortana
Researchers who worked on the Cortana product (from left): top row, Malcolm Slaney, Lisa Stifelman, and Larry Heck; bottom row, Gokhan Tur, Dilek Hakkani-Tür, and Andreas Stolcke.

Heck and Serafin established the vision, mission, and long-range plan for Microsoft’s digital-personal-assistant technology, based on scaling conversations to the breadth of the web, and they built a team with the expertise to create the initial prototypes for Cortana. As the effort got off the ground, Heck’s team hired and trained several Ph.D.-level engineers for the product team to develop the work.

“Because the combination of search and speech skills is unique,” Heck says, “we needed to make sure that Microsoft had the right people with the right combination of skills to deliver, and we hired the best to do it.”

After the team was in place, Heck and his colleagues joined Microsoft Research to continue to think long-term, working on next-generation personal-assistant technology.

Some of the key researchers in these early efforts included Microsoft Research senior researchers Dilek Hakkani-Tür and Gokhan Tur, and principal researcher Andreas Stolcke. Other early members of Heck’s team included principal research software developer Madhu Chinthakunta, and principal user-experience designer Lisa Stifelman.

“We started out working on the low-level, foundational technology,” Heck recalls. “Then, near the end of the project, our team was doing high-level, all-encompassing usability studies that provided guidance to the product group. It was kind of like climbing up to the crow’s nest of a ship to look over the entire experience.

“Research manager Geoff Zweig led usability studies in Microsoft Research. He brought people in, had them try out the prototype, and just let them go at it. Then we would learn from that. Microsoft Research was in a good position to study usability, because we understood the base technology as well as the long-term vision and how things should work.”

The Long-Term View

Heck has been integral to Cortana since its inception, but even before coming to Microsoft in 2009, he already had contributed to early research on CU personal assistants. While at SRI International in the 1990s, his tenure included some of the earliest work on deep-learning and deep-neural-network technology.

Heck was also part of an SRI team whose efforts laid the groundwork for the CALO AI project funded by the U.S. government’s Defense Advanced Research Projects Agency. The project aimed to build a new generation of cognitive assistants that could learn from experience and reason intelligently under ambiguous circumstances. Later roles at Nuance Communications and Yahoo! added expertise in research areas vital to contributing to making Cortana robust.

The Cortana notebook menu
The notebook menu for Cortana.

Not surprisingly, Heck’s perspectives extend to a distant horizon.

“I believe the personal-assistant technology that’s out there right now is comparable to the early days of search,” he says, “in the sense that we still need to grow the breadth of domains that digital personal assistants can cover. In the mid-’90s, before search, there was the Yahoo! directory. It organized information, it was popular, but as the web grew, the directory model became unwieldy. That’s where search came in, and now you can search for anything that’s on the web.”

He sees personal-assistant technology traveling along a similar trajectory. Current implementations target the most common functions, such as reminders and calendars, but as technology matures, the personal assistant has to extend to other domains so that users can get any information and conduct any transaction anytime and anywhere.

“Microsoft has intentionally built Cortana to scale out to all the different domains,” Heck says. “Having a long-term vision means we have a long-term architecture. The goal is to support all types of human interaction—whether it’s speech, text, or gestures—across domains of information and function and make it as easy as a natural conversation.”

How Microsoft’s Research Team is making Testing and the use of Pex & Moles Fun and Interesting

Try it out on the web

Go to www.pex4fun.com, and click Learn to start tutorials.

Main Publication to cite

Nikolai Tillmann, Jonathan De Halleux, Tao Xie, Sumit Gulwani, and Judith Bishop, Teaching and Learning Programming and Software Engineering via Interactive Gaming, in Proc. 35th International Conference on Software Engineering (ICSE 2013), Software Engineering Education (SEE), May 2013

 

Massive Open Online Courses (MOOCs) have recently gained high popularity among various universities and even in global societies. A critical factor for their success in teaching and learning effectiveness is assignment grading. Traditional ways of assignment grading are not scalable and do not give timely or interactive feedback to students.

 

To address these issues, we present an interactive-gaming-based teaching and learning platform called Pex4Fun. Pex4Fun is a browser-based teaching and learning environment targeting teachers and students for introductory to advanced programming or software engineering courses. At the core of the platform is an automated grading engine based on symbolic execution.

 

In Pex4Fun, teachers can create virtual classrooms, customize existing courses, and publish new learning material including learning games. Pex4Fun was released to the public in June 2010 and since then the number of attempts made by users to solve games has reached over one million.

 

Our work on Pex4Fun illustrates that a sophisticated software engineering technique – automated test generation – can be successfully used to underpin automatic grading in an online programming system that can scale to hundreds of thousands of users.

 

 

Code Hunt is an educational coding game.

Play, win levels, earn points!

Analyze with the capture code button

Code Hunt is a game! The player, the code hunter, has to discover missing code fragments. The player wins points for each level won with extra bonus for elegant solutions.

Code in Java or C#

Discover a code fragment

Play in Java or C#… or in both! Code Hunt allows you to play in those two curly-brace languages. Code Hunt provides a rich editing experience with syntax coloring, squiggles, search and keyboard shortcuts.

Learn algorithms

Discover a code fragment

As players progress through the sectors, they learn about arithmetic operators, conditional statements, loops, strings, search algorithms and more. Code Hunt is a great tool to build or sharpen your algorithm skills. Starting from simple problems, Code Hunt provides fun for even the most skilled coders.

Graded for correctness and quality

Modify the code to match the code fragment

At the core of the game experience is an automated grading engine based on dynamic symbolic execution. The grading engine automatically analyzes the user code and the secret code to generate the result table.

MOOCs with Office Mix

Add Code Hunt to your presentations

Code Hunt can be included in any PowerPoint presentation and published as an Office Mix Online Lesson. Use this PowerPoint template to create Code Hunt-themed presentations.

Web – no installs, it just works

It just works

Code Hunt runs in most modern browsers including Internet Explorer 10, 11 and recent versions of Chrome and Firefox. Yup, it works on iPad.

Extras – play your own levels

Play your own levels

Extra Zones with new sectors and levels can be created and reused. Read the designer usage manual to create your own zone.

Compete – so you think you can code

Compete

Code Hunt can be used to run small-scale or large-scale, private or public coding competitions. Each competition gets its own set of sectors and levels and its own leaderboard to determine the outcome. Please contact codehunt@microsoft.com for more information about running your own competition using Code Hunt.

Credits – the team

Capture the working code fragment

Code Hunt was developed by the Research in Software Engineering (RiSE) group and Connections group at Microsoft Research. Go to our Microsoft Research page to find a list of publications around Code Hunt.

FREE Web Part – Random “Quote of the day” SP 2010 Web Part

The “Random Quote of the Day” Web Part randomly selects a quote from the specified Sharepoint list or from the selected RSS feed.

A timer can then be set and the web part will read a new, random post and place it within the web part.

It is great for team/company motivation, or to display code snippets on a Team or KB site – your imagination is the limit.

A “Starter” Excel list containing quotes for a quick start is supplied with the download package.

Image

The Web Part can be used with Sharepoint 2010.

You can configure the following web part properties:

  • the SharePoint list
  • the SharePoint list column or the RSS feed URL for external tips
  • enable or suppress the daily calendar display
  • show an optional picture or calendar
  • show a tip every day or on every page refresh
  • configure CSS settings for individual formatting

 

Contact me at tomas.floyd@outlook.com for this cool free web part – Totally free of charge

SPB usage guide – 1 Download and Installation

Great Visual Studio Add-On for easy Branding

blksthl

SPB usage guide – 1 Download and Installation

This is the guide on how to install and use the SharePoint Branding Project.
Download: Visual Studio gallery

The guide is divided into three parts:

1 Download and Installation

2 Configuration and Modification

3 Deployment and verification (soon to be released)

Hi friends.

Allow me to welcome you on a journey to the wonderful world of SharePoint branding. I created this SharePoint Branding Project thinking that the overhead and the learning curve to just get started with branding was way too high. The amount of blogs to read and discard before you could actually build your very first, very basic custom branding solution was ridiculous! Pretty recently the guides on TechNet have been improved and most of them actually work, but it is still a long way to go if you want to start from scratch with little or no knowledge about how you go…

View original post 439 more words

How To : Use a Site mailbox to collaborate with your team

Share documents with others

Image

Every team has documents of some kind that need to be stored somewhere, and usually need to be shared with others. If you store your team’s documents on your SharePoint site, you can easily leverage the Site Mailbox app to share those documents with those who have site access.

 Important    When users view a site mailbox in Outlook, they will see a list of all the documents in that site’s document libraries. Site mailboxes present the same list of documents to all users, so some users may see documents they do not have access to open.

If you’re using Exchange, your documents will also appear in a folder in Outlook, making it even easier to forward documents to others.

Forwarding a document from the site mailbox

Organizations, and teams within organizations, often have several different email threads going in all directions at one time. It’s easy for lines to cross, information to get lost or overlooked, and for communication to break down. Site mailboxes enable you to store team or project-related email in one place, so that everyone on the team can see all communication.

On the Quick Launch, click Mailbox.

Mailbox on the Quick Launch

The site mailbox opens as a second, separate inbox and folder structure, next to your personal email account. Mail sent to and from the site mailbox account will be shared between all those who have Contributor permissions on the SharePoint site.

 Tip    Did you know you can also use a site mailbox to collaborate on documents?

Add a site mailbox as a mail recipient

By including the site mailbox on an important email thread, you ensure that a copy of the information in that thread is stored in a location that can be accessed by anyone on the team.

Simply add the site mailbox in the To, CC, or BCC line of an email message.

Email message with site mailbox included in CC field.

You could even consider adding the site mailbox email address to any team contact groups or distribution lists. That way, relevant email automatically gets stored in the team’s site mailbox.
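If your organisation uses Exchange Online or Exchange 2013, this can also be scripted from the Exchange Management Shell; a minimal sketch is below, where the group name and site mailbox address are placeholders.

# Add the site mailbox address to an existing distribution group (placeholder names)
Add-DistributionGroupMember -Identity "Project Falcon Team" -Member "SMO-ProjectFalcon@contoso.com"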

Send email from the site mailbox

When you write and send email from the site mailbox, it will look as though it came from you.

Because everyone with Contributor permissions on a site can access the site mailbox, several people can work together to draft an email message.

To compose a message, simply click New Mail.

New mail button for site mailboxes.

This will open a new message in your site mailbox.

New mail message in a site mailbox.

Sharepoint Development Tips

SharePoint 2013 and CRM 2011 integration. A customer portal approach

A Look At : Federated Authentication

More and more organisations are looking to collaborate with partners and customers in their ecosystem to help them achieve mutual goals. SharePoint is a great tool for enabling this collaboration but many organisations are reluctant to create and maintain identities for users from other organisations just to allow access to their own SharePoint farm. It’s hardly surprising; identity management is complex and expensive.

You have to pay for servers to host your identity provider (Microsoft Active Directory if you are using Windows); you have to keep it secure; you have to back it up and ensure that it is always available, and you have to pay for someone to maintain and administer it. Identity management becomes even more complicated when your organisation wants to give external users access to SharePoint; you have to ensure that they can only access SharePoint and can’t gain access to other systems; you have to buy additional client access licenses (CALs) for each external user because by adding them to your Active Directory you are making them an internal user.

 

Image

Microsoft, Google and others all offer identity providers (also known as IdPs or claims providers) that are free to use, and by federating with a third party IdP you shift the ownership and management of identities on to them. You may even find that the partner or customer you are looking to collaborate with may offer their own IdP (most likely Active Directory Federation Services if they themselves run Windows). Of course, you have to trust whichever IdP you choose; they will be responsible for authenticating the user instead of you so you must be confident that they will do a good job. You must also check what pieces of information about a user (also known as claims; for example, name, email address etc) IdPs offer to ensure they can tell you enough about a user for your purposes as they don’t all offer the same.

Having introduced support for federated authentication in SharePoint 2010, Microsoft paved the way for us to federate with third party IdPs within SharePoint itself. Unfortunately, configuring SharePoint to do this is fiddly and there is no user interface for doing so (a task made more onerous if you want to federate with multiple IdPs or tweak the configuration at a later date). Fortunately Microsoft has also introduced Azure Access Control Services (ACS) which makes the process of federating with one or more IdPs simple and easy to maintain. ACS is a cloud-based service that enables you to manage the IdPs used by your applications. The following diagram illustrates, at a high-level, the components of ACS.

An ACS namespace is a container for mappings between IdPs and one or more relying parties (the applications that want to use ACS), in our case SharePoint. Associated with each mapping is a rule group which defines how the relying party handles the individual claims associated with an identity. Using rule groups you can choose to hide or expose certain claims to specific relying parties within the namespace.

So by creating an ACS namespace you are in effect creating your own unique IdP that encapsulates the configuration for federating with one or more additional IdPs. A key point to remember is that your ACS namespace can be used by other applications (relying parties) that want to share the same identities, not just SharePoint. 

Once your ACS namespace has been created you need to configure SharePoint to trust it, which most of the time will be a one off task and from that point on you can manage and maintain the IdPs you support from within ACS. The following diagram illustrates, at a high-level, the typical architecture for integrating SharePoint and ACS.

 

In the scenario above the SharePoint web application is using two different claims providers (they are referred to as claims providers in SharePoint rather than IdPs). One is for internal users and trusts an internal AD domain and another is for external users and trusts an ACS namespace.
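As an illustration of that one-off trust configuration, the sketch below registers an ACS namespace as a trusted identity token issuer from the SharePoint Management Shell. The certificate path, realm, namespace sign-in URL and the choice of email address as the identifier claim are all placeholder assumptions; substitute the values from your own ACS namespace.

# Import the token-signing certificate downloaded from the ACS namespace (placeholder path)
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\Certs\acs-signing.cer")
New-SPTrustedRootAuthority -Name "ACS Token Signing" -Certificate $cert

# Pass the email address claim issued by ACS straight through to SharePoint
$emailClaim = New-SPClaimTypeMapping `
    -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" `
    -IncomingClaimTypeDisplayName "Email" -SameAsIncoming

# Register the ACS namespace as the claims provider called "Azure Provider" (placeholder values)
New-SPTrustedIdentityTokenIssuer -Name "Azure Provider" `
    -Description "Federated sign-in via Azure ACS" `
    -Realm "urn:sharepoint:extranet" `
    -ImportTrustCertificate $cert `
    -ClaimsMappings $emailClaim `
    -SignInUrl "https://yournamespace.accesscontrol.windows.net/v2/wsfederation" `
    -IdentifierClaim $emailClaim.InputClaimType

The resulting trusted identity provider can then be bound to the web application zone alongside Windows authentication, which gives the arrangement described above.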

When a user tries to access a site within the web application they will get the default SharePoint Sign In page asking them which provider they want to use.

This page can be customised and branded as required. If the user selects Windows Authentication they will get the standard authentication dialog. If they select Azure Provider (or whatever you happen to have called your claims provider) they will be redirected to your ACS Sign In page.

Again this page can be customised and branded as required. By clicking on one of the IdPs the user will be redirected to the appropriate Sign In page. Once they have been successfully authenticated by the IdP they will be redirected back to SharePoint.

 

Conclusion

By integrating SharePoint with ACS you can simplify the process of giving external users access to SharePoint. It could also save you money in licence fees and administration costs[i].

An important point to bear in mind when planning federated authentication for SharePoint is that in order for Search to be able to index content within SharePoint, you must enable Windows authentication on at least one zone within your web application. Also, if you use a reverse proxy to perform authentication, such as Microsoft Threat Management Gateway, before allowing traffic to hit your SharePoint servers, you will need to disable the authentication checks.
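As a rough sketch of that Windows-authenticated zone (all URLs and names here are placeholders, not prescriptions), the web application could be extended like this from the SharePoint Management Shell:

# Extend the web application to an extra zone that uses Windows authentication
# so the Search crawler has a zone it can authenticate against (placeholder URLs/names)
$wa = Get-SPWebApplication "https://extranet.contoso.com"
$winAuth = New-SPAuthenticationProvider -UseWindowsIntegratedAuthentication
New-SPWebApplicationExtension -Identity $wa -Name "Extranet Crawl" `
    -Zone "Intranet" -URL "http://crawl.contoso.com" -AuthenticationProvider $winAuth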

 

[i] The licensing model for external users differs between SharePoint 2010 and SharePoint 2013. With SharePoint 2010, if you expose your farm to external users, either anonymously or not, you have to purchase a separate licence for each server. The licence covers you for any number of external users and you do not need to buy a CAL for each user. With SharePoint 2013, Microsoft did away with the server licence for external users, and you still don’t need to buy CALs for the external users.

A Look At : The importance of people in a SharePoint project

Image

As with all other sizeable new business software implementations, a successful SharePoint deployment is one that is well thought-out and carefully managed every step of the way.

However in one key respect a SharePoint deployment is different from most others in the way it should be carried out. Whereas the majority of ERP solutions are very rigid in terms of their functionality and in the nature of the business problems they solve, SharePoint is far more of a jack-of-all-trades type of system. It’s a solution that typically spreads its tentacles across several areas within an organisation, and which has several people putting in their two cents worth about what functions SharePoint should be geared to perform.

So what is the best approach? And what makes for a good SharePoint project manager?

From my experience with SharePoint implementations, I would say first and foremost that a SharePoint deployment should be approached from a business perspective, rather than from a strictly technology standpoint. A SharePoint project delivered within the allotted time and budget can still fail if it’s executed without the broader business objectives in mind. If the project manager understands, and can effectively demonstrate, how SharePoint can solve the organisation’s real-world business problems and increase business value, SharePoint will be a welcome addition to the organisation’s software arsenal.

Also crucial is an understanding of people. An effective SharePoint project manager understands the concerns, limitations and capabilities of those who will be using the solution once it’s implemented. No matter how technically well-executed your SharePoint implementation is, it will amount to little if hardly anyone’s using the system. The objective here is to maximise user adoption and engagement, and this can be achieved by maximising user involvement in the deployment process.

 

Rather than only talk to managers about SharePoint and what they want from the system, also talk to those below them who will be using the product on a day-to-day basis. This means not only collaborating with, for example, the marketing director but also with the various marketing executives and co-ordinators.

 

It means not only talking with the human resources manager but also with the HR assistant, and so on. By engaging with a wide range of (what will be) SharePoint end-users and getting them involved in the system design process, the rate of sustained user adoption will be a lot higher than it would have been otherwise.

 

An example of user engagement in action concerns a SharePoint implementation I oversaw for an insurance company. The business wanted to improve the tracking of its documentation using a SharePoint-based records management system. Essentially the system was deployed to enhance the management and flow of health insurance and other key documentation within the organisation to ensure that the company meets its compliance obligations.

 

The project was a great success, largely because we ensured that there was a high level of end-user input right from the start. We got all the relevant managers and staff involved from the outset, we began training people on SharePoint early on and we made sure the change management part of the process was well-covered.

 

Also, and very importantly, the business value of the project was sharply defined and clearly explained from the get-go. As everyone set about making the transition to a SharePoint-driven system, they knew why it was important to the company and why it was going to be good for them too.

By contrast a follow-up SharePoint project for the company some months later was not as successful. Why? Because with that project, in which the company abandoned its existing intranet and developed a new one, the business benefits were poorly defined and were not effectively communicated to stakeholders. That particular implementation was driven by the company’s IT department which approached the project from a technical, rather than a business, perspective. User buy-in was not sought and was not achieved.

 

When the SharePoint solution went live hardly anyone used it because they didn’t see why they should. No-one had educated them on that. That’s the danger when you don’t engage all your prospective system end-users throughout every phase of a SharePoint implementation project.

As can be seen, while it is of course critical that the technical necessities of a SharePoint deployment be met, that’s only part of the picture. Without people using the system, or with people using the system to less than its maximum potential, the return on your SharePoint investment will never materialize.

Comprehensive engagement with all stakeholders, that’s where the other part of the picture comes in. That’s where a return on investment, an investment of time and effort, will most assuredly be achieved.

Application Lifecycle Management – Improving Quality in SharePoint Solutions

On my journey of deciding whether to use GIT or TFS to support the SharePoint ALM…..

yuriburger.net

Introduction

“Application Lifecycle Management (ALM) is a continuous process of managing the life of an application through governance, development and maintenance. ALM is the marriage of business management to software engineering made possible by tools that facilitate and integrate requirements management, architecture, coding, testing, tracking, and release management.”

In this and future blog posts we will look at how ALM and the tools that MS provides support us in ensuring high quality solutions. Specifically, we explore a few different types of testing and how they relate to our SharePoint solutions.

  • Manual Tests (this post)
  • Load Tests
  • Code Review/ Code Analysis
  • Unit Tests
  • Coded UI Tests

To get things straight, I like testing. I think it is by far the best (academic) method to prove you did things right. And the best part, even before the UATs start!

This post is not meant to be exhaustive nor used as the perfect…

View original post 1,066 more words

Integrate Uservoice with Visual Studio Online using ServiceHooks

AWESOME ALM Blog!!

The Road to ALM

At TechEd USA a very cool feature VSO integration was announced in the first keynote. It was short, but nevertheless very cool and promising.

In this post I will talk about the integration of Uservoice with Visual Studio Online.

Uservoice

Uservoice is a service that enables companies to manage their client feedback. Customers can add feature requests or vote on already existing features. Maybe you know it already because it is also used for features requests for Visual Studio and Team Foundation Server. See http://visualstudio.uservoice.com/

Uservoice is very cool because it enables you to close the loop with your customers and give them a lightweight ability to provide you with feedback

Servicehooks and Visual Studio Online

Brian Harry announced on stage that there is now an integration between Uservoice and Visual Studio Online. Basically this means that feature requests on Uservoice can now directly be pushed to your…

View original post 314 more words

Getting CSV File Data into SQL Azure

http://johndonnellyz.wordpress.com/2011/10/12/getting-csv-file-data-into-sql-azure/

I have been using a trial to see if SQL Azure might be a good place to store regular data extractions for some of my auto dealership clients. I have always wanted to develop using SQL Server because of the (it seems) universal connectivity ability. Nearly all the web development frameworks I use make connection to SQL Server.

So I decided to give it a try and after building a vehiclesalescurrent database as follows:

I then researched the proper way to load the table. BULK INSERT is not supported in SQL Azure; only the Bulk Copy Program (bcp utility) is available to import csv files. The bcp utility, being a command line program, is not very intuitive. It is rather powerful though, as it can do import (in), export (out) and create format files. Whatever possible switch you can think of, it has. I forged ahead and tried to do the import without a format file (which is apparently possible). The format file is an XML file that tells bcp how the csv file maps to the table in the database. I received error after error, mostly relating to cast issues and invalid end of record. I was under the impression that a csv file had a rather common end-of-record code known as CRLF, or carriage return/line feed. I opened my csv file in Notepad++ with the view codes option on to make sure there wasn’t anything unusual going on. There wasn’t.

Having no success the easy way, I decided to create a format file which would tell bcp what to look for definitively as an end of record. The bcp utility will accept either an XML or non-XML format file, and it will create either for you. I chose the XML format file because I just felt it might be less buggy. This spat out a file easily, but I still had to make a few modifications to the resulting XML file. In particular, bcp got the column separator wrong (“\t” for tab) and I changed it to a comma (“,”) as the file was csv. Also the last column of data, in my case column 21, needed the terminator “\r\n”, which is the offending return and newline (CRLF) codes! Make sure the slashes are the right way; for some reason (I think I saw it in a blog post!) I put forward slashes and I needed the help desk to straighten me out. Anyway here is the bcp command to create an XML format file:
bcp MyDB.dbo.VehicleSalesCurrent format nul -c -x -f C:\JohnDonnelly\VehicleSalesCurrent.xml -U johndonnelly@xxxxxxxx -S tcp:xxxxxxx.database.windows.net -P mypassword

And here is the final correct Format file that you would  include in the same directory as the vehiclesalescurrent.txt data file when running the bcp utility to upload the data to SQL Azure:

<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <RECORD>
    <FIELD ID="1" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="12"/>
    <FIELD ID="2" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="24" COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
    <FIELD ID="3" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="24" COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
    <FIELD ID="4" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="24" COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
    <FIELD ID="5" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="16" COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
    <FIELD ID="6" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="11"/>
    <FIELD ID="7" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="16" COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
    <FIELD ID="8" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="100" COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
    <FIELD ID="9" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="100" COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
    <FIELD ID="10" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="100" COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
    <FIELD ID="11" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="12"/>
    <FIELD ID="12" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="11"/>
    <FIELD ID="13" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="12"/>
    <FIELD ID="14" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="30" COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
    <FIELD ID="15" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="100" COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
    <FIELD ID="16" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="100" COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
    <FIELD ID="17" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="40" COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
    <FIELD ID="18" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="12"/>
    <FIELD ID="19" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="34" COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
    <FIELD ID="20" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="30"/>
    <FIELD ID="21" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="12"/>
  </RECORD>
  <ROW>
    <COLUMN SOURCE="1" NAME="ID" xsi:type="SQLINT"/>
    <COLUMN SOURCE="2" NAME="StockNo" xsi:type="SQLNVARCHAR"/>
    <COLUMN SOURCE="3" NAME="DealType" xsi:type="SQLNVARCHAR"/>
    <COLUMN SOURCE="4" NAME="SaleType" xsi:type="SQLNVARCHAR"/>
    <COLUMN SOURCE="5" NAME="Branch" xsi:type="SQLNVARCHAR"/>
    <COLUMN SOURCE="6" NAME="SalesDate" xsi:type="SQLDATE"/>
    <COLUMN SOURCE="7" NAME="CustNo" xsi:type="SQLNVARCHAR"/>
    <COLUMN SOURCE="8" NAME="Name" xsi:type="SQLNVARCHAR"/>
    <COLUMN SOURCE="9" NAME="Address" xsi:type="SQLNVARCHAR"/>
    <COLUMN SOURCE="10" NAME="City" xsi:type="SQLNVARCHAR"/>
    <COLUMN SOURCE="11" NAME="Zip" xsi:type="SQLINT"/>
    <COLUMN SOURCE="12" NAME="BirthDate" xsi:type="SQLDATE"/>
    <COLUMN SOURCE="13" NAME="Year" xsi:type="SQLINT"/>
    <COLUMN SOURCE="14" NAME="Make" xsi:type="SQLNVARCHAR"/>
    <COLUMN SOURCE="15" NAME="Model" xsi:type="SQLNVARCHAR"/>
    <COLUMN SOURCE="16" NAME="Body" xsi:type="SQLNVARCHAR"/>
    <COLUMN SOURCE="17" NAME="Color" xsi:type="SQLNVARCHAR"/>
    <COLUMN SOURCE="18" NAME="Mileage" xsi:type="SQLINT"/>
    <COLUMN SOURCE="19" NAME="VIN" xsi:type="SQLNVARCHAR"/>
    <COLUMN SOURCE="20" NAME="Down" xsi:type="SQLMONEY"/>
    <COLUMN SOURCE="21" NAME="Term" xsi:type="SQLINT"/>
  </ROW>
</BCPFORMAT>

As you can see there is a section that defines what the bcp utility should expect regarding each of the fields and a section that defines what column name goes where in the receiving table.

As I mentioned above, the field terminators are important (of course) and I found that bcp had them incorrect (“\t”) for a CSV file. It also had the field terminator for the last field the same as all of the other fields (which was “\t”). This needs to be set to the CR LF codes. Also, as I mentioned above, the terminator on my last column (21) needed to have backslashes, and I think my Google search yielded bad advice as I had forward slashes, so:

<FIELD ID="21" xsi:type="CharTerm" TERMINATOR="/r/n" MAX_LENGTH="12"/>

Obviously it should be:

<FIELD ID="21" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="12"/>

With the good format file in place, I ran the following bcp utility statement to handle the import:
# This one works, starts importing at Row 2, make sure dates are YYYY-MM-DD
bcp MyDB.dbo.VehicleSalesCurrent in C:\JohnDonnelly\VehicleSalesCurrent.csv -F 2 -f C:\JohnDonnelly\VehicleSalesCurrent.xml -U johndonnelly@xxxxxxx -S tcp:xxxxxxxx.database.windows.net -P mypassword -e C:\JohnDonnelly\err.txt

The -e switch throws out a nice err.txt file that gives you much more information about errors than the console does. For complete switch explanation follow the bcp Utility link above.

With the above Format File things got a little better however I had to open a case because I kept getting an “Invalid character value for cast specification” error which cited a SQLDate column type.

I learned from the help desk that the SQLDATE columns in my csv file needed to look like yyyy-mm-dd, so I had to apply a custom format each time I opened the csv in Excel. You have to reformat the date columns as yyyy-mm-dd and then save the file. That is the quick way, anyway.
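If you would rather not rely on Excel, a small PowerShell sketch like the one below can rewrite the date columns before running bcp. The column names match the format file above, but treat the whole thing as an illustration; note that Export-Csv wraps every field in quotes, which a plain comma-terminated format file will not strip, so you may need to remove the quotes afterwards or keep using Excel for the final save.

# Rewrite SalesDate and BirthDate as yyyy-MM-dd before the bcp import
$csvPath = ".\VehicleSalesCurrent.csv"
$rows = Import-Csv $csvPath
foreach ($row in $rows) {
    foreach ($col in @("SalesDate", "BirthDate")) {
        if ($row.$col) {
            # Parse whatever format Excel produced (uses the local culture) and normalise it
            $row.$col = ([datetime]$row.$col).ToString("yyyy-MM-dd")
        }
    }
}
$rows | Export-Csv $csvPath -NoTypeInformation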

The support person said there was a bug in bcp relating to dates, which leads to the last item: out of 9,665 rows there were 34 rows that wouldn’t move, due to this error:

“Col 12 – Invalid character value for cast specification”

Column 12 was a birth date of SQLDATE type.

Furthermore, 32 of the 34 rows were null in Column 12, so I agree with support that the bcp utility or SQL Azure is buggy with regard to SQLDATE.

I hope this helps someone battling the bcp utility with SQL Azure!

Procedural Fairness – Know your Constitutional & Labour Law Rights – Equal treatment & following of a prescribed process by the employer

Image

     

 

Image

Section 1, “Republic of South Africa“, defines South Africa as “one, sovereign, democratic state” and lists the country’s founding values as:

  • Human dignity, the achievement of equality and the advancement of human rights and freedoms.
  • Non-racialism and non-sexism.
  • Supremacy of the constitution and the rule of law.
  • Universal adult suffrage, a national common voters roll, regular elections and a multi-party system of democratic government, to ensure accountability, responsiveness and openness.

Even if there are valid substantive reasons for a dismissal, an employer must follow a fair procedure before dismissing the employee. Procedural fairness may in fact be regarded as the “rights” of the worker in respect of the actual procedure to be followed during the process of discipline or dismissal.

                

Procedural Fairness: Misconduct

                       

The following requirements for procedural fairness should be met:
  • An employer must inform the employee of allegations in a manner the employee can understand
  • The employee should be allowed reasonable time to prepare a response to the allegations
  • The employee must be given an opportunity to state his/ her case during the proceedings
  • An employee has the right to be assisted by a shop steward or other employee during the proceedings
  • The employer must inform the employee of a decision regarding a disciplinary sanction, preferably in writing- in a manner that the employee can understand
  • The employer must give clear reasons for dismissing the employee
  • The employer must keep records of disciplinary actions taken against each employee, stating the nature of misconduct, disciplinary action taken and the reasons for the disciplinary action

                 

Procedural and substantive fairness.

         

The areas of procedural and substantive fairness most often exist in the minds of employers, H.R. personnel and even disciplinary or appeal hearing Chairpersons as no more than a swirling, thick gray fog. This is not a criticism – it is a fact.

       

Whether or not a dismissal has been effected in accordance with a fair procedure and for a fair reason is very often not established with any degree of certainty beyond “I think so” or “it looks o.k. to me.” What must be realized is that the LRA recognizes only three circumstances under which a dismissal may be considered fair – misconduct, incapacity (including poor performance) and operational requirements (retrenchments).

             

This, however, does not mean that a dismissal effected for misconduct, incapacity or operational requirements will be considered automatically fair by the CCMA should the fairness of the dismissal be disputed. In effecting a dismissal under any of the above headings, it must be further realized that, before imposing a sanction of dismissal, the Chairperson of the disciplinary hearing must establish (satisfy himself in his own mind) that a fair procedure has been followed.

        

When the Chairperson has established that a fair procedure has been followed, he must then examine the evidence presented and must decide, on a balance of probability, whether the accused is innocent or guilty.

        

If the accused is guilty, the Chairperson must then decide what sanction to impose. If the Chairperson decides to impose a sanction of dismissal, he must decide, after considering all the relevant factors, whether the dismissal is being imposed for a fair reason. The foregoing must be seen as three distinct procedures that the Chairperson must follow, and he/she must not even consider the next step until the preceding step has been established or finalized.

           

The three distinct steps are:

           

1. Establish, by an examination of the entire process, from the original complaint to the adjournment of the Disciplinary Hearing, that a fair procedure has been followed by the employer and that the accused has not been compromised or prejudiced by any unfair actions on the part of the employer. Remember that at the CCMA, the employer must prove that a fair procedure was followed. The Chairperson must not even think about “guilty or not guilty” before it has been established that a fair procedure has been followed.

                  

2. If a fair procedure has been followed, then the Chairperson can proceed to an examination of the minutes and the evidence presented to establish guilt or innocence.

             

3. If guilty is the verdict, the Chairperson must now decide on a sanction. Here, the Chairperson must consider several facts in addition to the evidence. He/she must consider the accused’s length of service, his previous disciplinary record, his personal circumstances, whether the sanction of dismissal would be consistent with previous similar cases, the circumstances surrounding the breach of the rule, and so on.

                 

The Chairperson must consider all the mitigating circumstances (those circumstances in favor of the employee, which may include the age of the employee, length of service, his state of health, how close he is to retirement, his position in the company, his financial position, whether he shows remorse and if so to what degree, his level of education, whether he is prepared to make restitution if this is possible, and whether he readily pleaded guilty and confessed).

                  

The Chairperson must consider all the aggravating circumstances (those circumstances that count against the employee, such as the seriousness of the offense seen in the light of the employee’s length of service, his position in the company, the degree to which an element of trust existed in the employment relationship, etc.) The Chairperson must also consider all the extenuating circumstances (circumstances such as self-defense, provocation, coercion – was he “egged on” by others? – lack of intent, necessity, etc.)

         

The Chairperson must allow the employee to plead in mitigation and must consider whether a lesser penalty would suffice. Only after careful consideration of all this, can the Chairperson arrive at a decision of dismissal and be perfectly satisfied in his own mind that the dismissal is being effected for a fair reason. For all the above reasons, I submit that any Chairperson who adjourns a disciplinary hearing for anything less than 3 days has not done his job and has no right to act as Chairperson. A Chairperson who returns a verdict and sanction after adjourning for 10 minutes or 1 hour quite obviously has pre-judged the issue, has been instructed by superiors to dismiss the hapless employee, and acts accordingly.

        

Such behaviour by a Chairperson is an absolute disgrace, is totally unacceptable, and the Chairperson should be the one to be dismissed. The following is a brief summary of procedural and substantive fairness in cases of misconduct, incapacity and operational requirements dismissals. This is not intended to be exhaustive or complete – employers must still follow what is written in other modules.

       

The following procedural fairness checklist will apply to all disciplinary hearings, whether for misconduct, incapacity or operational requirements dismissals. Remember also that procedural fairness applies even if the sanction is only a written warning.

      

Procedural Quicklist (Have we followed a fair procedure?)

  • Original complaint received in writing.
  • Complaint fully investigated and all aspects of investigation recorded in writing.
  • Written statements taken down from complainant and all witnesses.
  • Accused advised in writing of date, time and venue of disciplinary hearing.
  • Accused to have reasonable time in which to prepare his defense and appoint his representative.
  • Accused advised in writing of the full nature and details of the charge/s against him.
  • Accused advised in writing of his/her rights.
  • Complainant provides copies of all written statements to accused.
  • Chairperson appointed from outside the organization.
  • Disciplinary hearing is held.
  • Accused given the opportunity to plead to the charges.
  • Complainant puts their case first, leading evidence and calling witnesses to testify.
  • Accused is given opportunity to cross question witnesses.
  • Accused leads evidence in his defense.
  • Accused calls his witnesses to testify and complainant is given opportunity to cross question accused’s witnesses.
  • Chairperson adjourns hearing for at least 3 days to have minutes typed up or transcribed.
  • Accused is immediately handed a copy of the minutes.
  • Chairperson considers whether a fair procedure has been followed.
  • Chairperson decides on guilt or innocence based on the evidence presented by both sides and on the balance of probability – which story is more likely to be true? That of the complainant or that of the accused? That is the basis on which guilt or innocence is decided. In weighing up the balance of probability, the previous disciplinary record of the accused, his personal circumstances, his previous work record, mitigating circumstances etc are all EXCLUDED from the picture – these aspects are considered only when deciding on a suitable and fair sanction. The decision on guilt or innocence is decided only on the basis of evidence presented and in terms of the balance of probability.
  • Chairperson reconvenes the hearing.
  • Chairperson advises accused of the verdict. If not guilty, this is confirmed in writing to the accused and the matter is closed. If guilty, then:
  • Chairperson asks accused if he/she has anything to add in mitigation of sentence.
  • Chairperson adjourns meeting again to consider any mitigating facts now added.
  • Chairperson considers and decides on a fair sanction.
  • Chairperson reconvenes hearing and delivers the sanction.
  • Chairperson advises the accused of his/her rights to appeal and to refer the matter to the CCMA.

All communications to the accused, such as the verdict, the sanction, advice of his/her rights etc, must be reduced to writing.

Substantive Fairness – Misconduct (Is my reason good enough to justify dismissal?)

  • Was a company rule, or policy, or behavioral standard broken?
  • If so, was the employee aware of the transgressed rule, standard or policy, or could the employee reasonably be expected to have been aware of it? (You cannot discipline an employee for breaking a rule if he was never aware of the rule in the first place.)
  • Has this rule been consistently applied by the employer?
  • Is dismissal an appropriate sanction for this transgression?
  • In other cases of transgression of the same rule, what sanction was applied?
  • Take the accused’s personal circumstances into consideration.
  • Consider also the circumstances surrounding the breach of the rule.
  • Consider the nature of the job.
  • Would the sanction now to be imposed be consistent with previous similar cases?

Substantive Fairness – Incapacity – Poor Work Performance.

Examples: incompetence – lack of skill or knowledge; insufficiently qualified or experienced. Incompatibility – bad attitude; carelessness; doesn’t “fit in.” Inaccuracies – incomplete work; poor social skills; failure to comply with or failure to reach reasonable and attainable standards of quality and output.
          
Note: deliberate poor performance as a means of retaliation against the employer, for whatever reason, is misconduct and not poor performance.

  • Was there a material breach of specified work standards?
  • If so, was the accused aware of the required standard or could he reasonably be expected to have been aware of the standard?
  • Was the breached standard a reasonable and attainable standard?
  • Was the required standard legitimate and fair?
  • Has the standard always been consistently applied?
  • What is the degree of sub-standard performance? Minor? Major? Serious? Unacceptable?
  • What damage and what degree of damage (loss) has there been to the employer?
  • What opportunity has been given to the employee to improve?
  • What are the prospects of acceptable improvement in the future?
  • Consider training, demotion or transfer before dismissing.

Incapacity – Poor Work Performance – additional notes on Procedural fairness.
      
If the employee is a probationer, ensure that sufficient instruction and counseling is given. If there is still no improvement then the probationer may be dismissed without a formal hearing. 
If the employee is not a probationer, ensure that appropriate instruction, guidance, training and counselling is given. This will include written warnings.
       
Make sure that a proper investigation is carried out to establish the reason for the poor work performance, and establish what steps the employer must take to enable the employee to reach the required standard. 
Formal disciplinary processes must be followed prior to dismissal.

Substantive Fairness – Incapacity – Ill Health.

  • Establish whether the employee’s state of health allows him to perform the tasks that he was employed to carry out.

  • Establish the extent to which he is able to carry out those tasks.

  • Establish the extent to which these tasks may be modified or adapted to enable the employee to carry out the tasks and still achieve company standards of quality and quantity.

  • Determine the availability of any suitable alternative work.

If nothing can be done in any of the above areas, dismissal on grounds of incapacity – ill health – would be justified.

 

Incapacity – Ill Health – additional notes on Procedural fairness.

  • With the employee’s consent, conduct a full investigation into the nature of and extent of the illness, injury or cause of incapacity.

  • Establish the prognosis – this would entail discussions with the employee’s medical advisor.

  • Investigate alternatives to dismissal – perhaps extended unpaid leave?

  • Consider the nature of the job.

  • Can the job be done by a temp until the employee’s health improves?

  • Remember the employee has the right to be heard and to be represented.

 

Operational Requirements – retrenchments.

     

All the steps of section 189 of the LRA must be followed. Quite obviously, the reason for the retrenchments must be based on the restructuring or resizing of a business, the closing of a business, cost reduction or other economic reasons (to increase profit, reduce operating expenses, and so on), or technological reasons such as new machinery having replaced three employees, and so on.

       

Re-designing of products, reduction of product range and redundancy will all be reasons for retrenchment. The employer, however, must at all times be ready to produce evidence to justify the reasons on which the dismissals are based.

     

The most important aspects of procedural fairness would be steps taken to avoid the retrenchments, steps taken to minimize or change the timing of the retrenchments, the establishing of valid reasons, giving prior and sufficient notice to affected employees, proper consultation and genuine consensus-seeking consultations with the affected employees and their representatives, discussion and agreement on selection criteria, offers of re-employment and discussions with individuals.
 

        
Substantive Fairness
Jan du Toit

In deciding whether to dismiss an employee, the employer must take Schedule 8 of the Labour Relations Act into consideration. Schedule 8 is a code of good practice on dismissing employees and serves as a guideline on when and how an employer may dismiss an employee. An oversimplified summary of Schedule 8 would be that the employer may dismiss an employee for a fair reason after following a fair procedure. Failure to do so may render the dismissal procedurally or substantively (or both) unfair and could result in compensation of up to 12 months of the employee’s salary, or reinstatement.

                 

Procedural fairness refers to the procedures followed in notifying the employee of the disciplinary hearing and the procedures followed at the hearing itself. Most employers do not have a problem in this regard but normally fail dismally when it comes to substantive fairness. The reason for this is that substantive fairness can be split into two elements, namely:

  • establishing guilt; and
  • deciding on an appropriate sanction.

             

This seems straightforward, but many employers justify a dismissal based solely on the fact that the employee was found guilty of an act of misconduct. This is clearly contrary to the guidelines of Schedule 8, “Dismissals for misconduct”:

               

“Generally, it is not appropriate to dismiss an employee for a first offence, except if the misconduct is serious and of such gravity that it makes a continued employment relationship intolerable. Examples of serious misconduct, subject to the rule that each case should be judged on its merits, are gross dishonesty or wilful damage to the property of the employer, wilful endangering of the safety of others, physical assault on the employer, a fellow employee, client or customer and gross insubordination. Whatever the merits of the case for dismissal might be, a dismissal will not be fair if it does not meet the requirements of section 188.

               

When deciding whether or not to impose the penalty of dismissal, the employer should in addition to the gravity of the misconduct consider factors such as the employee’s circumstances (including length of service, previous disciplinary record and personal circumstances), the nature of the job and the circumstances of the infringement itself.”

        

Schedule 8 further prescribes that:

Any person who is determining whether a dismissal for misconduct is unfair should consider-

(a) whether or not the employee contravened a rule or standard regulating conduct in, or of relevance to, the workplace; and

(b) if a rule or standard was contravened, whether or not-

(i) the rule was a valid or reasonable rule or standard;

(ii) the employee was aware, or could reasonably be expected to have been aware, of the rule or standard;

(iii) the rule or standard has been consistently applied by the employer; and

(iv) dismissal was an appropriate sanction for the contravention of the rule or standard.

              
Looking at the above, it is clear that substantive fairness means that the employer succeeded in proving that the employee is guilty of an offence, that the seriousness of the offence outweighed the employee’s circumstances in mitigation, and that terminating the employment relationship was therefore fair.

           

The disciplinary hearing does not end with a verdict of guilty; in addition to proving guilt, the employer will have to raise circumstances in aggravation for the chairman to consider a more severe sanction. The employee, on the other hand, must be given the opportunity to raise circumstances in mitigation for a less severe sanction.
                       
Many employers make the mistake of relying on the fact that arbitration after a dismissal will be de novo and focus their case at the CCMA solely on proving that the employee is guilty of misconduct, foolishly believing that the commissioner will agree that the dismissal was fair under the circumstances. The Labour Appeal Court in County Fair Foods (Pty) Ltd v CCMA & others (1999) 20 ILJ 1701 (LAC) at 1707 (paragraph 11) [also reported at [1999] JOL 5274 (LAC)], said that it was “not for the arbitrator to determine de novo what would be a fair sanction in the circumstances, but rather to determine whether the sanction imposed by the appellant (employer) was fair”. The court further explained:

             

“It remains part of our law that it lies in the first place within the province of the employer to set the standard of conduct to be observed by its employees and determine the sanction with which non-compliance with the standard will be visited, interference therewith is only justified in the case of unreasonableness and unfairness. However, the decision of the arbitrator as to the fairness or unfairness of the employer’s decision is not reached with reference to the evidential material that was before the employer at the time of its decision but on the basis of all the evidential material before the arbitrator. To that extent the proceedings are a hearing de novo.”

             

In NEHAWU obo Motsoagae / SARS (2010) 19 CCMA 7.1.6 the commissioner indicated that “the notion that it is not necessary for an employer to call as a witness the person who has taken the ultimate decision to dismiss or to lead evidence about the dismissal procedure, can therefore not be endorsed. The arbitrating commissioner clearly does not conduct a de novo hearing in the true sense of the word and he is enjoined to judge “whether what the employer did was fair.” The employer carries the onus of proving the fairness of a dismissal and it follows that it is for the employer to place evidence before the commissioner that will enable the latter to properly judge the fairness of his actions.”

     

In this particular case, Mr. Motsoagae, a Revenue Admin Officer for SARS, destroyed confiscated cigarettes that were held in the warehouses of the State without the necessary permission. Some of these cigarettes found their way onto the black market after he allegedly destroyed them, and he was subsequently charged with theft. Interestingly, the commissioner agreed with the employer that the applicant in this matter, Mr. Motsoagae, was indeed guilty of the offence but still found that the dismissal was substantively unfair. The commissioner justified his decision by referring to an earlier important Labour Court finding, re-emphasizing the onus on the employer to prove that the trust relationship has been destroyed and that circumstances in aggravation, combined with the seriousness of the offence, outweighed the circumstances the employee may have raised in mitigation, thus justifying a sanction of dismissal.

             

“The respondent in casu did not bother to lead any evidence to show that dismissal had been the appropriate penalty in the circumstances and it is not known which “aggravating” or “mitigating” factors (if any) might have been taken into consideration. It is also not known whether any evidence had been led to the effect that the employment relationship between the parties had been irreparably damaged – the Labour Court in SARS v CCMA & others (2010) 31 ILJ 1238 (LC) at 1248 (paragraph 56) [also reported at [2010] 3 BLLR 332 (LC)], held that a case for irretrievable breakdown should, in fact, have been made out at the disciplinary hearing.

        

The respondent’s failure to lead evidence about the reason why the sanction of dismissal was imposed leaves me with no option but to find that the respondent has not discharged the onus of proving that dismissal had been the appropriate penalty and that the applicant’s dismissal had consequently been substantively unfair.

The respondent at this arbitration, in any event, also led no evidence to the effect that the employment relationship had been damaged beyond repair. The Supreme Court of Appeal in Edcon Ltd v Pillemer NO & others (2009) 30 ILJ 2642 (SCA) at 2652 (paragraph 23) [also reported at [2010] 1 BLLR 1 (SCA) – Ed], held as follows:
“In my view, Pillemer’s finding that Edcon had led no evidence showing the alleged breakdown in the trust relationship is beyond reproach. In the absence of evidence showing the damage Edcon asserts in its trust relationship with Reddy, the decision to dismiss her was correctly found to be unfair.”

Employers are advised to consider circumstances in aggravation and mitigation before deciding to recommend dismissal as an appropriate sanction. In addition to this, the employer will have to prove that the trust relationship that existed between the parties deteriorated beyond repair or that the employee made continued employment intolerable. Employers should also remember that there are three areas of fairness to prove to the arbitrating commissioner: procedural fairness, substantive fairness – guilt, and substantive fairness – appropriateness of sanction.

 

Employers are advised to make use of the services of experts in this area in order to ensure both substantive and procedural fairness.

 

Contact Jan Du Toit – jand@labourguide.co.za

How to add a Link to a Document external to SharePoint


You can add links to external file shares and/or file server documents to your document library very easily. Why would you want to do this? Primarily so that the metadata for all your documents is searchable in the same place. First, a Farm Administrator will need to modify a core file on the front-end server. Then you must create a custom Content Type. If you use the built-in content type you will not be able to link to a Folder directly.
Edit the NewLink.aspx page to allow the Document Library to accept a File:// entry.

  1. Go to the Front End Web Server \12\template\layouts directory.
  2. Open the file NewLink.aspx using NotePad. If I have to tell you to take a backup of this file first then you have no business editing this file (really).
  3. Go to the end of the script section near the top of the page and add:

    function HasValidURLPrefix_UNC(url)
    {
        var urlLower = url.toLowerCase();
        if (-1 == urlLower.search("^http://") &&
            -1 == urlLower.search("^https://") && -1 == urlLower.search("^file://"))
            return false;
        return true;
    }

  • Use Edit Find to search for HasValidURLPrefix and replace it with HasValidURLPrefix_UNC (you should find it two times).
  • File – Save.
  • Open command prompt and enter IISreset /noforce.

Important: To link to Folders correctly you must create your own content type exactly as below and not use the built in URL or Link to Document at all.

Create custom Content Type

  1. Go to your Site Collection logged in as a Site Collection Administrator.
  2. Site actions – Site Settings – Modify All Site Settings.
  3. Content Types
  4. Create
  5. Name = URL or UNC
  6. Description = Use this content type to add a Link column that allows you to put a hyperlink or UNC path to external files, pages or folders. Format is File://\\ServerName\Folder , or http://
  7. Parent Content Type,
    1. Select parent content type from = Document Content Types
    2. Parent Content Type = Link to a Document
  8. Put this site content type into = Existing Group:  Company Custom
  9. OK
  10. At the Site Content Type: URL or UNC page click on the URL hyperlink column and change it to Optional so that multiple documents being uploaded will not remain checked out.
  11. OK
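If you would rather script this content type instead of clicking through the UI, the following is a minimal sketch using the SharePoint server object model. The site URL is a placeholder, the internal name of the hyperlink column is assumed to be "URL", and error handling and feature packaging are left out, so treat it as a starting point rather than a drop-in tool.

using System;
using Microsoft.SharePoint;

class CreateUrlOrUncContentType
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://intranet/sites/docs")) // placeholder URL
        using (SPWeb web = site.OpenWeb())
        {
            // Step 7 above: parent is the built-in "Link to a Document" content type.
            SPContentType parent = web.AvailableContentTypes["Link to a Document"];

            SPContentType urlOrUnc = new SPContentType(parent, web.ContentTypes, "URL or UNC");
            urlOrUnc.Group = "Company Custom";
            urlOrUnc.Description = "Hyperlink or UNC path to external files, pages or folders.";

            SPContentType added = web.ContentTypes.Add(urlOrUnc);

            // Step 10 above: make the hyperlink column optional so documents uploaded
            // in bulk do not remain checked out. The internal field name "URL" is an assumption.
            SPFieldLink urlLink = added.FieldLinks["URL"];
            if (urlLink != null)
            {
                urlLink.Required = false;
            }
            added.Update();
        }
    }
}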

Add Custom Content Type to Document Library

  1. Go to a Document Library
  2. Settings – Library Settings
  3. Advanced Settings
  4. Allow Management of Content Types = Yes
  5. OK
  6. Content Types – Add from existing site content types
  7. Select site content types from: Company Custom
  8. URL or UNC – Add – OK
  9. Click on URL or UNC hyperlink
  10. Click on Add from existing site
  11. Add all your Available Columns – OK
  12. Column Order – change the order to be consistent with the Document content type orders.
  13. Click on your Document Library breadcrumb to test.
  14. View – Modify your view to add the new URL or UNC column to your view next to your Name column.

Create Link to Document

  1. Go to the Document Library
  2. New – URL or UNC
  3. Document Name: This must equal the exact file or folder name less the extension.
    1. Example: My Resume 
    2. Example: Folder2
    3. Example: Doc1
  4. Document URL: This must be the URL or UNC path to the folder or file.
    1. Example: http://LindaChapman.BlogSpot.com/Folder1/Folder2/My Resume.doc
    2. Example: http://LindaChapman.BlogSpot.com/Folder1/Folder2
    3. Example: File://\\ServerName\FolderName\FolderName2\Doc1.doc

You might see other blogs that say you can’t connect to a folder and must create a shortcut first. They are wrong. You can by the method above.

The biggest mistakes I see are:

  1. People click on the NAME field instead of the URL field. They are not the same. You MUST click on the URL field to access the Folder properly.
  2. People use the built in Link to Document content type thinking it is the same or will save them a step. It is not the same.
  3. People type the document extension in the Name field. You cannot type the extension in the Name field; it will see it as a UNC path and ignore the .aspx extension.
  4. People enter their slashes the wrong direction for UNC paths.

A look at – The Architecture of the Microsoft Analytics Platform System

Architecture of the Microsoft Analytics Platform System

In April 2014, Microsoft announced the Analytics Platform System (APS) as Microsoft’s “Big Data in a Box” solution for addressing the growing volume and variety of data. APS is an appliance solution with hardware and software that is purpose-built and pre-integrated to address the overwhelming variety of data while providing customers the opportunity to access this vast trove of data. The primary goal of APS is to enable the loading and querying of terabytes and even petabytes of data in a performant way using a Massively Parallel Processing version of Microsoft SQL Server (SQL Server PDW) and Microsoft’s Hadoop distribution, HDInsight, which is based on the Hortonworks Data Platform.

Basic Design

An APS solution is comprised of three basic components:

  1. The hardware – the servers, storage, networking and racks.
  2. The fabric – the base software layer for operations within the appliance.
  3. The workloads – the individual workload types offering structured and unstructured data warehousing.

The Hardware

Utilizing commodity servers, storage, drives and networking devices from our three hardware partners (Dell, HP, and Quanta), Microsoft is able to offer a high-performance scale-out data warehouse solution that can grow to very large data sets while providing redundancy of each component to ensure high availability. Starting with standard servers and JBOD (Just a Bunch Of Disks) storage arrays, APS can grow from a simple 2-node-plus-storage configuration to 60 nodes. At scale, that means a warehouse that houses 720 cores, 14 TB of RAM, 6 PB of raw storage and ultra-high-speed networking using Ethernet and InfiniBand networks, while offering the lowest price per terabyte of any data warehouse appliance on the market (Value Prism Consulting).

Fabric

The fabric layer is built using technologies from the Microsoft portfolio that enable rock solid reliability, management and monitoring without having to learn anything new. Starting with Microsoft Windows Server 2012, the appliance builds a solid foundation for each workload by providing a virtual environment based on Hyper-V that also offers high availability via Failover Clustering all managed by Active Directory. Combining this base technology with Clustered Shared Volumes (CSV) and Windows Storage Spaces, the appliance is able to offer a large and expandable base fabric for each of the workloads while reducing the cost of the appliance by not requiring specialized or proprietary hardware. Each of the components offers full redundancy to ensure high-availability in failure cases.

Workloads

Building upon the fabric layer, the current release of APS offers two distinct workload types – structured data through SQL Server Parallel Data Warehouse (PDW) and unstructured data through HDInsight (Hadoop). These workloads can be mixed within a single appliance, offering customers the flexibility to tailor the appliance to the needs of their business.

SQL Server Parallel Data Warehouse is a massively parallel processing, shared nothing scale-out solution for Microsoft SQL Server that eliminates the need to ‘forklift’ additional very large and very expensive hardware into your datacenter to grow as the volume of data exhaust into your warehouse increases. Instead of having to expand from a large multi-processor and connected storage system to a massive multi-processor and SAN based solution, PDW uses the commodity hardware model with distributed execution to scale out to a wide footprint. This scale wide model for execution has been proven as a very effective and economical way to grow your workload.

HDInsight is Microsoft’s offering of Hadoop for Windows based on the Hortonworks Data Platform from Hortonworks. See the HDInsight portal for details on this technology. HDInsight is now offered as a workload on APS to allow for on-premises Hadoop that is optimized for data warehouse workloads. By offering HDInsight as a workload on the appliance, the pressure to define, construct and manage a Hadoop cluster has been minimized. And by using PolyBase, Microsoft’s SQL Server to HDFS bridge technology, customers can not only manage and monitor Hadoop through tools they are familiar with, but they can for the first time use Active Directory to manage security for the data stored within Hadoop – offering the same ease of use for user management offered in SQL Server.

Massively-Parallel Processing (MPP) in SQL Server

Now that we’ve laid the groundwork for APS, let’s dive into how we load and process data at such high performance and scale. The PDW region of APS is a scale-out version of SQL Server that enables parallel query execution to occur across multiple nodes simultaneously. The effect is the ability to break what appears to be a very large operation into tasks that can be managed at a smaller scale. For example, a query against 100 billion rows in a SQL Server SMP environment would require the processing of all of the data in a single execution space. With MPP, the work is spread across many nodes to break the problem into more manageable, easier-to-execute tasks. In a four-node appliance (see the picture below), each node is only asked to process roughly 25 billion rows – a much quicker task.

To accomplish such a feat, APS relies on a couple of key components to manage and move data within the appliance – a table distribution model and the Data Movement Service (DMS).

The first is the table distribution model that allows for a table to be either replicated to all nodes (used for smaller tables such as language, countries, etc.) or to be distributed across the nodes (such as a large fact table for sales orders or web clicks). By replicating small tables to each node, the appliance is able to perform join operations very quickly on a single node without having to pull all of the data to the control node for processing. By distributing large tables across the appliance, each node can process and return a smaller set of data returning only the relevant data to the control node for aggregation.

To create a table in APS that is distributed across the appliance, the user simply needs to add the key to which the table is distributed on:

CREATE TABLE [dbo].[Orders]
(
  [OrderId] ...
)
WITH
(
  DISTRIBUTION = HASH([OrderId])
)

This allows the appliance to split the data and place incoming rows onto the appropriate node in the appliance.

The second component is the Data Movement Service (DMS), which manages the routing of data within the appliance. DMS is used in partnership with the SQL Server query engine (which creates the execution plan) to distribute the execution plan to each node. DMS then aggregates the results back to the control node of the appliance, which can perform any final execution before returning the results to the caller. DMS is essentially the traffic cop within APS that enables queries to be executed and data moved within the appliance across 2-60 nodes.
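From the caller's side, none of this plumbing is visible; the control node answers like an ordinary SQL Server endpoint and DMS handles the distribution behind the scenes. A minimal sketch is shown below – the server name, port and database are placeholders for whatever your appliance exposes, and the table is the dbo.Orders example from above.

using System;
using System.Data.SqlClient;

class ApsQuerySample
{
    static void Main()
    {
        // Control node address, port and database are placeholders.
        const string connectionString =
            "Server=aps-ctl01,17001;Database=SalesDW;Integrated Security=SSPI";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT_BIG(*) FROM dbo.Orders", connection))
        {
            connection.Open();
            // The count runs in parallel across the distributions; only the
            // aggregated result comes back from the control node.
            Console.WriteLine("Row count: {0}", command.ExecuteScalar());
        }
    }
}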

Performance

With the introduction of Clustered Columnstore Indexes (CCI) in SQL Server, APS is able to take advantage of the performance gains to better process and store data within the appliance. In typical data warehouse workloads, we commonly see very wide table designs to eliminate the need to join tables at scale (to improve performance). The use of Clustered Columnstore Indexes allows SQL Server to store data in columnar format versus row format. This approach enables queries that don’t utilize all of the columns of a table to more efficiently retrieve the data from memory or disk for processing – increasing performance.

By combining CCI tables with parallel processing and the fast processing power and storage systems of the appliance, customers are able to improve overall query performance and data compression quite significantly versus a traditional single-server data warehouse. Oftentimes, this means reductions in query execution times from many hours to a few minutes or even seconds. The net result is that companies are able to take advantage of the exhaust of structured or non-structured data in real or near real-time to empower better business decisions.

Should we replace our File Shares with SharePoint?

One of the biggest areas of confusion for our customers who are new to SharePoint through Office 365 is whether they should put their documents and files into SharePoint instead of their existing file shares on the network or clients.
This is not as simple as it seems and in fact requires a fairly major change in mindset regarding document storage and management. Most organizations have become used to storing documents in the traditional file folder structure.

We all use files that are shared on the network. They are used for sharing documents and files in a central location. Security is set on file shares, folders and files, and the end user has been taught how to use network drive letters for finding, opening and saving documents. Users are also used to cascading down folder structures to find their documents (assuming they are familiar with the structure).

The folder based filing system has some disadvantages though.  Administrators and end users must learn how to work with the files and make sure that the files have the correct access permissions. Linking documents together, adding customized attributes (meta data) and specifying the way the documents are presented for a subset of users is not easy.

 

Searching through all file shares for documents containing specific words or created by a specific user can also be quite a slow process.  There is limited document management – no check-out/check-in – no way to apply approval processes, and compliance rules are not easily achieved.  Linking documents to subjects in business processes, such as linking an employee record to employee documents, requires programming.

Using SharePoint for Document Management

With Office 365 and SharePoint, you now have a powerful alternative to the file share.  With SharePoint Online you can store your files on the web and manage them with powerful document management tools.  SharePoint Online provides additional features compared to the typical Windows file share. Documents can be arranged in folders as usual, but they can also be given tags or “metadata” to classify them and allow alternative, multiple classifications.

 

Combine this with a full text search across all types of documents in all libraries and folders and the issue of finding that important document is a thing of the past.  Additionally you can link the documents to SharePoint lists to provide powerful links between business applications and documents.

For example, you can now find an agreement related to an account by going to the account record and looking at its related documents.

You can filter and sort document attributes to find all the agreements for a certain type of account.  Or if you are creating a new HR policy, you can collaborate on it as a team, with check-in/check-out control and approvals, even moving it to a portal library for read-only access by all employees.  These and other endless examples are powered by a list of document management features in SharePoint Online.  Here are a few (a short code sketch follows the list):

  • Workflows, such as an approval procedure, help automate simple or complex tasks – with or without user intervention
  • Versioning adds the ability to see older versions of documents and controls which users can see the latest published version and who can edit the draft for the next published version of the file or document
  • Item visibility – Users do not have the ability to see information that they do not have permission to see.
  • Set Alerts for changes –  you can set different types of email notifications when changes are made to the documents
  • Sharing  – choose to share libraries, or individual documents with internal and external users
  • Link to Subject – link documents to list items through lookups in the metadata.  This allows you to view the subject (ex. Account, Contact) and reference all the documents related to them.
  • Lifecycle management that can be activated for archiving old content
  • Powerful Filtering and Search – With SharePoint Online, cascading up and down directory trees is a thing of the past.  Now you can use Meta Data to filter and find documents, as well as a powerful search capability.

Finally, you can access documents from anywhere, at any time, on any device – making it easy to review and collaborate on documents even from the road.

Mindset Mistake: Creating a File Share on SharePoint

Now I am sure you are thinking, wow, this SharePoint stuff sounds pretty neat!  Well, it is – if implemented correctly.  A major mistake many organizations make is to deploy document management just like a file share on SharePoint.  They create a single massive site for all documents, create libraries with folder structures and load documents into them.  This is way underutilizing SharePoint!  It’s like driving a 6-speed car and never getting out of second gear.

Because SharePoint can do much more than just document management, you may want to think through where you put libraries.  With SharePoint you can create team sites for departments or teams where they can collaborate, track tasks, and manage documents.  So create an HR team site with document libraries and put the HR-related documents there.  Add metadata to the document items in the library which identifies which employee the document is for, so you can attach it to their record.  Put sales documents in a sales team site.  Create a folder for proposal templates, and link documents to accounts through metadata as well.  Create a Project Site and link project documents to specific projects.  Now you are using SharePoint as a real collaboration engine!

When to use SharePoint for Document Storage and When not to.

Some might ask themselves if they should move all their existing file shares to SharePoint Online to take advantage of the features.

The real answer is: no, not all of them. It depends on which kind of data you have and how you want to use or present it.  SharePoint is excellent for “active” files – files that are used as part of the business.  It does have storage limits, so you want to be careful how much storage you use.  Here are some guidelines:

Windows file share

  • Large file size
  • Do not change much
  • Archives, backups, etc.

Typical files for placement on Windows file shares are old archives, backup files and installation files for operating systems.

SharePoint document library

  • Small to mid-sized files
  • Changes regularly
  • Files used by teams on projects
  • Files and folders that need custom attributes and links/filters to these
  • Files that need to be indexed and searched for

Typical files for a SharePoint document library are documents, spreadsheets and presentations that would benefit from the SharePoint features.

Keep in mind that the user experience can differ greatly, and you may have to educate your users to use a new place to store files. If they are used to network drives, they may see a web interface as a challenge.

Features from SharePoint 2010 Integration with SAP BusinessObjects BI 4.0

One of the core concepts of Business Connectivity Services (BCS) for SharePoint 2010 is the external content type. External content types are reusable metadata descriptions of connectivity information and behaviours (stereotyped operations) applied to external data. SharePoint offers developers several ways to create external content types and integrate them into the platform.

 

The SharePoint Designer 2010, for instance, allows you to create and manage external content types that are stored in supported external systems. Such an external system could be SQL Server, WCF Data Service, or a .NET Assembly Connector.

This article shows you how to create an external content type for SharePoint named Customer based on given SAP customer data. The definition of the content type will be provided as a .NET assembly, and the data are displayed in an external list in SharePoint.

The SAP customer data are retrieved from the function module SD_RFC_CUSTOMER_GET. In general, function modules in a SAP R/3 system are comparable with public and static C# class methods, and can be accessed from outside of SAP via RFC (Remote Function Call). Fortunately, we do not need to program RFC calls manually. We will use the very handy ERPConnect library from Theobald Software. The library includes a LINQ to SAP provider and designer that makes our lives easier.

.NET Assembly Connector for SAP

The first step in providing a custom connector for SAP is to create a SharePoint project with the SharePoint 2010 Developer Tools for Visual Studio 2010. Those tools are part of Visual Studio 2010. We will use the Business Data Connectivity Model project template to create our project:

After defining the Visual Studio solution name and clicking the OK button, the project wizard will ask what kind of SharePoint 2010 solution you want to create. The solution must be deployed as a farm solution, not as a sandboxed solution. Visual Studio is now creating a new SharePoint project with a default BDC model (BdcModel1). You can also create an empty SharePoint project and add a Business Data Connectivity Model project item manually afterwards. This will also generate a new node to the Visual Studio Solution Explorer called BdcModel1. The node contains a couple of project files: The BDC model file (file extension bdcm), and the Entity1.cs and EntityService.cs class files.

Next, we add a LINQ to SAP file to handle the SAP data access logic by selecting the LINQ to ERP item from the Add New Item dialog in Visual Studio. This will add a file called LINQtoERP1.erp to our project. The LINQ to SAP provider is internally called LINQ to ERP. Double click LINQtoERP1.erp to open the designer. Now, drag the Function object from the designer toolbox onto the design surface. This will open the SAP connection dialog since no connection data has been defined so far:

Enter the SAP connection data and your credentials. Click the Test Connection button to test the connectivity. If you could successfully connect to your SAP system, click the OK button to open the function module search dialog. Now search for SD_RFC_CUSTOMER_GET, then select the found item, and click OK to open the RFC Function Module /BAPI dialog:

SP2010SAPToBCS/BCS12.png

The dialog provides you the option to define the method name and parameters you want to use in your SAP context class. The context class is automatically generated by the LINQ to SAP designer including all SAP objects defined. Those objects are either C# (or VB.NET) class methods and/or additional object classes used by the methods.

For our project, we need to select the export parameters KUNNR and NAME1 by clicking the checkboxes in the Pass column. These two parameters become our input parameters in the generated context class method named SD_RFC_CUSTOMER_GET. We also need to return the customer list for the given input selection. Therefore, we select the table parameter CUSTOMER_T on the Tables tab and change the structure name to Customer. Then, click the OK button on the dialog, and the new objects get added to the designer surface.

IMPORTANT: The flag “Create Objects Outside Of Context Class” must be set to TRUE in the property editor of the LINQ designer, otherwise LINQ to SAP generates the Customer class as nested class of the SAP context class. This feature and flag is only available in LINQ to SAP for Visual Studio 2010.

The LINQ designer has also automatically generated a class called Customer within the LINQtoERP1.Designer.cs file. This class will become our BDC model entity or external content type. But first, we need to adjust and rename our BDC model that was created by default from Visual Studio. Currently, the BDC model looks like this:

Rename the BdcModel1 node and file to CustomerModel. Since we already have an entity class (Customer), delete the file Entity1.cs and rename the EntityService.cs file to CustomerService.cs. Next, open the CustomerModel file and rename the designer object Entity1 to Customer. Then, change the entity identifier name from Identifier1 to KUNNR. You can also use the BDC Explorer for renaming. The final adjustment result should look as follows:

SP2010SAPToBCS/BCS4.png

The last step we need to do in our Visual Studio project is to change the code in the CustomerService class. The BDC model methods ReadItem and ReadList must be implemented using the automatically generated LINQ to SAP code. First of all, take a look at the code:

SP2010SAPToBCS/BCS6.png

As you can see, we basically have just a few lines of code. All of the SAP data access logic is encapsulated within the SAP context class (see the LINQtoERP1.Designer.cs file). The CustomerService class just implements a static constructor to set the ERPConnect license key and to initialize the static variable _sc with the SAP credentials as well as the two BDC model methods.

The ReadItem method, BCS stereotyped operation SpecificFinder, is called by BCS to fetch a specific item defined by the identifier KUNNR. In this case, we just call the SD_RFC_CUSTOMER_GET context method with the passed identifier (variable id) and return the first customer object we get from SAP.

The ReadList method, BCS stereotyped operation Finder, is called by BCS to return all entities. In this case, we just return all customer objects the SD_RFC_CUSTOMER_GET context method returns. The returned result is already of type IEnumerable<Customer>.
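Since the listing above is only shown as a screenshot, here is a rough reconstruction of what the two methods boil down to, based purely on the description in this article. The context class name, its constructor and the license/credential setup are assumptions; the real signatures live in the generated LINQtoERP1.Designer.cs file, so this fragment will not compile outside that project.

using System.Collections.Generic;
using System.Linq;

// Rough reconstruction - see the generated LINQtoERP1.Designer.cs for the real signatures.
public partial class CustomerService
{
    // LINQ to SAP context holding the SAP credentials (field name taken from the article).
    private static LINQtoERP1 _sc;

    static CustomerService()
    {
        // Set the ERPConnect license key and SAP connection data here (details omitted).
        _sc = new LINQtoERP1(/* SAP connection data */);
    }

    // SpecificFinder: BCS calls this to fetch one Customer by its identifier (KUNNR).
    public static Customer ReadItem(string id)
    {
        return _sc.SD_RFC_CUSTOMER_GET(id, string.Empty).FirstOrDefault();
    }

    // Finder: BCS calls this to return all Customer entities for the external list.
    public static IEnumerable<Customer> ReadList()
    {
        return _sc.SD_RFC_CUSTOMER_GET(string.Empty, string.Empty);
    }
}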

The final step is to deploy the SharePoint solution. Right-click on the project node in Visual Studio Solution Explorer and select Deploy. This will install and deploy the SharePoint solution on the server. You can also debug your code by just setting a breakpoint in the CustomerService class and executing the project with F5.

That’s all we have to do!

Now, start the SharePoint Central Administration panel and follow the link “Manage Service Applications”, or navigate directly to the URL http://<SERVERNAME>/_admin/ServiceApplications.aspx. Click on Business Data Connectivity Service to show all the available external content types:

On this page, we find our deployed BDC model including the Customer entity. You can click on the name to retrieve more details about the entity. Right now, there is just one issue open. We need to set permissions!

Mark the checkbox for our entity and click on Set Object Permissions in the Ribbon menu bar. Now, define the permissions for the users you want to allow to access the entity, and click the OK button. In the screen shown above, the user administrator has all the permissions possible.

In the next and final step, we will create an external list based on our entity. To do this, we open SharePoint Designer 2010 and connect to the SharePoint website.

Click on External Content Types in the Site Objects panel to display all the content types (see above). Double click on the Customer entity to open the details. The SharePoint Designer is reading all the information available by BCS.

In order to create an external list for our entity, click on Create Lists & Form on the Ribbon menu bar (see screenshot below) and enter CustomerList as the name for the external list.

OK, now we are done!

Open the list, and you should get the following result:

The external list shows all the defined fields for our entity, even though our Customer class, automatically generated by the LINQ to SAP, has more than those four fields. This means you can only display a subset of the information for your entity.

Another option is to just select those fields required within the LINQ to SAP designer. With the LINQ designer, you can access not just the SAP function modules. You can integrate other SAP objects, like tables, BW cubes, SAP Query, or IDOCs. A demo version of the ERPConnect library can be downloaded from the Theobald Software homepage.

If you click the associated link of one of the customer numbers in the column KUNNR (see screenshot above), SharePoint will open the details view:

SP2010SAPToBCS/BCS10.png

 

 

How To : A library to create .mht files (available at request)

There are a number of ways to do this, including hosting Word or Excel on the Web Server and dealing with COM Interop issues, or purchasing third-party MIME encoding libraries, some of which sell for $250.00 or more. But there is no native .NET solution. So, being the curious soul that I am, I decided to investigate a bit and see what I could come up with. Internet Explorer offers a File / Save As option to save a web page as “Web Archive, single file (*.mht)”.


What this does is create an RFC-compliant Multipart MIME Message. Resources such as images are serialized to their Base64 inline encoding representations and each resource is demarcated with the standard multipart MIME header breaks. Internet Explorer, Word, Excel and most newsreader programs all understand this format. The format, if saved with the file extension “.eml”, will come up as a web page inside Outlook Express; if saved with “.mht”, it will come up in Internet Explorer when the file is double-clicked out of Windows Explorer, and — what many do not know — if saved with a “*.doc” extension, it will load in MS Word, each with all the images intact, and in the case of the EML and MHT formats, with all of the hyperlinks fully functioning. The primary advantage of the format is, of course, that all the resources can be consolidated into a single file, making distribution and archiving much easier — including database storage in an NVarchar or NText type field.

 

System.Web.Mail, which .NET provides as a convenient wrapper around the CDO for Windows COM library, offers only a subset of the functionality exposed by the CDO library, and multipart MIME encoding is not a part of that functionality. However, through the wonders of COM Interop, we can create our own COM reference to CDO in the Visual Studio IDE, allowing it to generate a Runtime Callable Wrapper, and help ourselves to the entire rich set of functionality of CDO as we see fit.

 

One method in the CDO library that immediately came to my notice was the CreateMHTMLBody method. That’s MHTMLBody, meaning “Multipurpose Internet Mail Extension HTML (MHTML) Body”. Well! – when I saw that, my eyes lit up like the LEDs on a 32-way Unisys box! This is a method on the CDO Message class; the method accepts a URI to the requested resource, along with some enumerations, and creates a MultiPart MIME-encoded email message out of the requested URI responses — including images, css and script — in one fell swoop.

 

“Ah”, you say, “How convenient”! Yes, and not only that, but we also get a free “multipart COM Interop Baggage” reference to the ADODB.Stream object – and by simply calling the GetStream method on the Message Class, and then using the Stream’s SaveToFile method, we can grab any resource including images, javascript, css and everything else (except video) and save it to a single MHT Web Archive file just as if we chose the “Save As” option out of Internet Explorer.

 

If we choose not to save the file, but instead want to get back the stream contents, no problem. We just call Stream.ReadText(Stream.Size) and it returns a string containing the entire MHT encoded content. At that point we can do whatever we want with it – set a content header and Response.Write the content to the browser, for instance — or whatever.

 

For example, when we get back our “MHT” string, we can write the following code:

Response.ContentType = "application/msword";
Response.AddHeader("Content-Disposition", "attachment;filename=NAME.doc");
Response.Write(myDataString);

 

— and the browser will dutifully offer to save the file as a Word Document. It will still be Multipart MIME encoded, but the .doc extension on the filename allows Word to load it, and Word is smart enough to be able to parse and render the file very nicely. “Ah”, you are saying, “this is nice, and so is the price!”. Yup!

And, if you are serving this MIME-encoded file from out of your database, for example, and you would like it to be able to be displayed in the browser, just change the “NAME.doc” to “NAME.MHT”, and don’t set a content-type header. Internet Explorer will prompt the user to either save or open the file. If they choose “open”, it will be saved to the IE Temporary files and open up in the browser just as if they had loaded it from their local file system.
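In other words, the in-browser variant described above is the same response code without the Word content type – something along these lines:

Response.AddHeader("Content-Disposition", "attachment;filename=NAME.MHT");
Response.Write(myDataString);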

 

So, to answer a couple of questions that came up recently, yes — you can use this method to MHTML-encode any web page – even one that is dynamically generated, as with a report — provided it has a URL, and save the MIME-encoded content as a string in either an NVarchar or NText column in your database. You can then bring this string back out and send it to the browser, images, css, javascript and all.

Now here is the code for a small, very basic “Converter” class I’ve written to take advantage of the two scenarios specified above. Bear in mind, there is much more available in CDO, but I leave this wondrous trail of ecstatic discovery to your whims of fancy:

using System;
using System.Web;
using CDO;
using ADODB;
using System.Text;
namespace PAB.Web.Utils
{
 public class MIMEConverter
 {
  //private ctor as our methods are all static here
  private MIMEConverter()
  {
   
  }   
  public static bool SaveWebPageToMHTFile( string url, string filePath)
  {
   bool result=false;
   CDO.Message  msg = new CDO.MessageClass(); 
   ADODB.Stream  stm=null ;
   try
   {
    msg.MimeFormatted =true;   
    msg.CreateMHTMLBody(url,CDO.CdoMHTMLFlags.cdoSuppressNone, "" ,"" );
    stm = msg.GetStream();
    stm.SaveToFile(filePath, ADODB.SaveOptionsEnum.adSaveCreateOverWrite);
    msg = null;
    stm.Close();
    result = true;
   }
   catch { throw; }
   finally
   {
    //cleanup here
   }
   return result;
  }

  public static string ConvertWebPageToMHTString( string url )
  {
   string data = String.Empty;
   CDO.Message msg = new CDO.MessageClass();
   ADODB.Stream stm = null;
   try
   {
    msg.MimeFormatted = true;
    msg.CreateMHTMLBody(url, CDO.CdoMHTMLFlags.cdoSuppressNone, "", "");
    stm = msg.GetStream();
    data = stm.ReadText(stm.Size);
   }
   catch { throw; }
   finally
   {
    //cleanup here
   }
   return data;
  }
 }
}

 

NOTE: When using this type of COM Interop from an ASP.NET web page, it is important to remember that you must set the AspCompat=”true” directive in the Page declaration or you will be very disappointed at the results! This forces the ASP.NET page to run in STA threading model which permits “classic ASP” style COM calls. There is, of course, a significant performance penalty incurred, but realistically, this type of operation would only be performed upon user request and not on every page request.
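For reference, only the AspCompat attribute needs to be added to your existing page directive; a stripped-down example looks like this:

<%@ Page Language="C#" AspCompat="true" %>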

The downloadable zip file below contains the entire class library and a web solution that will exercise both methods when you fill in a valid URI with protocol, and a valid file path and filename for saving on the server. Unzip this to a folder that you have named “ConvertToMHT” and then mark the folder as an IIS Application so that a request such as “http://localhost/ConvertToMHT/WebForm1.aspx” will function correctly. You can then load the Solution file and it should work “out of the box”. And, don’t forget – if you have an ASP.NET web application that wants to write a file to the file system on the server, it must be running under an identity that has been granted this permission.

How To : Use JSON and SAP NetWeaver together

Background

In this example, SAP is used as the backend data source and the NWGW (NetWeaver Gateway) adapter exposes it as OData so that it is consumable from a .NET client.

Since the NWGW component is hosted on premises and our .NET client is hosted in Azure, we are consuming this data from Azure through the Service Bus relay. While transferring data from on premises to Azure over the SB relay, we are facing performance issues for a single user with large volumes of data, as well as with relatively small data volumes for concurrent users. So I did a POC for improving performance by consuming the OData service in JSON format.

What I Did?

I’ve created a simple WCF Data Service which has no underlying data source connectivity. In this service, when the context is initialized, a list of text messages is generated and exposed as OData.

Here is that simple service code:

using System;
using System.Collections.Generic;
using System.Data.Services;
using System.Data.Services.Common;
using System.Linq;
using System.ServiceModel;

[Serializable]
public class Message
{
    public int ID { get; set; }
    public string MessageText { get; set; }
}

public class MessageService
{
    List<Message> _messages = new List<Message>();

    public MessageService()
    {
        for (int i = 0; i < 100; i++)
        {
            Message msg = new Message
            {
                ID = i,
                MessageText = string.Format("My Message No. {0}", i)
            };
            _messages.Add(msg);
        }
    }

    public IQueryable<Message> Messages
    {
        get
        {
            return _messages.AsQueryable<Message>();
        }
    }
}

[ServiceBehavior(IncludeExceptionDetailInFaults = true)]
public class WcfDataService1 : DataService<MessageService>
{
    // This method is called only once to initialize service-wide policies.
    public static void InitializeService(DataServiceConfiguration config)
    {
        // TODO: set rules to indicate which entity sets
        // and service operations are visible, updatable, etc.
        // Examples:
        config.SetEntitySetAccessRule("Messages", EntitySetRights.AllRead);
        config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V3;
    }
}
I exposed one endpoint to the Azure Service Bus so that the client can consume this service through the SB relay endpoint. After hosting the service, I’m able to fetch data with a simple OData query from the browser.

I’m also able to fetch the data in JSON format.
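
For reference, here is a rough sketch (not from the original test harness) of how the same feed can be requested as JSON from plain HTTP client code by setting the Accept header; it assumes the relay endpoint is reachable in the same way it is from the browser, and reuses the placeholder service address from this example.

using System;
using System.Net.Http;
using System.Net.Http.Headers;

class JsonProbe
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Ask the OData V3 service for JSON instead of the default Atom/XML feed.
            client.DefaultRequestHeaders.Accept.Add(
                new MediaTypeWithQualityHeaderValue("application/json"));

            string json = client.GetStringAsync(
                "https://abc.servicebus.windows.net/SimpleService/WcfDataService1/Messages").Result;

            // The JSON payload is far smaller than the Atom equivalent.
            Console.WriteLine("Received {0} characters", json.Length);
        }
    }
}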

After that, I create a console client application and consume the service from there.

Sample Client Code

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading;

class Program
{
    static void Main(string[] args)
    {
        List<Thread> lst = new List<Thread>();

        for (int i = 0; i < 100; i++)
        {
            Thread person = new Thread(new ThreadStart(MyClass.JsonInvokation));
            person.Name = string.Format("person{0}", i);
            lst.Add(person);
            Console.WriteLine("before start of {0}", person.Name);
            person.Start();
            //Console.WriteLine("{0} started", person.Name);
        }
        Console.ReadKey();
        foreach (var item in lst)
        {
            item.Abort();
        }
    }
}

public class MyClass
{
    public static void JsonInvokation()
    {
        string personName = Thread.CurrentThread.Name;
        Stopwatch watch = new Stopwatch();
        watch.Start();
        try
        {
            SimpleService.MessageService svcJson =
                new SimpleService.MessageService(new Uri
                ("https://abc.servicebus.windows.net/SimpleService/WcfDataService1"));
            svcJson.SendingRequest += svc_SendingRequest;
            svcJson.Format.UseJson();
            var jdata = svcJson.Messages.ToList();

            watch.Stop();
            Console.WriteLine("Person: {0} - JsonTime First Call time: {1}",
                personName, watch.ElapsedMilliseconds);

            for (int i = 1; i <= 10; i++)
            {
                watch.Reset(); watch.Start();
                jdata = svcJson.Messages.ToList();
                watch.Stop();
                Console.WriteLine("Person: {0} - Json Call {1} time: {2}",
                    personName, 1 + i, watch.ElapsedMilliseconds);
            }

            Console.WriteLine(jdata.Count);
        }
        catch (Exception ex)
        {
            Console.WriteLine(personName + ": " + ex.Message);
        }
        Thread.Sleep(100);
    }

    public static void AtomInvokation()
    {
        string personName = Thread.CurrentThread.Name;

        try
        {
            Stopwatch watch = new Stopwatch();
            watch.Start();
            SimpleService.MessageService svc =
                new SimpleService.MessageService(new Uri
                ("https://abc.servicebus.windows.net/SimpleService/WcfDataService1"));
            svc.SendingRequest += svc_SendingRequest;
            var data = svc.Messages.ToList();

            watch.Stop();
            Console.WriteLine("Person: {0} - XmlTime First Call time: {1}",
                personName, watch.ElapsedMilliseconds);

            for (int i = 1; i <= 10; i++)
            {
                watch.Reset(); watch.Start();
                data = svc.Messages.ToList();
                watch.Stop();
                Console.WriteLine("Person: {0} - Xml Call {1} time: {2}",
                    personName, 1 + i, watch.ElapsedMilliseconds);
            }

            Console.WriteLine(data.Count);
        }
        catch (Exception ex)
        {
            Console.WriteLine(personName + ": " + ex.Message);
        }
        Thread.Sleep(100);
    }
}
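
Both methods above wire up a svc_SendingRequest handler that isn't shown in the listing. A minimal sketch of such a handler, assuming it only logs the outgoing request URI, could be added to MyClass like this:

public static void svc_SendingRequest(object sender,
    System.Data.Services.Client.SendingRequestEventArgs e)
{
    // Log each outgoing request so the Atom and JSON calls can be correlated with the timings.
    Console.WriteLine("{0} requesting {1}", Thread.CurrentThread.Name, e.Request.RequestUri);
}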

 

What I Test After That
I tested two separate scenarios:

Scenario I: Single user with small and large volume of data
I measured the data transfer time periodically, first in XML format and then in JSON format. You might notice that the first call is printed separately in each screenshot, as it takes additional time to connect to the SB endpoint; the secret key authentication happens during that first call.

Small data set (array size 10): consume in XML format.

 

Consume in JSON format:

 

For a small set of data, the JSON and XML response times over the Service Bus relay are almost the same.

Consuming Large volume of data (Array Size 100)

 

Here the XML message size is around 51 KB. Now I’m going to consume the same list of data (Array size 100) in JSON format.

 

So from the above test scenario, it is very clear that the JSON response time is much faster than the XML response time, and the reason is message size: when I fetch the list of 100 records in XML format the message size is 51.2 KB, but the JSON message size is only 4.4 KB.

Scenario II: 100 Concurrent user with large volume of data (array size 100)
In this concurrent user load test, I haven’t done any service throttling or max concurrent connection configuration.

 

In the above screenshot, you will find some timeout errors in the XML response, caused by the high response time over the relay. But when I executed the same test with the JSON response, I found the response time was quite stable and faster than XML, and I did not get any timeouts.

 

How Easy to Use UseJson()
If you are using WCF Data Services 5.3 or above and VS2012 Update 3, then to consume the JSON format from the client you simply instantiate the proxy/context and call .Format.UseJson().

Here you don’t need to load the Edmx structure separately by writing any custom code. .NET CodeGen will generate that code when you add the service reference.

 

But if that code is not generated in your environment, then you have to write a few lines of code to load the EDMX yourself and use it as .Format.UseJson(LoadEdmx(...));

Sample Code for Loading Edmx

// Requires System.IO, System.Xml and the Microsoft.Data.Edm library
// (EdmxReader lives in the Microsoft.Data.Edm.Csdl namespace).
public static IEdmModel LoadEdmx(string srvName)
{
    string executionPath = Directory.GetCurrentDirectory();
    DirectoryInfo di = new DirectoryInfo(executionPath).Parent;
    var parent1 = di.Parent;
    var srv = parent1.GetDirectories("Service References\\" +
        srvName)[0].GetFiles("service.edmx")[0].FullName;

    XmlDocument doc = new XmlDocument();
    doc.Load(srv);
    var xmlreader = XmlReader.Create(new StringReader(doc.DocumentElement.OuterXml));

    IEdmModel edmModel = EdmxReader.Parse(xmlreader);
    return edmModel;
}
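
With that helper in place, and assuming the service reference is named SimpleService as in the client code above, switching the context to JSON would look roughly like this:

var ctx = new SimpleService.MessageService(
    new Uri("https://abc.servicebus.windows.net/SimpleService/WcfDataService1"));

// Pass the EDM model loaded from the service reference's .edmx file when the
// generated code does not enable JSON support automatically.
ctx.Format.UseJson(LoadEdmx("SimpleService"));

var messages = ctx.Messages.ToList();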

Microsoft releases .Net 4.5.2 Framework and Developer Pack

You can download the releases now,

Image

We incorporated feedback we received for the .NET Framework 4.5.1 from different feedback sources to provide a faster release cadence. In this blog post we will talk about some of the new features we are delivering in the .NET Framework 4.5.2.

ASP.NET improvements

  • New HostingEnvironment.QueueBackgroundWorkItem method that lets you schedule small background work items. ASP.NET tracks these items and prevents IIS from abruptly terminating the worker process until all background work items have completed. These enable ASP.NET applications to reliably schedule async work items (see the sketch after this list).
  • New HttpResponse.AddOnSendingHeaders and HttpResponseBase.AddOnSendingHeaders methods are more reliable and efficient than HttpApplication.PreSendRequestContent and HttpApplication.PreSendRequestHeaders. These APIs let you inspect and modify response headers and status codes as the HTTP response is being flushed to the client application. These reliability improvements minimize deadlocks and crashes between IIS and ASP.NET.
  • New HttpResponse.HeadersWritten and HttpResponseBase.HeadersWritten properties that return Boolean values to indicate whether the response headers have been written. You can use these properties to make sure that calls to APIs such as HttpResponse.StatusCode succeed. This enables shared hosting scenarios for ASP.NET applications.
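
As a rough illustration of the first bullet above, a hypothetical snippet that queues background work from application code might look like this (the work itself is just a placeholder delay):

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Hosting;

public static class BackgroundWork
{
    public static void QueueCleanup()
    {
        // ASP.NET tracks this work item and tries to delay worker-process shutdown
        // until it completes (or the cancellation token is signalled).
        HostingEnvironment.QueueBackgroundWorkItem(async (CancellationToken ct) =>
        {
            await Task.Delay(TimeSpan.FromSeconds(10), ct); // placeholder for real work
        });
    }
}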

High DPI Improvements is an opt-in feature to enable resizing according to the system DPI settings for several glyphs or icons for the following Windows Forms controls: DataGridView, ComboBox, ToolStripComboBox, ToolStripMenuItem and Cursor. Here are examples of before and after views once this change is opted into.

Comparing .NET 4.5.1 and .NET 4.5.2 controls with a High DPI setting:

  • .NET 4.5.1: the red error glyph barely shows up and will eventually disappear with high scaling. .NET 4.5.2: the red error glyph scales correctly.
  • .NET 4.5.1: the ToolStripMenu drop-down arrow is barely visible and eventually won’t be usable with high scaling. .NET 4.5.2: the drop-down arrow in the ToolStripMenu scales correctly.

Distributed transactions enhancement enables promotion of local transactions to Microsoft Distributed Transaction Coordinator (MSDTC) transactions without the use of another application domain or unmanaged code. This has a significant positive impact on the performance of distributed transactions.
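
For context, this is the kind of code, sketched here with placeholder connection strings and table names, that causes a local transaction to be promoted to an MSDTC transaction; the 4.5.2 enhancement changes how that promotion happens internally, not the code you write:

using System.Data.SqlClient;
using System.Transactions;

class PromotionExample
{
    static void Transfer()
    {
        using (var scope = new TransactionScope())
        {
            // The first connection enlists in a lightweight local transaction.
            using (var source = new SqlConnection("Server=ServerA;Database=Db1;Integrated Security=true"))
            {
                source.Open();
                new SqlCommand("UPDATE Accounts SET Balance = Balance - 10 WHERE Id = 1", source).ExecuteNonQuery();
            }

            // Opening a second connection against a different server promotes the
            // transaction to a distributed (MSDTC) transaction.
            using (var target = new SqlConnection("Server=ServerB;Database=Db2;Integrated Security=true"))
            {
                target.Open();
                new SqlCommand("UPDATE Accounts SET Balance = Balance + 10 WHERE Id = 1", target).ExecuteNonQuery();
            }

            scope.Complete();
        }
    }
}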

More robust profiling with new profiling APIs that require dependent assemblies that are injected by the profiler to be loadable immediately, instead of being loaded after the app is fully initialized. This change does not affect users of the existing ICorProfiler APIs. Before this feature, diagnostics tools that do IL instrumentation via profiling API could cause unhandled exceptions to be thrown, unexpectedly terminating the process.

Improved activity tracing support in runtime and framework – The .NET Framework 4.5.2 enables out-of-process, Event Tracing for Windows (ETW)-based activity tracing for a larger surface area. This enables Application Performance Management vendors to provide lightweight tools that accurately track the costs of individual requests and activities that cross threads. These events are raised only when ETW controllers enable them.

For more information on usage of these features please refer to “What’s New in the .NET Framework 4.5.2”. Besides these features, there are many reliability and performance improvements across different areas of the .NET Framework.

Here are additional installers – pick package(s) most suitable for your needs based on your deployment scenario:

  • .NET Framework 4.5.2 Web Installer – A Bootstrapper that pulls in components based on the target OS/platform specs on which the .NET Framework is being deployed. Internet access is required.
  • .NET Framework 4.5.2 Offline Installer – The Full Package for offline deployments. Internet access is not required.
  • .NET Framework 4.5.2 Language Packs – Language specific support. You need to install the .NET Framework (language neutral) package before installing one or more language packs.
  • .NET Framework 4.5.2 Developer Pack – This will install .NET Framework Multi-targeting pack for building apps targeting .NET Framework 4.5.2 and also .NET Framework 4.5.2 runtime. Useful for build machines that need both the runtime and the multi-targeting pack

 

DRY Architecture, Layered Architecture, Domain Driven Design and a Framework to build great Single-Page Web Applications – Boilerplate Part 1

DRY – Don’t Repeat Yourself! – is one of the core principles of good software development. We try to apply it everywhere, from simple methods to classes and modules. But what about developing a new web-based application? We, as software developers, have similar needs when developing enterprise web applications.

Enterprise web applications need login pages, user/role management infrastructure, user/application setting management, localization and so on. Also, high-quality, large-scale software implements best practices such as Layered Architecture, Domain Driven Design (DDD) and Dependency Injection (DI), and we use tools for Object-Relational Mapping (ORM), database migrations, logging and so on. When it comes to the User Interface (UI), it’s not much different.

Starting a new enterprise web application is hard work. Since all applications need some common tasks, we’re repeating ourselves. Many companies develop their own application frameworks or libraries for such common tasks so they don’t re-develop the same things. Others copy parts of existing applications to prepare a starting point for their new application. The first approach is pretty good if your company is big enough and has time to develop such a framework.

As a software architect, I also developed such a framework in my company. But one thing still bothered me: many companies repeat the same tasks. What if we could share more and repeat less? What if the DRY principle were applied universally instead of per project or per company? It sounds utopian, but I think there may be a starting point for that!

What is ASP.NET Boilerplate?

http://www.aspnetboilerplate.com/

ASP.NET Boilerplate [1] is a starting point for new modern web applications, using best practices and the most popular tools. It’s intended to be a solid model, a general-purpose application framework and a project template. What does it do?

  • Server side
    • Based on latest ASP.NET MVC and Web API.
    • Implements Domain Driven Design (Entities, Repositories, Domain Services, Application Services, DTOs, Unit of Work… and so on)
    • Implements Layered Architecture (Domain, Application, Presentation and Infrastructure Layers).
    • Provides an infrastructure to develop reusable and composable modules for large projects.
    • Uses most popular frameworks/libraries as (probably) you’re already using.
    • Provides an infrastructure and makes it easy to use Dependency Injection (uses Castle Windsor as the DI container).
    • Provides a strict model and base classes to use Object-Relational Mapping easily (uses NHibernate, can work with many DBMSs).
    • Implements database migrations (uses FluentMigrator).
    • Includes a simple and flexible localization system.
    • Includes an EventBus for server-side global domain events.
    • Manages exception handling and validation.
    • Creates dynamic Web API layer for application services.
    • Provides base and helper classes to implement some common tasks.
    • Uses convention over configuration principle.
  • Client side
    • Provides two project templates: one for Single-Page Applications using Durandal.js, the other for Multi-Page Applications. Both templates use Twitter Bootstrap.
    • Most used libraries are included by default: Knockout.js, Require.js, jQuery and some useful plug-ins.
    • Creates dynamic javascript proxies to call application services (using dynamic Web API layer) easily.
    • Includes unique APIs for some common tasks: showing alerts & notifications, blocking the UI, making AJAX requests.

Besides this common infrastructure, the “Core Module” is being developed. It will provide a role- and permission-based authorization system (implementing the ASP.NET Identity Framework), a settings system and so on.

What ASP.NET Boilerplate is not?

ASP.NET Boilerplate provides an application development model built on best practices. It has base classes, interfaces and tools that make it easy to build maintainable, large-scale applications. But…

  • It’s not a RAD (Rapid Application Development) tool of the kind that provides infrastructure for building applications without coding. Instead, it provides an infrastructure to code with best practices.
  • It’s not a code generation tool. While it has several features that build dynamic code at run time, it does not generate code.
  • It’s not an all-in-one framework. Instead, it uses well-known tools/libraries for specific tasks (like NHibernate for O/RM, Log4Net for logging, Castle Windsor as the DI container).

Getting started

In this article, I’ll show how to develop a Single-Page, responsive web application using ASP.NET Boilerplate (I’ll call it ABP from now on). The sample application is named “Simple Task System” and consists of two pages: one for the list of tasks, the other for adding new tasks. A task can be related to a person and can be marked as completed. The application is localized in two languages. A screenshot of the Task List page is shown below:

A screenshot of 'Simple Task System'

Creating empty web application from template

ABP provides two templates to start a new project (even though you can manually create your project and get the ABP packages from NuGet, the template way is much easier). Go to www.aspnetboilerplate.com/Templates to create your application from one of two templates (one for SPA (Single-Page Application) projects, one for MPA (classic, Multi-Page Application) projects):

Creating template from ABP web site

I named my project SimpleTaskSystem and created a SPA project. The site downloaded the project as a zip file. When I open the zip file, I see a ready-made solution that contains assemblies (projects) for each layer of Domain Driven Design:

Project files

The created project targets .NET Framework 4.5.1, and I advise opening it with Visual Studio 2013. The only prerequisite to run the project is to create a database. The SPA template assumes that you’re using SQL Server 2008 or later, but you can easily change it to another DBMS.

See the connection string in web.config file of the web project:

<add name="MainDb" connectionString="Server=localhost; Database=SimpleTaskSystemDb; Trusted_Connection=True;" />

You can change the connection string here. I don’t change the database name, so I create an empty database named SimpleTaskSystemDb in SQL Server:

Empty database

That’s it, your project is ready to run! Open it in VS2013 and press F5:

First run

The template consists of two pages: a Home page and an About page. It’s localized in English and Turkish. And it’s a Single-Page Application! Try navigating between the pages; you’ll see that only the content changes, the navigation menu stays fixed, and all scripts and styles are loaded only once. It’s also responsive: try changing the size of the browser.

Now, I’ll show how to turn this template into the Simple Task System application, layer by layer, in the coming Part 2.

Microsoft BI and the new PowerQuery for Excel – How we empower users

Introduction to Microsoft Power Query for Excel

Microsoft Power Query for Excel enhances self-service business intelligence (BI) for Excel with an intuitive and consistent experience for discovering, combining, and refining data across a wide variety of sources including relational, structured and semi-structured, OData, Web, Hadoop, Azure Marketplace, and more. Power Query also provides you with the ability to search for public data from sources such as Wikipedia.

With Power Query 2.10, you can share and manage queries as well as search data within your organization. Users in the enterprise can find and use these shared queries (if they are shared with them) to use the underlying data in the queries for their data analysis and reporting. For more information about how to share queries, see Share Queries.

With Power Query, you can

  • Find and connect data across a wide variety of sources.
  • Merge and shape data sources to match your data analysis requirements or prepare it for further analysis and modeling by tools such as Power Pivot and Power View.
  • Create custom views over data.
  • Use the JSON parser to create data visualizations over Big Data and Azure HDInsight.
  • Perform data cleansing operations.
  • Import data from multiple log files.
  • Perform Online Search for data from a large collection of public data sources including Wikipedia tables, a subset of Microsoft Azure Marketplace, and a subset of Data.gov.
  • Create a query from your Facebook likes that renders an Excel chart.
  • Pull data into Power Pivot from new data sources, such as XML, Facebook, and File Folders as refreshable connections.
  • With Power Query 2.10, you can share and manage queries as well as search data within your organization.

New updates for Power Query

The Power Query team has been busy adding a number of exciting new features to Power Query. You can download the update from this page.

New features for Power Query include the following, please read the rest of this blog post for specific details for each.

  • New Data Sources
    • Updated “Preview” functionality of the SAP BusinessObjects BI Universe connectivity
    • Access tables and named ranges in a workbook
  • Improvements to Query Load Settings
    • Customizable Defaults for Load Settings in the Options dialog
    • Automatic suggestion to load a query to the Data Model when it goes beyond the worksheet limit
    • Preserve data in the Data Model when you modify the Load to Worksheet setting of a query that is loaded to the Data Model
  • Improvements to Query Refresh behaviors in Excel
    • Preserve Custom Columns, Conditional Formatting and other customizations of worksheet tables
    • Preserve results from a previous query refresh when a new refresh attempt fails
  • New Transformations available in the Query Editor
    • Remove bottom rows
    • Fill up
    • New statistic operations in the Insert tab
  • Other Usability Improvements
    • Ability to reorder queries in the Workbook Queries pane
    • More discoverable way to cancel a preview refresh in the Query Editor
    • Keyboard support for navigation and rename in the Steps pane
    • Ability to view and copy errors in the Filter Column dropdown menu
    • Remove items directly from the Selection Well in the Navigator
    • Send a Frown for Service errors

Connect to SAP BusinessObjects BI Universe (Preview)

This connectivity has been a separate Preview feature for the last month or so. In this release, we are incorporating the SAP BusinessObjects BI Universe connector Preview capabilities as part of the main Power Query download for ease of access. With Microsoft Power Query for Excel, you can easily connect to an SAP BusinessObjects BI Universe to explore and analyze data across your enterprise.

Access tables and named ranges in an Excel workbook

With From Excel Workbook, you can now connect to tables and named ranges in your external workbook sheets. This simplifies the process of selecting useful data from an external workbook, which used to be limited to sheets and users had to “manually” scrape the data (using Query transform operations).

 

Customizable Defaults for Load Settings in the Options dialog

You can override the Power Query default Load Settings in the Options dialog. This will set the default Load Settings behavior for new queries in areas where Load Settings are not exposed directly to the user, such as in Online Search results and the Navigator task pane in single-table import mode. In addition, this will set the default state for Load Settings where these settings are available including the Query Editor, and Navigator in multi-table import mode.

           

Preserve Custom Columns, Conditional Formatting and other customizations of worksheet tables

With this Power Query Update, Custom Columns, conditional formatting in Excel, and other customizations of worksheet tables are preserved after you refresh a query. Power Query will preserve worksheet customizations such as Data Bars, Color Scales, Icon Sets or other value-based rules across refresh operations and after query edits.

Preserve results from a previous query refresh when a new refresh attempt fails

After a refresh fails, Power Query will now preserve the previous query results. This allows you to work with slightly older data in the worksheet or Data Model and lets you refresh the query results after fixing the cause of errors.

Automatic suggestion to load a query to the Data Model when it goes beyond the worksheet limits

When you are working with large volumes of data in your workbook, you could reach the limits of Excel’s worksheet size. When this occurs, Power Query will automatically recommend to load your query results to the Data Model. The Data Model can store very large data sets.

Preserve data in the Data Model when modifying the Load to Worksheet setting of a query that is loaded to the Data Model

With Power Query, data and annotations on the Data Model are preserved when modifying the Load to Worksheet setting of a query. Previously, Power Query would reset the query results in both the worksheet and the Data Model when modifying either one of the two load settings.      

Remove Bottom Rows

A very common scenario, especially when importing data from the Web and other semi-structured sources, is having to remove the last few rows of data because the contents do not belong to the data set. For instance, it’s common to remove links to previous/next pages or comments. Previously, this was possible only by using a composition of custom formulas in Power Query. This transformation is now much easier by adding a library function called Table.RemoveLastN(), and a button for this transformation in the Home tab of the Query Editor ribbon.

 

Fill Up

Power Query already supports the ability to fill down values in a column to neighboring empty cells. Starting with this update, you can now fill values up within a column as well. This new transformation is available as a new library function called Table.FillUp(), and a button on the Home tab of the Query Editor ribbon.

New Statistics operations in the Insert tab

The Insert tab provides various ways to insert new columns in queries, based on custom formulas or by deriving values based on other columns. You can now apply Statistics operations based on values from different columns, row by row, in their table.

 

Ability to reorder queries in the Workbook Queries pane

With the latest Power Query update, you can move queries up or down in the Workbook Queries pane. You can right-click on a query and select Move Up or Move Down to reorder queries.

More discoverable way of cancelling refresh of a preview in the Query Editor

The Cancel option is now much more discoverable inside the Query Editor dialog. In addition to the Refresh dropdown menu in the ribbon, this option can now be found in the status bar at the bottom right corner of the Query Editor, next to the download status information.

  

Keyboard support for navigation and rename in the Steps pane

You can now use the Up/Down Arrow keys to navigate between steps in your query. Also, press the F2 key to rename the current step.

Ability to view and copy errors in the Filter Column dropdown menu

You can easily view and copy error details inside the Filter Column menu. This is very useful to troubleshoot errors while retrieving filter values.

Remove items directly from the Selection Well in the Navigator

You can remove items directly from the Selection Well instead of having to find the original item in the Navigator tree to deselect it.

 

Send a Frown for Service errors

We try as hard as possible to improve the quality of Power Query and all of its features. Even then, there are cases in which errors can happen. You can now send a frown directly from experiences where a service error happened, for instance, an error retrieving a Search result preview or downloading a query from the Data Catalog. This will give us enough information about the service request that failed and the client state to troubleshoot the issue.

That’s all for this update! We hope that you enjoy these new Power Query features. Please don’t hesitate to contact us via the Power Query Forum or send us a smile/frown with any questions or feedback that you may have.

You can also follow these links to access more resources about Power Query and Power BI:

Great Agile Development Tool – Agile Planner

Great Agile Development project – http://agileplanner.codeplex.com/

Project Description

This project is to develop an iteration planning tool for agile project management.

What’s new?

Release: 1.0.0, runs in Visual Studio integrated mode. See “How to use” below for details.

What is Agile Planner?

This tool is for agile project teams who are currently using sticky notes on the wall. With this tool, stories, the backlog and iterations are managed in a graphic designer, saved as files within Visual Studio projects, and can be exported to images, reports, etc.

Main features are

  • Stories can be dragged and dropped between the backlog and iterations
  • An iteration’s capacity is calculated automatically based on the stories within it
  • Stories are rendered based on a customizable status or priority color scheme
  • The diagram can be exported to a PNG image for printing, documentation and sharing

Here are examples.

Stories rendered based on status
agile-stories-on-status-s.png

Stories rendered based on priority
agile-stories-on-priority-s.png

 

How to use Agile Planner

1. Install
To install Agile Planner,

  • Download the runtime binary zip file from the latest releases
  • Extract all files from the runtime binary zip file
  • Run the Windows installer AgilePlanner.msi (requires an elevated command prompt under Vista and administrator rights on XP/2003).

2. Get Started
To start using Agile Planner in a Visual Studio 2008 project:

  • Start Visual Studio 2008, create new project or load existing project
  • Right click the project name and select menu “Add | New Item …”

add-new-item.PNG

  • Select AgilePlanner
  • Dismiss the security warning if it shows up

You will be presented with a design environment like the one below.

designer.PNG

  1. Graphical designer for iterations and stories
  2. Toolbox for iterations and stories
  3. Treeview Explorer for iterations and stories
  4. Property window for iterations and stories

3. Plan your project’s iterations

  • Create iterations by dragging the iteration tool from the toolbox to the graphical designer
  • Create stories by dragging the story tool from the toolbox to the backlog and iterations
  • Edit stories’ properties, such as name, capacity, priority and status, in the property window

add-stories.PNG

Notice: the capacity of an iteration is updated automatically after stories are dragged between iterations and after the stories’ capacity property is updated, so that you can balance the workload between iterations.

4. Render the diagram
The stories can be colored based on either their status or priority. To switch between these two options, right-click the diagram and select the menu “Color on Status” or “Color on Priority”. The color schemes are customizable as a property of the project.

render-in-priority.PNG

5. Export
The rendered diagram can be exported to a PNG file by right-clicking on the diagram and selecting the menu “Export to image”.

exported.png

How to use it?

See how to use

Introduction to Cloud Automation


Provision Azure Environment Resources


This is where we can see proof of evolution.

As you saw in the bulleted list of chronological blog posts (above), my first venture into Automating the Public Cloud leveraged Orchestrator + the Integration Pack for Windows Azure. My second release leveraged PowerShell and PowerShell Workflow + the Windows Azure Cmdlets.

Let’s get down to the goods. And actually, for the first time in a long time, my published example came out a couple days before the blog post / teaser!


Script Center Contribution and Download

The download is the example: New-AzureEnvironmentResources.ps1

Here is a brief description:

This runbook creates a number of Azure Environment Resources (in sequence): Azure Affinity Group, Azure Cloud Service, Azure Storage Account, Azure Storage Container, Azure VM Image, and Azure VM. It also requires the Upload of a VHD to a specified storage container mid-process.

A detailed Description, full set of Requirements, and the actual Runbook Contents are available within the Script Center Contribution (not to mention, the actual download).

Download the Provision Azure Environment Resources Example Runbook from Script Center here:

BC-DLButtonDark


A bit more about the Requirements…

Runbook Parameters

  • Azure Connection Name

    REQUIRED. Name of the Azure connection setting that was created in the Automation service.
        This connection setting contains the subscription id and the name of the certificate setting that
        holds the management certificate. It will be passed to the required and nested Connect-Azure runbook.

  • Project Name

    REQUIRED. Name of the Project for the deployment of Azure Environment Resources. This name is leveraged
        throughout the runbook to derive the names of the Azure Environment Resources created.

  • VM Name

    REQUIRED. Name of the Virtual Machine to be created as part of the Project.

  • VM Instance Size

    REQUIRED. Specifies the size of the instance. Supported values are as below with their (cores, memory)
        “ExtraSmall” (shared core, 768 MB),
        “Small”      (1 core, 1.75 GB),
        “Medium”     (2 cores, 3.5 GB),
        “Large”      (4 cores, 7 GB),
        “ExtraLarge” (8 cores, 14 GB),
        “A5”         (2 cores, 14 GB),
        “A6”         (4 cores, 28 GB),
        “A7”         (8 cores, 56 GB)

  • Storage Account Name

    OPTIONAL. This parameter should only be set if the runbook is being re-executed after an existing
    and unique Storage Account Name has already been created, or if a new and unique Storage Account Name
    is desired. If left blank, a new and unique Storage Account Name will be created for the Project. The
    format of the derived Storage Account Names is:
        $ProjectName (lowercase) + [Random lowercase letters and numbers] up to a total Length of 23


Other Requirements

  • An existing connection to an Azure subscription

  • The Upload of a VHD to a specified storage container mid-process. At this point in the process, the runbook will intentionally suspend and notify the user; after the upload, the user simply resumes the runbook and the rest of the creation process continues.

  • Six (6) Automation Assets (to be configured in the Assets tab). These are suggested, but not necessarily required. Replacing the “Get-AutomationVariable” calls within this runbook with static or parameter variables is an alternative method. For this example though, the following dependencies exist:
        VARIABLES SET WITH AUTOMATION ASSETS:
             $AGLocation = Get-AutomationVariable -Name 'AGLocation'
             $GenericStorageContainerName = Get-AutomationVariable -Name 'GenericStorageContainer'
             $SourceDiskFileExt = Get-AutomationVariable -Name 'SourceDiskFileExt'
             $VMImageOS = Get-AutomationVariable -Name 'VMImageOS'
             $AdminUsername = Get-AutomationVariable -Name 'AdminUsername'
             $Password = Get-AutomationVariable -Name 'Password'

Note     The entire runbook is heavily checkpointed and can be run multiple times without resource recreation.


Upload of a VHD

Waaaaait a minute! That seems like a pretty big step, how am I going to accomplish that?

I am so glad you asked.

To make this easier (for all of us), I created a separate PowerShell Workflow Script to take care of this step. In fact, it is the same one I used during the creation and testing of New-AzureEnvironmentResources.ps1.

Here it is (the contents of a file I called Upload-LocalVHDtoAzure.ps1):

param
(
    [Parameter(Mandatory=$true)]
    [string]$AzureSubscriptionName,
    [Parameter(Mandatory=$true)]
    [string]$ProjectName,
    [Parameter(Mandatory=$true)]
    [string]$StorageAccountName
)

workflow Upload-LocalVHDtoAzure { 

    param 
    ( 
        [string]$StorageContainerName, 
        [string]$VHDName, 
        [string]$SourceVHDPath, 
        [string]$DestinationBlobURI, 
        [bool]$OverWrite 
    ) 
    
    $AzureSubscriptionForWorkflow = Get-AzureSubscription 

    $AzureBlob = Get-AzureStorageBlob -Container $StorageContainerName -Blob $VHDName -ErrorAction SilentlyContinue 
    
    if(!$AzureBlob -or $OverWrite) { 

        $AzureBlob = Add-AzureVhd -LocalFilePath $SourceVHDPath -Destination $DestinationBlobURI -OverWrite:$OverWrite 
    } 

    Return $AzureBlob 

}

$GenericStorageContainerName = "vhds"

$SourceDiskName = "toWindowsAzure"
$SourceDiskFileExt = "vhd"
$SourceDiskPath = "D:\Drop\Azure\toAzure"
$SourceVHDName = "{0}.{1}" -f $SourceDiskName,$SourceDiskFileExt
$SourceVHDPath = "{0}\{1}" -f $SourceDiskPath,$SourceVHDName

$DesitnationVHDName = "{0}.{1}" -f $ProjectName,$SourceDiskFileExt
$DestinationVHDPath = "https://{0}.blob.core.windows.net/{1}" -f $StorageAccountName,$GenericStorageContainerName
$DestinationBlobURI = "{0}/{1}" -f $DestinationVHDPath,$DesitnationVHDName
$OverWrite = $false

Select-AzureSubscription -SubscriptionName $AzureSubscriptionName
Set-AzureSubscription -SubscriptionName $AzureSubscriptionName -CurrentStorageAccount $StorageAccountName

$AzureBlobUploadJob = Upload-LocalVHDtoAzure -StorageContainerName $GenericStorageContainerName -VHDName $DesitnationVHDName `
    -SourceVHDPath $SourceVHDPath -DestinationBlobURI $DestinationBlobURI -OverWrite $OverWrite -AsJob 
Receive-Job -Job $AzureBlobUploadJob -AutoRemoveJob -Wait -WriteEvents -WriteJobInResults

Note     This is just one method of uploading a VHD to Azure for a specified Storage Account. I have parameterized the entire script so it could be run from the command line as a PS1 file. Obviously you can do with this as you please.

 


Testing and Proof of Execution

I figured you might want to see the results of my testing during my development of the Provision Azure Environment Resources example…so here are some screen captures from the Azure Automation interface:

Dashboard

image

Runbooks

image

Assets

image

Azure All Items View

You know, to prove that I created something with these scripts…

image

How To : Use Powershell and TFS together

The absolute basics

Where does a newbie to Windows PowerShell start—particularly in regards to TFS? There are a few obvious places. I’m hardly the first person to trip across the natural peanut-butter-and-chocolate nature of TFS and Windows PowerShell together. In fact, the TFS Power Tools contain a set of cmdlets for version control and a few other functions.

Image

There is one issue when downloading them, however. The “typical” installation of the Power Tools leaves out the Windows PowerShell cmdlets! So make sure you choose “custom” and select those Windows PowerShell cmdlets manually.

After they’re installed, you also might need to manually add them to Windows PowerShell before you can start using them. If you try Get-Help for one of the cmdlets and see nothing but an error message, you know you’ll need to do so (and not simply use Update-Help, as the error message implies).

Fortunately, that’s simple. Using the following command will fix the issue:

add-pssnapin Microsoft.TeamFoundation.PowerShell

See the before and after:

Image of command output

A better way to review what’s in the Power Tools and to get the full list of cmdlets installed by the TFS Power Tools is to use:

Get-Command -module Microsoft.TeamFoundation.PowerShell

This method doesn’t depend on the developers including “TFS” in all the cmdlet names. But as it happens, they did follow the Cmdlet Development Guidelines, so both commands return the same results.

Something else I realized when working with the TFS PowerShell cmdlets: for administrative tasks, like those I’m most interested in, you’ll want to launch Windows PowerShell as an administrator. And as long-time Windows PowerShell users already know, if you want to enable the execution of remote scripts, make sure that you set your script execution policy to RemoteSigned. For more information, see How Can I Write and Run a Windows PowerShell Script?.

Of all the cmdlets provided with the TFS Power Tools, one of my personal favorites is Get-TfsServer, which lets me get the instance ID of my server, among other useful things.  My least favorite thing about the cmdlets in the Power Tools? There is little to no useful information for TFS cmdlets in Get-Help. Awkward! (There’s a community bug about this if you want to add your comments or vote on it.)

A different favorite: Get-TfsItemHistory. The following example not only demonstrates the power of the cmdlets, but also some of their limitations:

Get-TfsItemHistory -HistoryItem . -Recurse -Stopafter 5 |

    ForEach-Object { Get-TfsChangeset -ChangesetNumber $_.ChangesetId } |

    Select-Object -ExpandProperty Changes |

    Select-Object -ExpandProperty Item

This snippet gets the last five changesets in or under the current directory, and then it gets the list of files that were changed in those changesets. Sadly, this example also highlights one of the shortcomings of the Power Tools cmdlets: Get-TfsItemHistory cannot be directly piped to Get-TfsChangeset because the former outputs objects with ChangesetId properties, and the latter expects a ChangesetNumber parameter.

One of the nice things is that raw TFS API objects are being returned, and the snap-ins define custom Windows PowerShell formatting rules for these objects. In the previous example, the objects are instances of VersionControl.Client.Item, but the formatting approximates that seen with Get-ChildItem.

So the cmdlets included in the TFS Power Tools are a good place to start if you’re just getting started with TFS and Windows PowerShell, but they’re somewhat limited in scope. Most of them are simply piping results of the tf.exe commands that are already available in TFS. You’ll probably find yourself wanting to do more than just work with these.

 

ASE 4.0 Availability

Admin Script Editor

We have overcome all the obstacles that were holding us back from this release, so we are finally ready to share our latest build of ASE 4.0. We have tested on Windows 7 and 8, 32-bit and 64-bit, and are not anticipating that you will run into any problems. If you do, please make us aware of any issues you encounter using our online ticket system. We will maintain a list of known issues here in the support knowledge base.

We are considering this a release candidate, but it is available to customers only (not as a trial version). In order to download this release you will need to be current with maintenance on your license. We realize that it has been a very long time since we had an official release and we made it clear we would honor any expired maintenance extensions once we released. We’ve decided to do better than that and have…

View original post 106 more words

XI/PI: Understanding the RFC Adapter

SAP XI provides different ways for SAP systems to communicate via SAP XI. You have three options, namely IDoc adapters, RFC adapters and proxies. In one of the earlier posts, which covered your first XI scenario, we learned how to configure the IDoc receiver adapter. In the coming articles, I shall throw light on the different adapters. This article specifically deals with understanding the basics of the RFC adapter on both the sender and the receiver side.
 
 Image

SAP XI RFC Sender Adapter

The RFC adapter converts incoming RFC calls to XML and XML messages to outgoing RFC calls. We can have both synchronous (sRFC) and asynchronous (tRFC) communication with SAP systems. The former works with the Best Effort QoS (Quality of Service), while the latter works with Exactly Once (EO).

Unlike IDoc adapter, RFC Adapter is installed on the J2EE Adapter Engine and can be monitored via Adapter Monitoring and Communication Channel Monitoring in the Runtime Workbench.

Now let us understand the configuration needed to set up RFC communication.

RFC Sender Adapter

In this case, Sender SAP system requests XI Integration Engine to process RFC calls. This could either be synchronous or asynchronous.

On the source SAP system, go to transaction SM59 and create a new RFC connection of type ‘T’ (TCP/IP Connection). On the Technical Settings tab, select “Registered Server Program” radio button and specify an arbitrary Program ID. Note that the same program ID must be specified in the configuration of the sender adapter communication channel. Also note that this program ID is case-sensitive.

When using the RFC call in your ABAP program you should specify the RFC destination created above. For example,

CALL FUNCTION ‘<NAME_OF_THE_RFC_FUNCTION_MODULE>’
DESTINATION ‘<RFC_DESTINATION_NAME>’.

Also, in case you are setting up asynchronous interface, the RFC should be called in the background. For example,

CALL FUNCTION ‘<NAME_OF_THE_RFC_FUNCTION_MODULE>’
IN BACKGROUND TASK
DESTINATION ‘<RFC_DESTINATION_NAME>’.

SAP XI RFC Receiver Adapter

Now, create the relevant communication channel in the XI Integration Directory. Select the Adapter Type as RFC Sender (Please see the figure above). Specify the Application server and Gateway service of the sender SAP system. Specify the program ID. Specify exactly the same program ID that you provided while creating the RFC destination in the SAP system. Note that this program ID is case-sensitive. Provide Application server details and logon credentials in the RFC metadata repository parameter. Save and activate the channel. Note that the RFC definition that you import in the Integration Repository is used only at design time. At runtime, XI loads the metadata from the sender SAP system by using the credentials provided here.

RFC Receiver Adapter

In this case, XI sends the data in the RFC format (after conversion from XML format by the receiver adapter) to the target system where the RFC is executed.

Configuring the receiver adapter is even simpler. Create a communication channel in ID of type RFC Receiver (Please see the figure above on the left). Specify the RFC Client parameters like the Application server details, logon credentials etc and activate the channel.

Testing the Connectivity

Sometimes, especially when new SAP environments are setup, you may want to test their RFC connectivity to SAP XI before you create your actual RFC based interfaces/scenarios. There is a quick and easy way to accomplish this.

STFC_CONNECTION Input

Create a RFC destination of type ‘T’ in the SAP system as described previously. Then, go to XI Integration Repository and import the RFC Function Module STFC_CONNECTION from the SAP system. Activate your change list.

Configure sender and receiver communication channels in ID by specifying the relevant parameters of the SAP system as discussed previously. Remember that the Program ID in sender communication channel and RFC destination in SAP system must match (case-sensitive).

STFC_CONNECTION Output

Accordingly, complete the remaining ID configuration objects like Sender Agreement, Receiver Determination, Interface Determination and Receiver Agreement. No Interface mapping is necessary. Activate your change list.

Now, go back to the SAP system and execute the function module STFC_CONNECTION using transaction SE37. Specify the above RFC destination in ‘RFC target sys’ input box. You can specify any arbitrary input as REQUTEXT. If everything works fine, you should receive the same text as a response. You can also see two corresponding messages in SXMB_MONI transaction in SAP XI. This verifies the connection between SAP system and SAP XI.

How to : Use JQuery and JSON in MVC 5 for Autocomplete

Image

Imagine that you want to create an edit view for a Company entity which has two properties: Name (type string) and Boss (type Person). You want both properties to be editable. For Company.Name a simple text input is enough, but for Company.Boss you want to use the jQuery UI Autocomplete widget. This widget has to meet the following requirements:

  • suggestions should appear when user starts typing person’s last name or presses down arrow key;
  • identifier of person selected as boss should be sent to the server;
  • items in the list should provide additional information (first name and date of birth);
  • user has to select one of the suggested items (arbitrary text is not acceptable);
  • the boss property should be validated (with validation message and style set for appropriate input field).

The above requirements appear quite often in web applications. I’ve seen many over-complicated ways in which they were implemented. I want to show you how to do it quickly and cleanly… The assumption is that you have basic knowledge about jQuery UI Autocomplete and ASP.NET MVC. In this post I will show only the code which is related to the autocomplete functionality, but you can download the full demo project here. It’s an ASP.NET MVC 5/Entity Framework 6/jQuery UI 1.10.4 project created in Visual Studio 2013 Express for Web and tested in Chrome 34, FF 28 and IE 11 (in 11 and 8 mode).

So here are our domain classes:

public class Company
{
    public int Id { get; set; } 

    [Required]
    public string Name { get; set; }

    [Required]
    public Person Boss { get; set; }
}
public class Person
{
    public int Id { get; set; }

    [Required]
    [DisplayName("First Name")]
    public string FirstName { get; set; }
    
    [Required]
    [DisplayName("Last Name")]
    public string LastName { get; set; }

    [Required]
    [DisplayName("Date of Birth")]
    public DateTime DateOfBirth { get; set; }

    public override string ToString()
    {
        return string.Format("{0}, {1} ({2})", LastName, FirstName, DateOfBirth.ToShortDateString());
    }
}

Nothing fancy there, just a few properties with standard attributes for validation and good-looking display. The Person class has a ToString override – the text from this method will be used in the autocomplete suggestion list.

Edit view for Company is based on this view model:

public class CompanyEditViewModel
{    
    public int Id { get; set; }

    [Required]
    public string Name { get; set; }

    [Required]
    public int BossId { get; set; }

    [Required(ErrorMessage="Please select the boss")]
    [DisplayName("Boss")]
    public string BossLastName { get; set; }
}

Notice that there are two properties for Boss related data.

Below is the part of edit view that is responsible for displaying input field with jQuery UI Autocomplete widget for Boss property:

<div class="form-group">
    @Html.LabelFor(model => model.BossLastName, new { @class = "control-label col-md-2" })
    <div class="col-md-10">
        @Html.TextBoxFor(Model => Model.BossLastName, new { @class = "autocomplete-with-hidden", data_url = Url.Action("GetListForAutocomplete", "Person") })
        @Html.HiddenFor(Model => Model.BossId)
        @Html.ValidationMessageFor(model => model.BossLastName)
    </div>
</div>

form-group and col-md-10 classes belong to Bootstrap framework which is used in MVC 5 web project template – don’t bother with them. BossLastName property is used for label, visible input field and validation message. There’s a hidden input field which stores the identifier of selected boss (Person entity). @Html.TextBoxFor helper which is responsible for rendering visible input field defines a class and a data attribute. autocomplete-with-hidden class marks inputs that should obtain the widget. data-url attribute value is used to inform about the address of action method that provides data for autocomplete. Using Url.Action is better than hardcoding such address in JavaScript file because helper takes into account routing rules which might change.

This is HTML markup that is produced by above Razor code:

<div class="form-group">
    <label class="control-label col-md-2" for="BossLastName">Boss</label>
    <div class="col-md-10">
        <span class="ui-helper-hidden-accessible" role="status" aria-live="polite"></span>
        <input name="BossLastName" class="autocomplete-with-hidden ui-autocomplete-input" id="BossLastName" type="text" value="Kowalski" 
         data-val-required="Please select the boss" data-val="true" data-url="/Person/GetListForAutocomplete" autocomplete="off">
        <input name="BossId" id="BossId" type="hidden" value="4" data-val-required="The BossId field is required." data-val-number="The field BossId must be a number." data-val="true">
        <span class="field-validation-valid" data-valmsg-replace="true" data-valmsg-for="BossLastName"></span>
    </div>
</div>

This is JavaScript code responsible for installing jQuery UI Autocomplete widget:

$(function () {
    $('.autocomplete-with-hidden').autocomplete({
        minLength: 0,
        source: function (request, response) {
            var url = $(this.element).data('url');
   
            $.getJSON(url, { term: request.term }, function (data) {
                response(data);
            })
        },
        select: function (event, ui) {
            $(event.target).next('input[type=hidden]').val(ui.item.id);
        },
        change: function(event, ui) {
            if (!ui.item) {
                $(event.target).val('').next('input[type=hidden]').val('');
            }
        }
    });
})

Widget’s source option is set to a function. This function pulls data from the server by $.getJSON call. URL is extracted from data-url attribute. If you want to control caching or provide error handling you may want to switch to $.ajax function. The purpose of change event handler is to ensure that values for BossId and BossLastName are set only if user selected an item from suggestions list.

This is the action method that provides data for autocomplete:

public JsonResult GetListForAutocomplete(string term)
{               
    Person[] matching = string.IsNullOrWhiteSpace(term) ?
        db.Persons.ToArray() :
        db.Persons.Where(p => p.LastName.ToUpper().StartsWith(term.ToUpper())).ToArray();

    return Json(matching.Select(m => new { id = m.Id, value = m.LastName, label = m.ToString() }), JsonRequestBehavior.AllowGet);
}

value and label are standard properties expected by the widget. label determines the text which is shown in the suggestion list, while value designates what data is presented in the input field on which the widget is installed. id is a custom property for indicating which Person entity was selected. It is used in the select event handler (notice the reference to ui.item.id): the selected ui.item.id is set as the value of the hidden input field – this way it will be sent in the HTTP request when the user decides to save the Company data.

Finally this is the controller method responsible for saving Company data:

public ActionResult Edit([Bind(Include="Id,Name,BossId,BossLastName")] CompanyEditViewModel companyEdit)
{
    if (ModelState.IsValid)
    {
        Company company = db.Companies.Find(companyEdit.Id);
        if (company == null)
        {
            return HttpNotFound();
        }

        company.Name = companyEdit.Name;

        Person boss = db.Persons.Find(companyEdit.BossId);
        company.Boss = boss;
        
        db.Entry(company).State = EntityState.Modified;
        db.SaveChanges();
        return RedirectToAction("Index");
    }
    return View(companyEdit);
}

Pretty standard stuff. If you’ve ever used Entity Framework, the above method should be clear to you. If it’s not, don’t worry. For the purpose of this post, the important thing to notice is that we can use companyEdit.BossId because it was properly filled by the model binder thanks to our hidden input field.

That’s it, all requirements are met! Easy, huh?

You may be wondering why I want to use jQuery UI widget in Visual Studio 2013 project which by default uses Twitter Bootstrap. It’s true that Bootstrap has some widgets and plugins but after a bit of experimentation I’ve found that for some more complicated scenarios jQ UI does a better job. The set of controls is simply more mature…

How To : Use SharePoint Dashboards & MSRS Reports for your Agile Development Life Cycle

The Problem We Solve

Agile BI is not a term many would associate with MSRS Reports and SharePoint Dashboards. While many organizations first turn to the Microsoft BI stack because of its familiarity, stitching together Microsoft’s patchwork of SharePoint, SQL Server, SSAS, MSRS, and Office creates administrative headaches and requires considerable time spent integrating and writing custom code.

This Showcase outlines the ease of accomplishing three of the most fundamental BI tasks with LogiXML technology as compared to MSRS and SharePoint:

  • Building a dashboard with multiple data sources
  • Creating interactive reports that reduce the load on IT by providing users self-service
  • Integrating disparate data sources

Read below to learn how an agile BI methodology can make your life much easier when it comes to dashboards and reports. Don’t feel like reading?

Building a Dashboard with LogiXML vs. MSRS + SharePoint

Microsoft’s only solution for dashboards is to either write your own code from scratch, manipulate SharePoint to serve a purpose for which it wasn’t initially designed, or look to third party apps. Below are some of the limitations to Microsoft’s approach to dashboards:

  • Limited Pre-Built Elements: Microsoft components come with only limited libraries of pre-built elements. In addition to actual development work, you will need to come up with an idea of how everything will work together. This necessitates becoming familiar with best practices in dashboards and reporting.
  • Sophisticated Development Expertise Required: While Microsoft components provide basic capabilities, anything more sophisticated is development resource-intensive and requires you to take on design, execution, and delivery. Any complex report visualizations and logic, such as interactive filters, must be written in code by the developer.
  • Limited Charts and Visualizations: Microsoft has a smaller sub-set of charts and visualization tools. If you want access to the complete library of .NET-capable charts, you’ll still need to OEM another charting solution at additional expense.
  • Lack of Integrated Workflow: Microsoft does not include workflow feature sets out of the box in their BI offering.

LogiXML technology is centered on Logi Studio: an elemental, agile BI design environment which lets you simply choose from hundreds of powerful and configurable pre-built elements. Logi’s pre-built elements equip developers with tools to speed development, as well as the processes and logic required to build and manage BI projects. Below is a screen shot of the Logi Studio while building new dashboards.

agile-bi.jpg


Logi developers can easily create static or user-customizable dashboards using the Dashboard element. A dashboard is a collection of panels containing Logi reports, which in turn contain tables, charts, images, and so on. At runtime, the user can customize the dashboard by rearranging these panels on the browser page, by showing or hiding them, and even by changing their contents using adjustable reporting criteria. The data displayed within the panels can be configured, as in any Logi report, to link to other reports, providing drill-down functionality.

 

logi2.jpg

The dashboard displayed above has tabs and user customization enabled. The Dashboard element provides customization features, such as drag-and-drop panel positioning, support for built-in parameters the user can access to adjust the panel’s data contents, and a panel selection list that determines which panels will be displayed. AJAX techniques are utilized for web server interactions, allowing selective updates of portions of the dashboard. Dashboard customizations can be saved on an individual-user basis to create a highly personalized view of the data.

The Dashboard Wizard

The ‘Create a Dashboard’ wizard assists developers in creating dashboards by populating the report definition with the necessary dashboard-related elements. You can easily point to any data source by selecting from a variety of DataLayer types, including SQL, StoredProcedures, Web Services, Files, and more. A simple-to-use drag-and-drop SQL query builder is also integrated, offering a guided approach to constructing queries when connecting to your database.

logi3.jpg

Using the Dashboard Element

The Dashboard element is used to create the top level structure for all of your interactive panels within the final output. Under your dashboards, you can optionally add any number of Dashboard Panels, Panel Parameters for dynamic filtering, and even automatic refresh features with AJAX-based refresh timers.

logi4.jpg

Changing Appearance Using Themes and Style Sheets

The appearance of a dashboard can be changed easily by assigning a theme to your report. In addition, or as an alternative, you can change dashboard appearance using style sheets. The Dashboard element has its own Cascading Style Sheet (CSS) file containing predefined classes that affect the display colors, font sizes, button labels, and spacing seen when the dashboard is displayed. You can override these classes by adding classes with the same name to your own style sheet file.


Ad Hoc Reporting Creation with LogiXML: Analysis Grid

The Analysis Grid is a managed reporting feature giving end users virtual ad hoc capability. It is an easy to use tool that allows business users to analyze and manipulate data and outputs in multiple and powerful ways.

logi5.jpg


Create an Analysis Grid by using the “Create Analysis Grid” wizard, or by simply adding the AnalysisGrid element into your definition file. Like the dashboard, data for the Analysis Grid can be accessed from any of the data options, including SQL databases, web sources, or files. You also have the option to launch the interactive query builder wizard for easy, drag-drop, SQL query creation.

The Analysis Grid is composed of three main parts: the data grid itself, i.e. a table of data to be analyzed; various action buttons at the top, allowing the user to perform actions such as creating new columns with custom calculations, sorting columns, adding charts, and performing aggregations; and the ability to export the grid to Excel, CSV, or PDF format.

The Analysis Grid makes it easy to perform what-if analyses through features like filtering. The Grid also makes data-presentation impactful through visualization features including data driven color formatting, inline gauges, and custom formula creation.

Ad Hoc Reporting Creation with Microsoft

While simple ad hoc capabilities, such as enabling the selection of parameters like date ranges, can be accomplished quickly and easily with Microsoft, more sophisticated ad hoc analysis is challenging due to the following shortcomings.

Platform Integration Problems

Microsoft’s BI strategy is not unified and is strongly tied to SQL Server. To obtain analysis capabilities, you must build cubes in Analysis Services, which is a separate product with its own security architecture. Next, you will need to build reports that talk to SQL Server, also using separate products.

Dashboards require a SharePoint portal which is, again, a separate product with separate requirements and licensing. If you don’t use this, you must completely code your dashboards from scratch. Unfortunately, Microsoft Reporting Services doesn’t play well with Analysis Services or SharePoint since these were built on different technologies.

SharePoint itself offers an out of the box portal and dashboard solution but unfortunately with a number of significant shortcomings. SharePoint was designed as a document management and collaboration tool as opposed to an interactive BI dashboard solution. Therefore, in order to have a dashboard solution optimized for BI, reporting, and interactivity you are faced with two options:

  • Build it yourself using .NET and a combination of third party components
  • Buy a separate third party product

Many IT professionals find these to be rather unappealing options, since they require evaluating a new product or components, and/or a lot of work to build and make sure it integrates with the rest of the Microsoft stack.

Additionally, while SQL Server and other products support different types of security architectures, Analysis Services supports only integrated Windows security for accessing cubes, which creates integration challenges.

Moreover, for client/ad hoc tools, you need Report Writer, a desktop product, or Excel – another desktop application. In addition to requiring separate licenses, these products don’t even talk to one another in the same ways, as they were built by different companies and subsequently acquired by Microsoft.

Each product requires a separate and often disconnected development environment with different design and administration features. Therefore to manage Microsoft BI, you must have all of these development environments available and know how to use them all.

Integration of Various Data Sources: LogiXML vs. Microsoft

LogiXML is data neutral, allowing you to easily connect to all of your organization’s data spread across multiple applications and databases. You can connect with any data source or data model and even combine data sources such as current data accessed through a web service with past data in spreadsheets.

Integration of Various Data Sources with Microsoft

Working with Microsoft components for BI means you will be faced with the challenge of limited support for non-Microsoft databases and outside data sources. The Microsoft BI stack is centered on SQL Server databases, and data access is therefore optimized for SQL Server. Unfortunately, if you need outside content, it can be very difficult to integrate.

Finally, Microsoft BI tools are designed with the total Microsoft experience in mind and are therefore optimized for Internet Explorer. While other browsers and devices might be usable, the experience isn’t optimized and may lack features or render differently.

 

Free & Licensed Windows 8, Azure, Office 365, SharePoint On-Premise and Online Tools, Web Parts, Apps available.
For more detail visit https://sharepointsamurai.wordpress.com or contact me at tomas.floyd@outlook.com

Building Distributed Node.js Apps with Azure Service Bus Queue

Azure Service Bus Queues provide queue-based, brokered messaging communication between apps, which lets developers build distributed apps on the cloud and for hybrid cloud environments. Azure Service Bus Queues provide a First In, First Out (FIFO) messaging infrastructure. Service Bus Queues can be leveraged for communication between apps, whether the apps are hosted in the cloud or on on-premises servers.

windows-azure-c3634

Service Bus Queues are primarily used for distributing application logic across multiple apps. For example, let’s say we are building an order processing system with a frontend web app that receives orders from customers, and we want to move the order processing logic into a backend service so that it can run asynchronously. In this scenario, the frontend app can place an order message into a Service Bus Queue, and the backend order processing app can receive the message from the queue and process the request efficiently. This approach also enables better scalability, because we can scale the frontend app and the backend app independently.

For this sample scenario, we can deploy the frontend app to an Azure Web Role and the backend app to an Azure Worker Role and scale the Web Role and Worker Role apps independently. We can also use Service Bus Queues for hybrid cloud scenarios, communicating between apps hosted in the cloud and on on-premises servers.

Using Azure Service Bus Queues in Node.js Apps

In order to work with Azure Service Bus, we need to create a Service Bus namespace from the Azure portal.

image

We can get the connection information for the Service Bus namespace from the Connection Information tab at the bottom of the portal, after selecting the namespace.

image

Creating the Service Bus Client

First, we need to install the npm module azure to work with Azure services from a Node.js app.

npm install azure

The code block below creates a Service Bus client object using the Node.js module azure.

var azure = require('azure');
var config=require('./config');

var serviceBusClient = azure.createServiceBusService(config.sbConnection);

We create the Service Bus client object by using the createServiceBusService method of the azure module. In the above code block, we pass the Service Bus connection info from a config file. The azure module can also read the environment variables AZURE_SERVICEBUS_NAMESPACE and AZURE_SERVICEBUS_ACCESS_KEY for the information required to connect to Azure Service Bus, in which case we can call createServiceBusService without specifying the connection information.
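
For completeness, the config module referenced above can be as simple as the following sketch; the connection string is a placeholder, and the real value should be copied from the portal’s Connection Information dialog.

// config.js - exposes the Service Bus connection string used by the samples above.
// The value below is a placeholder, not a real connection string.
module.exports = {
    sbConnection: 'Endpoint=sb://<your-namespace>.servicebus.windows.net/;' +
        'SharedAccessKeyName=<key-name>;SharedAccessKey=<key>'
};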

Creating a Service Bus Queue

The createQueueIfNotExists method of the Service Bus client object returns the queue if it already exists, or creates a new queue if it does not.

var azure = require('azure');
var config = require('./config');
var queue = 'ordersqueue';

var serviceBusClient = azure.createServiceBusService(config.sbConnection);

function createQueue() {
    serviceBusClient.createQueueIfNotExists(queue, function (error) {
        if (error) {
            console.log(error);
        } else {
            console.log('Queue ' + queue + ' exists');
        }
    });
}

Sending Messages to the Service Bus Queue

The function sendMessage below sends a given message to the Service Bus Queue.

function sendMessage(message) {
    serviceBusClient.sendQueueMessage(queue, message, function (error) {
        if (error) {
            console.log(error);
        } else {
            console.log('Message sent to queue');
        }
    });
}

The following code creates the queue and sends a message to it by calling the createQueue and sendMessage methods that we created in the previous steps.

createQueue();
var orderMessage={
 "OrderId":101,
 "OrderDate": new Date().toDateString()
};
sendMessage(JSON.stringify(orderMessage));

We create a JSON object with the properties OrderId and OrderDate and send it to the Service Bus Queue. We can send such messages to the queue to communicate with other apps; the receiver apps read the messages from the queue and perform their application logic based on the message contents.

Receiving Messages from the Service Bus Queue

Typically, we receive the Service Bus Queue messages in a backend app. The code block below receives messages from the Service Bus Queue and extracts information from the JSON data.

var azure = require('azure');
var config=require('./config');
var queue = 'ordersqueue';

var serviceBusClient = azure.createServiceBusService(config.sbConnection);
function receiveMessages() {
    serviceBusClient.receiveQueueMessage(queue, function (error, message) {
        if (error) {
            console.log(error);
        } else {
            var order = JSON.parse(message.body);
            console.log('Processing Order# ' + order.OrderId
                + ' placed on ' + order.OrderDate);
        }
    });
}

By default, messages are deleted from the Service Bus Queue after they are read. This behaviour can be changed by specifying the optional parameter isPeekLock as true, as shown in the code block below.

function receiveMessages() {
    serviceBusClient.receiveQueueMessage(queue, { isPeekLock: true },
      function (error, message) {
        if (error) {
            console.log(error);
        } else {
            var order = JSON.parse(message.body);
            console.log('Processing Order# ' + order.OrderId
                + ' placed on ' + order.OrderDate);
            // Explicitly delete the locked message once it has been processed.
            serviceBusClient.deleteMessage(message, function (deleteError) {
                if (!deleteError) {
                    console.log('Message deleted from Queue');
                }
            });
        }
    });
}

Here the message is not automatically deleted from the queue, and we can explicitly delete it after reading it and successfully completing the application logic.

Hadoop : The Basics

Problems with conventional database systems

While a large number of CPU cores can be placed in a single server, it’s not feasible to deliver input data (especially big data) to those cores fast enough for processing. Using hard drives that can individually sustain read speeds of approximately 100 MB/s and 4 independent I/O channels, a 4 TB data set still takes close to three hours to read (4 TB ÷ 400 MB/s ≈ 10,000 seconds). Thus a distributed system with many servers working on the problem in parallel is necessary in the big data domain.

Solution: Apache Hadoop Framework

The Apache Hadoop framework supports distributed processing of large data sets using a cluster of commodity hardware that can scale up to thousands of machines. Each node in the cluster offers local computation and storage and is assumed to be prone to failures. The framework is designed to detect and handle failures at the application layer, and therefore transparently delivers a highly available service without the need for expensive hardware or complex programming. Performing distributed computing on large volumes of data has been done before; what sets Hadoop apart is its simplified programming model for client applications and its seamless handling of the distribution of data and work across the cluster.

 Architecture of Hadoop

Let’s begin by looking at the basic architecture of Hadoop. A typical Hadoop multi-machine cluster consists of one or two “master” nodes (running the NameNode and JobTracker processes) and many “slave” or “worker” nodes (running the TaskTracker and DataNode processes) spread across many racks. The two main components of the Hadoop framework are described below: a distributed file system to store large amounts of data, and a computational paradigm called MapReduce.

Hadoop multi node cluster

Hadoop Distributed File System (HDFS)

Since the complete data set is unlikely to fit on a single computer’s hard drive, a distributed file system which breaks up input data and stores it on different machines in the cluster is needed. Hadoop Distributed File System (HDFS) is a distributed and scalable file system which is included in the Hadoop framework. It is designed to store a very large amount of information (terabytes or petabytes) reliably and is optimized for long sequential streaming reads rather than random access into the files. HDFS also provides data location awareness (such as the name of the rack or the network switch where a node is). Reliability is achieved by replicating the data across multiple nodes in the cluster rather than traditional means such as RAID storage. The default replication value is 3, so data is stored on three nodes – two on the same rack, and one on a different rack. Thus a single machine failure does not result in any data being unavailable.

Individual machines in the cluster that store blocks of individual files are referred to as DataNodes. DataNodes communicate with each other to rebalance data and re-replicate it in response to system failures. The Hadoop framework schedules processes on the DataNodes that operate on the local subset of data (moving computation to the data instead of the other way around), so data is read from local disk into the CPU without network transfers, achieving high performance.

The metadata for the file system is stored by a single machine called the NameNode. The large block size and low amount of metadata per file allow the NameNode to store all of this information in main memory, allowing fast access to the metadata from clients. To open a file, a client contacts the NameNode, retrieves a list of DataNodes that contain the blocks that comprise the file, and then reads the file data in bulk directly from the DataNode servers in parallel, without directly involving the NameNode. A Secondary NameNode regularly connects to the primary NameNode and builds snapshots of the directory information, which aids recovery; note that it is a checkpointing helper rather than an automatic failover node, so the NameNode remains a single point of failure.

The Windows Azure HDInsight Service supports HDFS for storing data, but also uses an alternative approach called Azure Storage Vault (ASV) which provides a seamless HDFS interface to Azure Blob Storage, a general purpose Windows Azure storage solution that does not co-locate compute with storage, but offers other benefits. In our next blog, we will explore the HDInsight service in more detail.

MapReduce Programming Model

Hadoop programs must be written to conform to the “MapReduce” programming model, which is designed for processing large volumes of data in parallel by dividing the work into a set of independent tasks. The records are initially processed in isolation by tasks called Mappers, and their output is then brought together by a second set of tasks called Reducers, as shown below.

MapReduce Process
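
As a purely conceptual illustration of this model (plain JavaScript, not actual Hadoop API code), a word count can be expressed as a mapper that emits (word, 1) pairs for each line it sees in isolation, and a reducer that receives one word together with all of its emitted counts and sums them:

// Conceptual word count - illustrates the shape of the MapReduce model only.
// The mapper processes one record (a line of text) at a time, in isolation.
function map(line) {
    return line.trim().split(/\s+/).map(function (word) {
        return [word, 1];               // emit a (key, value) pair per word
    });
}

// The reducer receives one key and the list of all values emitted for it.
function reduce(word, counts) {
    var total = counts.reduce(function (sum, n) { return sum + n; }, 0);
    return [word, total];
}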

MapReduce input comes from files loaded into the processing cluster in HDFS. Client applications submit MapReduce jobs to the JobTracker node, which divides and pushes work out to available TaskTracker nodes in the cluster while trying to keep the work as close to the data as possible. Hadoop internally manages the cluster topology: the rack-aware HDFS file system enables the JobTracker to know which nodes contain the data and which other machines are nearby. If the work cannot be hosted on one of the nodes where the data resides, priority is given to nodes in the same rack. This reduces the data moved across the network.

When the mapping phase completes, the intermediate (key, value) pairs are exchanged between machines to send all values with the same key to a single reducer. The reduce tasks are spread across the same nodes in the cluster as the mappers. This data transfer is handled by the Hadoop infrastructure (guided by the different keys and their associated values) without the individual map or reduce tasks communicating or being aware of one another’s existence. A heartbeat is sent from each TaskTracker to the JobTracker frequently to update its status. If any node or TaskTracker in the cluster fails or times out, that part of the job is rescheduled by the underlying Hadoop layer without any explicit action by the workers. The TaskTracker spawns each task in a separate Java Virtual Machine process, so the TaskTracker itself does not fail if a running job crashes its JVM. User-level tasks do not communicate explicitly with one another, and workers continue to operate, leaving the challenging aspects of partially restarting the program to the underlying Hadoop layer. Thus the Hadoop distributed system is very reliable and fault tolerant.

Hadoop also has a very flat scalability curve: a Hadoop program requires no recoding to work on a much larger data set when run on a larger cluster of machines. Hadoop is designed for work that is batch-oriented rather than real-time (due to the overhead involved in starting MapReduce programs), is very data-intensive, and lends itself to processing pieces of data in parallel. This includes use cases such as log or clickstream analysis, sophisticated data mining, web crawling and indexing, archiving data for compliance, and so on.

Select Master Page App for SharePoint 2013 now available!! (Get the SharePoint 2010 Select Master Page Web Part Free)

In Publishing sites, there is a layouts (application) page through which we can set a custom or another master page as the default master page. Unfortunately, this is missing in Team Sites.

This is what this solution is all about. It is targeted mainly at Team sites, since Publishing sites already have this provision.

It adds a custom ribbon button to the Share and Track group of the Files tab of the Master Page Gallery. This is a SharePoint 2013 hosted app. Refer to the documentation for the technical details.

 

The following screen shots depict the functionality.







 

The custom ribbon button is not enabled if a folder is selected or if more than one item is selected. If a single file is selected, the button is enabled irrespective of the file extension. Upon selecting a file and clicking the ribbon button, a pop-up dialog appears with the text “Working on it..”.

Then a confirmation alert appears, asking “Are you sure?”. Once the user confirms, a progress message is displayed in the pop-up dialog. If the selected file does not have the .master extension, the user sees the alert “This will work only for master pages.”.

If a master page that is already set as the default is selected and the ribbon button is clicked, the user sees the alert “The file at <url> is the current default master page. So please select another master page.”. If another master page is selected, the user sees the alert “Master Page Changed Successfully. Please press CTRL + F5 for changes to reflect.”. Once the user clicks OK on the alert, the pop-up dialog closes, and pressing CTRL + F5 shows the updated master page. Any time the user clicks OK or Cancel on the alert screens, the parent page is refreshed and the current selection is cleared.

The app requires Full Control on the host web, since this is required for setting the master page, and that is precisely the reason why I couldn’t publish it in the Office Store.

The app has been tested on IE9 and the latest versions of Chrome and Firefox. It may not work on IE8 or on older versions of other browsers that don’t support HTML5. The app currently supports only English, and it sets the default master page only on the host web (where the app is installed), not on sub webs.

The app uses jQuery AJAX and the REST APIs of SharePoint 2013.
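
As a rough sketch of the kind of call involved (this is not the app’s actual source; the web URL handling and the cross-domain details of a SharePoint-hosted app are simplified here), the default master page can be updated through the SharePoint 2013 REST API roughly like this:

// Sketch only: update MasterUrl and CustomMasterUrl on the host web via REST.
// Assumes the page hosts a valid request digest in the #__REQUESTDIGEST field.
function setMasterPage(webUrl, masterPageUrl) {
    return $.ajax({
        url: webUrl + '/_api/web',
        type: 'POST',
        headers: {
            'Accept': 'application/json;odata=verbose',
            'Content-Type': 'application/json;odata=verbose',
            'X-RequestDigest': $('#__REQUESTDIGEST').val(),
            'X-HTTP-Method': 'MERGE'
        },
        data: JSON.stringify({
            '__metadata': { 'type': 'SP.Web' },
            'MasterUrl': masterPageUrl,
            'CustomMasterUrl': masterPageUrl
        })
    });
}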

To use the app, upload the .app file to the App Catalog, add and install it to the host team site, trust it, and navigate to the Master Page Gallery; you are good to go.

 

With this App, you will also receive the FREE SharePoint 2010 Select Master Page Web Part!!

It adds a custom ribbon button to the Share and Track group of the Documents tab of the Master Page Gallery.

It is a sandboxed solution and is implemented to set the master page of only the root site of a site collection, though it can be customized or extended for sub sites. It requires the user to be at least a Site Owner, to avoid unnecessary manipulation of the master page by contributors or other users. Refer to the documentation for the technical details.

The following screen shots depict the functionality.





 

 

Using Word Automation Services and OpenXML to Change Document Formats

 
There are some tasks that are difficult when using the Open XML SDK 2.0 for Microsoft Office, such as repagination, conversion to other document formats such as PDF, or updating of the table of contents, fields, and other dynamic content in documents. Word Automation Services is a new feature of SharePoint 2010 that can help in these scenarios. It is a shared service that provides unattended, server-side conversion of documents into other formats, along with some other important pieces of functionality. It was designed from the outset to work on servers and can process many documents in a reliable and predictable manner.

 

Using Word Automation Services, you can convert from Open XML WordprocessingML to other document formats. For example, you may want to convert many documents to the PDF format and spool them to a printer or send them by e-mail to your customers. Or, you can convert from other document formats (such as HTML or Word 97-2003 binary documents) to Open XML word-processing documents.

In addition to the document conversion facilities, there are other important areas of functionality that Word Automation Services provides, such as updating field codes in documents and converting altChunk content to paragraphs with the normal style applied. These tasks can be difficult to perform using the Open XML SDK 2.0; however, it is easy to use Word Automation Services to do them. In the past, you automated the Word client application to perform tasks like these. However, this approach is problematic. The Word client is an application that is best suited for authoring documents interactively and was not designed for high-volume processing on a server. When performing these tasks, Word may display a dialog box reporting an error, and if the Word client is being automated on a server, there is no user to respond to the dialog box, so the process can come to an untimely stop. The issues associated with automation of Word are documented in the Knowledge Base article Considerations for Server-side Automation of Office.

This scenario describes how you can use Word Automation Services to automate processing documents on a server.

  • An expert creates some Word template documents that follow specific conventions. She might use content controls to give structure to the template documents. This provides a good user experience and a reliable programmatic approach for determining the locations in the template document where data should be replaced in the document generation process. These template documents are typically stored in a SharePoint document library.

  • A program runs on the server to merge the template documents together with data, generating a set of Open XML WordprocessingML (DOCX) documents. This program is best written by using the Open XML SDK 2.0 for Microsoft Office, which is designed specifically for generating documents on a server. These documents are placed in a SharePoint document library.

  • After generating the set of documents, they might be automatically printed. Or, they might be sent by e-mail to a set of users, either as WordprocessingML documents, or perhaps as PDF, XPS, or MHTML documents after converting them from WordprocessingML to the desired format.

  • As part of the conversion, you can instruct Word Automation Services to update fields, such as the table of contents.

Using the Open XML SDK 2.0 for Microsoft Office together with Word Automation Services enables you to create rich, end-to-end solutions that perform well and do not require automation of the Word client application.

One of the key advantages of Word Automation Services is that it can scale out to your needs. Unlike the Word client application, you can configure it to use multiple processors. Further, you can configure it to load balance across multiple servers if your needs require that.

Another key advantage is that Word Automation Services has perfect fidelity with the Word client applications. Document layout, including pagination, is identical regardless of whether the document is processed on the server or client.

Supported Source Document Formats

The supported source document formats are as follows.

  1. Open XML File Format documents (.docx, .docm, .dotx, .dotm)

  2. Word 97-2003 documents (.doc, .dot)

  3. Rich Text Format files (.rtf)

  4. Single File Web Pages (.mht, .mhtml)

  5. Word 2003 XML Documents (.xml)

  6. Word XML Document (.xml)

Supported Destination Document Formats

The supported destination document formats include all of the supported source document formats, plus the following.

  1. Portable Document Format (.pdf)

  2. Open XML Paper Specification (.xps)

Other Capabilities of Word Automation Services

In addition to the ability to load and save documents in various formats, Word Automation Services includes other capabilities.

You can cause Word Automation Services to update the table of contents, the table of authorities, and index fields. This is important when generating documents. After generating a document, if the document has a table of contents, it is an especially difficult task to determine document pagination so that the table of contents is updated correctly. Word Automation Services handles this for you easily.

Open XML word-processing documents can contain various field types, which enables you to add dynamic content into a document. You can use Word Automation Services to cause all fields to be recalculated. For example, you can include a field type that inserts the current date into a document. When fields are updated, the associated content is also updated, so that the document displays the current date at the location of the field.

One of the powerful ways that you can use content controls is to bind them to XML elements in a custom XML part. See the article, Building Document Generation Systems from Templates with Word 2010 and Word 2007 for an explanation of bound content controls, and links to several resources to help you get started. You can replace the contents of bound content controls by replacing the XML in the custom XML part. You do not have to alter the main document part. The main document part contains cached values for all bound content controls, and if you replace the XML in the custom XML part, the cached values in the main document part are not updated. This is not a problem if you expect users to view these generated documents only by using the Word client. However, if you want to process the WordprocessingML markup more, you must update the cached values in the main document part. Word Automation Services can do this.

Alternate format content (as represented by the altChunk element) is a great way to import HTML content into a WordprocessingML document. The article, Building Document Generation Systems from Templates with Word 2010 and Word 2007 discusses alternate format content, its uses, and provides links to help you get started. However, until you open and save a document that contains altChunk elements, the document contains HTML, and not ordinary WordprocessingML markup such as paragraphs, runs, and text elements. You can use Word Automation Services to import the HTML (or other forms of alternative content) and convert them to WordprocessingML markup that contains familiar WordprocessingML paragraphs that have styles.

You can also convert to and from formats that were used by previous versions of Word. If you are building an enterprise-class application that is used by thousands of users, you may have some users who are using Word 2007 or Word 2003 to edit Open XML documents. You can convert Open XML documents so that they contain only the markup and features that are used by either Word 2007 or Word 2003.

Limitations of Word Automation Services

Word Automation Services does not include capabilities for printing documents. However, it is straightforward to convert WordprocessingML documents to PDF or XPS and spool them to a printer.

A question that sometimes arises is whether you can use Word Automation Services without purchasing and installing SharePoint Server 2010. Word Automation Services takes advantage of facilities of SharePoint 2010 and is a feature of it; you must purchase and install SharePoint Server 2010 to use it. Word Automation Services is included in both the Standard and Enterprise editions.

By default, Word Automation Services is a service that installs and runs with a stand-alone SharePoint Server 2010 installation. If you are using SharePoint 2010 in a server farm, you must explicitly enable Word Automation Services.

To use it, you use its programming interface to start a conversion job. For each conversion job, you specify which files, folders, or document libraries you want the conversion job to process. Based on your configuration, when you start a conversion job, it begins a specified number of conversion processes on each server. You can specify the frequency with which it starts conversion jobs, and you can specify the number of conversions to start for each conversion process. In addition, you can specify the maximum percentage of memory that Word Automation Services can use.

The configuration settings enable you to configure Word Automation Services so that it does not consume too many resources on SharePoint servers that are part of your important infrastructure. The settings that you must use are dictated by how you want to use the SharePoint Server. If it is only used for document conversions, you want to configure the settings so that the conversion service can consume most of your processor time. If you are using the conversion service for low priority background conversions, you want to configure accordingly.

 
 

In addition to writing code that starts conversion processes, you can also write code to monitor the progress of conversions. This lets you inform users or post alert results when very large conversion jobs are completed.

Word Automation Services lets you configure four additional aspects of conversions.

  1. You can limit the number of file formats that it supports.

  2. You can specify the number of documents converted by a conversion process before it is restarted. This is valuable because invalid documents can cause Word Automation Services to consume too much memory. All memory is reclaimed when the process is restarted.

  3. You can specify the number of times that Word Automation Services attempts to convert a document. By default, this is set to two so that if Word Automation Services fails in its attempts to convert a document, it attempts to convert it only one more time (in that conversion job).

  4. You can specify the length of elapsed time before conversion processes are monitored. This is valuable because Word Automation Services monitors conversions to make sure that conversions have not stalled.

Unless you have installed a server farm, by default, Word Automation Services is installed and started in SharePoint Server 2010. However, as a developer, you want to alter its configuration so that you have a better development experience. By default, it starts conversion processes at 15 minute intervals. If you are testing code that uses it, you can benefit from setting the interval to one minute. In addition, there are scenarios where you may want Word Automation Services to use as much resources as possible. Those scenarios may also benefit from setting the interval to one minute.

To adjust the conversion process interval to one minute

  1. Start SharePoint 2010 Central Administration.

  2. On the home page of SharePoint 2010 Central Administration, click Manage Service Applications.

  3. In the Service Applications administration page, service applications are sorted alphabetically. Scroll to the bottom of the page, and then click Word Automation Services . If you are installing a server farm and have installed Word Automation Services manually, whatever you entered for the name of the service is what you see on this page.

  4. In the Word Automation Services administration page, configure the conversion throughput field to the desired frequency for starting conversion jobs.

  5. As a developer, you may also want to set the number of conversion processes, and to adjust the number of conversions per worker process. If you adjust the frequency with which conversion processes start without adjusting the other two values, and you attempt to convert many documents, you make the conversion process much less efficient. The best value for these numbers should take into consideration the power of your computer that is running SharePoint Server 2010.

  6. Scroll to the bottom of the page and then click OK.

Because Word Automation Services is a service of SharePoint Server 2010, you can only use it in an application that runs directly on a SharePoint Server. You must build the application as a farm solution. You cannot use Word Automation Services from a sandboxed solution.

A convenient way to use Word Automation Services is to write a web service that you can use from client applications.

However, the easiest way to show how to write code that uses Word Automation Services is to build a console application. You must build and run the console application on the SharePoint Server, not on a client computer. The code to start and monitor conversion jobs is identical to the code that you would write for a Web Part, a workflow, or an event handler. Showing how to use Word Automation Services from a console application enables us to discuss the API without adding the complexities of a Web Part, an event handler, or a workflow.

Important Note

Note that the following sample applications call Sleep(Int32) so that the examples query for status every five seconds. This would not be the best approach when you write code that you intend to deploy on production servers. Instead, you would want to write a workflow with a delay activity.

To build the application

  1. Start Microsoft Visual Studio 2010.

  2. On the File menu, point to New, and then click Project.

  3. In the New Project dialog box, in the Recent Template pane, expand Visual C#, and then click Windows.

  4. To the right side of the Recent Template pane, click Console Application.

  5. By default, Visual Studio creates a project that targets .NET Framework 4. However, you must target .NET Framework 3.5. From the list at the upper part of the New Project dialog box, select .NET Framework 3.5.

  6. In the Name box, type the name that you want to use for your project, such as FirstWordAutomationServicesApplication.

  7. In the Location box, type the location where you want to place the project.

    Figure 1. Creating a solution in the New Project dialog box

    Creating solution in the New Project box

  8. Click OK to create the solution.

  9. By default, Visual Studio 2010 creates projects that target x86 CPUs, but to build SharePoint Server applications, you must target Any CPU.

  10. If you are building a Microsoft Visual C# application, in Solution Explorer window, right-click the project, and then click Properties.

  11. In the project properties window, click Build.

  12. Point to the Platform Target list, and select Any CPU.

    Figure 2. Target Any CPU when building a C# console application

    Changing target to any CPU

  13. If you are building a Microsoft Visual Basic .NET Framework application, in the project properties window, click Compile.

    Figure 3. Compile options for a Visual Basic application

    Compile options for Visual Basic applications

  14. Click Advanced Compile Options.

    Figure 4. Advanced Compiler Settings dialog box

    Advanced Compiler Settings dialog box

  15. Point to the Platform Target list, and then click Any CPU.

  16. To add a reference to the Microsoft.Office.Word.Server assembly, on the Project menu, click Add Reference to open the Add Reference dialog box.

  17. Select the .NET tab, and add the Microsoft Office 2010 component.

    Figure 5. Adding a reference to Microsoft Office 2010 component

    Add reference to Microsoft Office 2010 component

  18. Next, add a reference to the Microsoft.SharePoint assembly.

    Figure 6. Adding a reference to Microsoft SharePoint

    Adding reference to Microsoft SharePoint

The following examples provide the complete C# and Visual Basic listings for the simplest Word Automation Services application.

 
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.SharePoint;
using Microsoft.Office.Word.Server.Conversions;

class Program
{
    static void Main(string[] args)
    {
        string siteUrl = "http://localhost";
        // If you manually installed Word automation services, then replace the name
        // in the following line with the name that you assigned to the service when
        // you installed it.
        string wordAutomationServiceName = "Word Automation Services";
        using (SPSite spSite = new SPSite(siteUrl))
        {
            ConversionJob job = new ConversionJob(wordAutomationServiceName);
            job.UserToken = spSite.UserToken;
            job.Settings.UpdateFields = true;
            job.Settings.OutputFormat = SaveFormat.PDF;
            job.AddFile(siteUrl + "/Shared%20Documents/Test.docx",
                siteUrl + "/Shared%20Documents/Test.pdf");
            job.Start();
        }
    }
}
Note

Replace the URL assigned to siteUrl with the URL to the SharePoint site.

To build and run the example

  1. Add a Word document named Test.docx to the Shared Documents folder in the SharePoint site.

  2. Build and run the example.

  3. After waiting one minute for the conversion process to run, navigate to the Shared Documents folder in the SharePoint site, and refresh the page. The document library now contains a new PDF document, Test.pdf.

In many scenarios, you want to monitor the status of conversions, to inform the user when the conversion process is complete, or to process the converted documents in additional ways. You can use the ConversionJobStatus class to query Word Automation Services about the status of a conversion job. You pass the name of the WordServiceApplicationProxy class as a string (by default, “Word Automation Services”), and the conversion job identifier, which you can get from the ConversionJob object. You can also pass a GUID that specifies a tenant partition. However, if the SharePoint Server farm is not configured for multiple tenants, you can pass null (Nothing in Visual Basic) as the argument for this parameter.

After you instantiate a ConversionJobStatus object, you can access several properties that indicate the status of the conversion job. The following are the three most interesting properties.

ConversionJobStatus Properties

  • Count: Number of documents currently in the conversion job.

  • Succeeded: Number of documents successfully converted.

  • Failed: Number of documents that failed conversion.

Whereas the first example specified a single document to convert, the following example converts all documents in a specified document library. You have the option of creating all converted documents in a different document library than the source library, but for simplicity, the following example specifies the same document library for both the input and output document libraries. In addition, the following example specifies that the conversion job should overwrite the output document if it already exists.

 

Console.WriteLine("Starting conversion job");
ConversionJob job = new ConversionJob(wordAutomationServiceName);
job.UserToken = spSite.UserToken;
job.Settings.UpdateFields = true;
job.Settings.OutputFormat = SaveFormat.PDF;
job.Settings.OutputSaveBehavior = SaveBehavior.AlwaysOverwrite;
SPList listToConvert = spSite.RootWeb.Lists["Shared Documents"];
job.AddLibrary(listToConvert, listToConvert);
job.Start();
Console.WriteLine("Conversion job started");
ConversionJobStatus status = new ConversionJobStatus(wordAutomationServiceName,
    job.JobId, null);
Console.WriteLine("Number of documents in conversion job: {0}", status.Count);
while (true)
{
    Thread.Sleep(5000);
    status = new ConversionJobStatus(wordAutomationServiceName, job.JobId,
        null);
    if (status.Count == status.Succeeded + status.Failed)
    {
        Console.WriteLine("Completed, Successful: {0}, Failed: {1}",
            status.Succeeded, status.Failed);
        break;
    }
    Console.WriteLine("In progress, Successful: {0}, Failed: {1}",
        status.Succeeded, status.Failed);
}

To run this example, add some WordprocessingML documents to the Shared Documents library. When you run it, you see output similar to the following:

 
Starting conversion job
Conversion job started
Number of documents in conversion job: 4
In progress, Successful: 0, Failed: 0
In progress, Successful: 0, Failed: 0
Completed, Successful: 4, Failed: 0

You may want to determine which documents failed conversion, perhaps to inform the user, or to take remedial action such as removing the invalid document from the input document library. You can call the GetItems method, which returns a collection of ConversionItemInfo objects. When you call the GetItems method, you pass a parameter that specifies whether you want to retrieve a collection of failed conversions or successful conversions. The following example shows how to do this.

 

Console.WriteLine("Starting conversion job");
ConversionJob job = new ConversionJob(wordAutomationServiceName);
job.UserToken = spSite.UserToken;
job.Settings.UpdateFields = true;
job.Settings.OutputFormat = SaveFormat.PDF;
job.Settings.OutputSaveBehavior = SaveBehavior.AlwaysOverwrite;
SPList listToConvert = spSite.RootWeb.Lists["Shared Documents"];
job.AddLibrary(listToConvert, listToConvert);
job.Start();
Console.WriteLine("Conversion job started");
ConversionJobStatus status = new ConversionJobStatus(wordAutomationServiceName,
    job.JobId, null);
Console.WriteLine("Number of documents in conversion job: {0}", status.Count);
while (true)
{
    Thread.Sleep(5000);
    status = new ConversionJobStatus(wordAutomationServiceName, job.JobId, null);
    if (status.Count == status.Succeeded + status.Failed)
    {
        Console.WriteLine("Completed, Successful: {0}, Failed: {1}",
            status.Succeeded, status.Failed);
        ReadOnlyCollection<ConversionItemInfo> failedItems =
            status.GetItems(ItemTypes.Failed);
        foreach (var failedItem in failedItems)
            Console.WriteLine("Failed item: Name:{0}", failedItem.InputFile);
        break;
    }
    Console.WriteLine("In progress, Successful: {0}, Failed: {1}", status.Succeeded,
        status.Failed);
}

To run this example, create an invalid document and upload it to the document library. An easy way to create an invalid document is to rename the WordprocessingML document, appending .zip to the file name. Then delete the main document part (known as document.xml), which is in the Word folder of the package. Rename the document, removing the .zip extension so that it contains the normal .docx extension.

When you run this example, it produces output similar to the following.

 
 
Starting conversion job
Conversion job started
Number of documents in conversion job: 5
In progress, Successful: 0, Failed: 0
In progress, Successful: 0, Failed: 0
In progress, Successful: 4, Failed: 0
In progress, Successful: 4, Failed: 0
In progress, Successful: 4, Failed: 0
Completed, Successful: 4, Failed: 1
Failed item: Name:http://intranet.contoso.com/Shared%20Documents/IntentionallyInvalidDocument.docx

Another approach to monitoring a conversion process is to use event handlers on a SharePoint list to determine when a converted document is added to the output document library.

In some situations, you may want to delete the source documents after conversion. The following example shows how to do this. 

Console.WriteLine("Starting conversion job");
ConversionJob job = new ConversionJob(wordAutomationServiceName);
job.UserToken = spSite.UserToken;
job.Settings.UpdateFields = true;
job.Settings.OutputFormat = SaveFormat.PDF;
job.Settings.OutputSaveBehavior = SaveBehavior.AlwaysOverwrite;
SPFolder folderToConvert = spSite.RootWeb.GetFolder("Shared Documents");
job.AddFolder(folderToConvert, folderToConvert, false);
job.Start();
Console.WriteLine("Conversion job started");
ConversionJobStatus status = new ConversionJobStatus(wordAutomationServiceName,
    job.JobId, null);
Console.WriteLine("Number of documents in conversion job: {0}", status.Count);
while (true)
{
    Thread.Sleep(5000);
    status = new ConversionJobStatus(wordAutomationServiceName, job.JobId, null);
    if (status.Count == status.Succeeded + status.Failed)
    {
        Console.WriteLine("Completed, Successful: {0}, Failed: {1}",
            status.Succeeded, status.Failed);
        Console.WriteLine("Deleting only items that successfully converted");
        ReadOnlyCollection<ConversionItemInfo> convertedItems =
            status.GetItems(ItemTypes.Succeeded);
        foreach (var convertedItem in convertedItems)
        {
            Console.WriteLine("Deleting item: Name:{0}", convertedItem.InputFile);
            folderToConvert.Files.Delete(convertedItem.InputFile);
        }
        break;
    }
    Console.WriteLine("In progress, Successful: {0}, Failed: {1}",
        status.Succeeded, status.Failed);
}
 
 
Console.WriteLine("Starting conversion job")
Dim job As ConversionJob = New ConversionJob(wordAutomationServiceName)
job.UserToken = spSite.UserToken
job.Settings.UpdateFields = True
job.Settings.OutputFormat = SaveFormat.PDF
job.Settings.OutputSaveBehavior = SaveBehavior.AlwaysOverwrite
Dim folderToConvert As SPFolder = spSite.RootWeb.GetFolder("Shared Documents")
job.AddFolder(folderToConvert, folderToConvert, False)
job.Start()
Console.WriteLine("Conversion job started")
Dim status As ConversionJobStatus = _
    New ConversionJobStatus(wordAutomationServiceName, job.JobId, Nothing)
Console.WriteLine("Number of documents in conversion job: {0}", status.Count)
While True
    Thread.Sleep(5000)
    status = New ConversionJobStatus(wordAutomationServiceName, job.JobId, _
                                     Nothing)
    If status.Count = status.Succeeded + status.Failed Then
        Console.WriteLine("Completed, Successful: {0}, Failed: {1}", _
                          status.Succeeded, status.Failed)
        Console.WriteLine("Deleting only items that successfully converted")
        Dim convertedItems As ReadOnlyCollection(Of ConversionItemInfo) = _
            status.GetItems(ItemTypes.Succeeded)
        For Each convertedItem In convertedItems
            Console.WriteLine("Deleting item: Name:{0}", convertedItem.InputFile)
            folderToConvert.Files.Delete(convertedItem.InputFile)
        Next
        Exit While
    End If
    Console.WriteLine("In progress, Successful: {0}, Failed: {1}",
                      status.Succeeded, status.Failed)
End While

The power of using Word Automation Services becomes clear when you use it in combination with the Open XML SDK 2.0 for Microsoft Office. You can programmatically modify a document in a document library by using the Open XML SDK 2.0, and then use Word Automation Services to perform one of the tasks that are difficult to do with the Open XML SDK alone. A common need is to programmatically generate a document and then generate or update its table of contents. Consider the following document, which contains a table of contents.

Figure 7. Document with a table of contents

Document with table of contents

Let’s assume you want to modify this document, adding content that should be included in the table of contents. This next example takes the following steps.

  1. Opens the site and retrieves the Test.docx document by using a Collaborative Application Markup Language (CAML) query.

  2. Opens the document by using the Open XML SDK 2.0, and adds a new paragraph styled as Heading 1 at the beginning of the document.

  3. Starts a conversion job, converting Test.docx to TestWithNewToc.docx. It waits for the conversion to complete, and reports whether it was converted successfully.

 
Console.WriteLine("Querying for Test.docx");
SPList list = spSite.RootWeb.Lists["Shared Documents"];
SPQuery query = new SPQuery();
query.ViewFields = @"<FieldRef Name='FileLeafRef' />";
query.Query =
  @"<Where>
      <Eq>
        <FieldRef Name='FileLeafRef' />
        <Value Type='Text'>Test.docx</Value>
      </Eq>
    </Where>";
SPListItemCollection collection = list.GetItems(query);
if (collection.Count != 1)
{
    Console.WriteLine("Test.docx not found");
    Environment.Exit(0);
}
Console.WriteLine("Opening");
SPFile file = collection[0].File;
byte[] byteArray = file.OpenBinary();
using (MemoryStream memStr = new MemoryStream())
{
    memStr.Write(byteArray, 0, byteArray.Length);
    using (WordprocessingDocument wordDoc =
        WordprocessingDocument.Open(memStr, true))
    {
        Document document = wordDoc.MainDocumentPart.Document;
        Paragraph firstParagraph = document.Body.Elements<Paragraph>()
            .FirstOrDefault();
        if (firstParagraph != null)
        {
            Paragraph newParagraph = new Paragraph(
                new ParagraphProperties(
                    new ParagraphStyleId() { Val = "Heading1" }),
                new Run(
                    new Text("About the Author")));
            Paragraph aboutAuthorParagraph = new Paragraph(
                new Run(
                    new Text("Eric White")));
            firstParagraph.Parent.InsertBefore(newParagraph, firstParagraph);
            firstParagraph.Parent.InsertBefore(aboutAuthorParagraph,
                firstParagraph);
        }
    }
    Console.WriteLine("Saving");
    string linkFileName = file.Item["LinkFilename"] as string;
    file.ParentFolder.Files.Add(linkFileName, memStr, true);
}
Console.WriteLine("Starting conversion job");
ConversionJob job = new ConversionJob(wordAutomationServiceName);
job.UserToken = spSite.UserToken;
job.Settings.UpdateFields = true;
job.Settings.OutputFormat = SaveFormat.Document;
job.AddFile(siteUrl + "/Shared%20Documents/Test.docx",
    siteUrl + "/Shared%20Documents/TestWithNewToc.docx");
job.Start();
Console.WriteLine("After starting conversion job");
while (true)
{
    Thread.Sleep(5000);
    Console.WriteLine("Polling...");
    ConversionJobStatus status = new ConversionJobStatus(
        wordAutomationServiceName, job.JobId, null);
    if (status.Count == status.Succeeded + status.Failed)
    {
        Console.WriteLine("Completed, Successful: {0}, Failed: {1}",
            status.Succeeded, status.Failed);
        break;
    }
}

After running this example with a document similar to the one used earlier in this section, a new document is produced, as shown in Figure 8.

Figure 8. Document with updated table of contents

Document with updated table of contents

The Open XML SDK 2.0 is a powerful tool for building server-side document generation and document processing systems. However, there are aspects of document manipulation that are difficult, such as document conversions and updating of fields, tables of contents, and more. Word Automation Services fills this gap with a high-performance solution that can scale out to your requirements. Using the Open XML SDK 2.0 in combination with Word Automation Services enables many scenarios that are difficult when using only the Open XML SDK 2.0.

Common Techniques in Responsive Web Design

In this article, I’ll dive into some of the most common practices for building responsive site layouts and experiences. I’ll describe the emerging and available techniques for site layouts that flexibly resize based on screen real estate (referred to as “fluid grids”) so as to ensure that users get complete experiences across whatever screen size they are using. Additionally, I’ll show how to present rich media, especially images, and how developers can ensure that visitors on small-screen devices do not incur additional bandwidth costs for high-quality media.


As you play with some of the techniques I describe, here are a few ways to test what your site looks like at different device resolutions:

  1. Benjamin Keen has a responsive Web design bookmarklet that you can add to your Favorites bar (or Bookmarks bar) on your browser of choice. You can click on this bookmarklet to test your site layout in different resolutions.
  2. If you’re using Windows 8, you can always test your page layout on Internet Explorer 10 by employing the Windows 8 snap modes. In Windows 8, you can use Internet Explorer 10 on your full screen (full mode), or you can multitask by docking the browser to snap mode, where it emulates the layout characteristics of a smart phone browser. Additionally, you can dock the browser into fill mode, where it occupies 1024 x 768 pixels (px) on a default Windows 8 screen size of 1366 x 768 px. This is a great proxy for how your site will look on iPad screens as well as traditional 4:3 screens.
  3. Lastly, you’ll probably do a lot of what you see in Figure 1 (image courtesy Reddit.com).

Figure 1. Basic Testing for Responsive Web Design

Media Queries

Traditionally, developers have relied on sniffing out the browser’s user-agent string to identify whether a user is visiting a site from a PC or a mobile device. Often, after doing so, they redirect users to different subsites that serve up virtually the same content but with different layout and information design. For example, in the past, users who visited MSN.com could see the traditional PC experience or get hardware-specific mobile experiences by being redirected to http://m.msn.com.

But redirections require two separate engineering efforts. Also, this approach was optimized for two screen layouts (mobile with 320-px width and desktop with 1024-px width). It did not intelligently provide a great experience for users visiting from intermediate device sizes (such as tablets) as well as users with significantly larger screens.

CSS3 looks to help Web developers separate content creation (their page markup and functionality in HTML and JavaScript) from the presentation of that content and handle layout for different dimensions entirely within CSS via the introduction of media queries.

A media query is a way for a developer to write a CSS3 stylesheet and declare styles for all UI elements that are conditional to the screen size, media type and other physical aspects of the screen. With media queries, you can declare different styles for the same markup by asking the browser about relevant factors, such as device width, device pixel density and device orientation.
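
To make that concrete, here is a minimal sketch (the .newsColumn class name is made up for illustration) of how the same markup can receive different styles depending on the viewport width and orientation the browser reports:

/* Default styles, applied regardless of screen size */
.newsColumn {
  width: 100%;
}

/* Applied only when the viewport is at least 768 CSS pixels wide */
@media screen and (min-width: 768px) {
  .newsColumn {
    width: 50%;
    float: left;
  }
}

/* Applied only when the device is held in landscape orientation */
@media screen and (orientation: landscape) {
  .newsColumn {
    font-size: 1.1em;
  }
}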

But even with CSS3, it’s very easy to fall into the trap of building three different fixed-width layouts for the same Web page markup to target common screen dimensions today (desktop, tablet and phone). However, this is not truly responsive Web design and doesn’t provide an optimal experience for all devices. Media queries are one part of the solution to providing truly responsive Web layout; the other is content that scales proportionally to fill the available screen. I’ll address this later.

What Does “Pixel” Mean Anymore?

The pixel has been used for Web design and layout for some time now and has traditionally referred to a single point on the user’s screen capable of displaying a red-blue-green dot. Pixel-based Web design has been the de facto way of doing Web layout, for declaring the dimensions of individual elements of a Web page as well as for typography. This is primarily because most sites employ images in their headers, navigation and other page UI elements and pick a site layout with a fixed pixel width in which their images look great.

However, the recent emergence of high-pixel-density screens and retina displays has added another layer of meaning to this term. In contemporary Web design, a pixel (that is, the hardware pixel we just discussed) is no longer the single smallest point that can be rendered by a screen.

Visit a Web site on your iPhone 4, and its 640 x 960 px hardware screen will tell your browser that it has 320 x 480 px. This is probably for the best, since you don’t want a 640-px-wide column of text fitted onto a screen merely 2 inches wide. But what the iPhone screen and other high-density devices highlight is that we’re not developing for the hardware pixel anymore.

The W3C defines a reference pixel as the visual angle of 1 px on a device with 96 ppi density at an arm’s length distance from the reader. Complicated definitions aside, all this means is that when you design for modern-day, high-quality screens, your media queries will respond to reference pixels, also referred to as CSS pixels. The number of CSS pixels is often going to be less than the number of hardware pixels, which is a good thing! (Beware: hardware pixels are what most device-manufacturers use to advertise their high-quality phones, slates and retina displays—they’ll lead your CSS astray.)

This ratio of hardware pixels to CSS pixels is called device pixel ratio. A higher device pixel ratio just means that each CSS pixel is being rendered by more hardware pixels, which makes your text and layout look sharper.

Wikipedia has a great list of recent displays by pixel density, which includes device pixel ratio. You can always use CSS3 media queries to identify the device pixel ratio if you must, like so:

/* Note that the device-pixel-ratio property below might need to be vendor-prefixed
   for some browsers */
@media screen and (device-pixel-ratio: 1.5)
{
  /* Adjust your layout for 1.5 hardware pixels to each reference pixel */
}
@media screen and (device-pixel-ratio: 2)
{
  /* Adjust your layout, font sizes etc. for 2 hardware pixels to each reference pixel */
}

There are also some open source libraries that let developers calculate device pixel ratio using JavaScript across browsers, such as GetDevicePixelRatio by Tyson Matanich. Note that this result is available only in JavaScript, but it can be used to optimize image downloads so that high-quality images (with larger file sizes) are not downloaded on nonretina displays.

However, it is not recommended that you use device pixel ratio to define your page and content layout. While the reference pixel vs. hardware pixel disparity can be confusing, it’s easy to understand why this is crucial in offering users a better experience. An iPhone 3GS and iPhone 4 have approximately the same physical screen size and have identical use patterns, so it stands to reason that a block of text should have approximately the same physical size.

Similarly, just because you have an HDTV with a 1920 x 1080 screen, this doesn’t mean sites should render content at this native resolution. Users sit several feet away from their TV and also use imprecise input mechanisms (such as joysticks) to interact with it, which is why it’s preferred that TV browsers pack multiple hardware pixels into a reference pixel. As long as you’ve designed your site with a 960-px-wide layout for desktop browsers, the site will look comparable and be usable, regardless of whether your TV is 1080p or an older model with 720p.

As a general rule of thumb, your text content will look better on these devices. However, your image content may look pixelated and blurry. Thus, from a practical perspective, device pixel ratio matters most when you’re trying to serve high-quality photography/image data to your users on high-pixel-density screens. Moreover, you want to make sure that your brand logo looks sharp on your users’ fancy new phones. Later in this article, I’ll talk about techniques for implementing responsive images and point to some existing JavaScript libraries that can address this.

As we continue, I’ll use the term pixel to mean reference pixel and explicitly call out hardware pixel as needed.

Scaling Your Site Layout Responsively

Grid-based layout is a key component of Web site design. Most sites you visit can easily be visualized as a series of rectangles for page components such as headers, site navigation, content, sidebars, footers and so on.

Ideally, when we design responsive sites, we want to make the grid layout agnostic of the user’s screen size. This means we want our layout and content to scale to as much screen real estate as is available (within reason), instead of providing two or three fixed-width layouts.

Mobile-First Design

As I pointed out in the first article of this series, more than 12 percent of the world’s Web traffic comes from mobile devices. This fraction is significantly higher in nations with higher smartphone penetration and is expected to increase notably in the next few years as adoption picks up in Asia, Latin America and Africa.

Additionally, taking a mobile-first approach to site design is a good exercise in information design. Basically, it helps you prioritize the content and functionality that you want to make available on the mobile version of a site and then progressively enhance the site layout for larger devices. This way you ensure that your users have a valuable experience on their mobile devices—not just an afterthought to their desktop experience—and you can take advantage of additional real estate when available to provide a more visually engaging experience as well as navigation to additional “tier-two” content.

Case Study: The Redesigned Microsoft.com

When you visit microsoft.com on a mobile phone or narrow your PC browser width (with screen width under 540 px), you see a single hero image as part of a touch-friendly, visually rich slide show advertising one product at a time. (See Figure 2.) The top products are highlighted in the Discover section. Additional navigation is below the fold or in an accordion-style menu that is collapsed by default and is exposed when the user taps the list icon. Similarly, the search box is hidden by default to conserve screen real estate—the user needs to tap the search icon. This way, the mobile layout shows top products and links one below the other and only requires vertical panning. Content and product links below the fold are prioritized from top to bottom.

Figure 2. Microsoft.com as Designed for Mobile Phones

Once the width of the viewport exceeds 540 px (at which point we can assume that the user is no longer viewing the site on a phone but on a tablet or a low-resolution PC), you notice the first layout change (Figure 3). The search box is now visible by default, as is the top-level navigation, which was previously hidden under the list icon. The products in the Discover section are now presented in a single line, since they fit in the available width. Most importantly, notice that in this transition the hero image always occupies the available width of the screen.

Figure 3. Microsoft.com After Exceeding 540 px

The next layout change, shown in Figure 4, occurs at a width of 640 px or higher. As always, the hero image takes up all available screen width. The individual items within the For Work section are laid out side-by-side. The additional real estate also allows the caption for the hero image to be presented in line with the image and with motion, which is very eye-catching.

Figure 4. Layout Change at 640 px or Higher

The last layout change occurs at screen widths of 900 px and higher (Figure 5). The Discover menu floats to the left to take advantage of available horizontal space, which reduces the need for vertical scrolling.

Figure 5. Layout at Screen Widths of 900 px and Higher

Finally, and most importantly, the page layout—especially the hero image—continues to scale and fill the available horizontal real estate (until 1600 px) so as to maximize the impact of the visual eye-candy (Figure 6). In fact, this is the case for all screen widths between 200 px and 1600 px—there is never any wasted whitespace on the sides of the hero image. (Similarly, the relative layouts of the Discover and For Work sections don’t change, but the images scale proportionally as well.)

Figure 6. Maximizing Impact at Higher Resolutions

Techniques for Responsive Layout

Great, so how do we implement this kind of experience? Generally, the adaptive layout for Web sites boils down to two techniques:

  • Identify break points where your layout needs to change.
  • In between break points, scale content proportionally.

Let’s examine these techniques independently.

Scaling Content Proportionally Between Break Points

As pointed out in the evaluation of microsoft.com, the relative layout of the header, hero image, navigation area and content area on the home page does not change for screen widths of 900 px or higher. This is valuable because when users visit the site on a 1280 x 720 monitor, they are not seeing a 900-px-wide Web site with more than 25 percent of the screen going to whitespace in the right and left margins.

Similarly, two users might visit the site, one with an iPhone 4 with 480 x 320 px resolution (in CSS pixels) and another using a Samsung Galaxy S3 with 640 x 360 px resolution. For any layout with a width less than 512 px, microsoft.com scales down the layout proportionally so that for both users the entire mobile browser is devoted to Web content and not whitespace, regardless of whether they are viewing the site in portrait or landscape mode.

There are a couple of ways to implement this, including the CSS3 proposal of fluid grids. However, this is not supported across major browsers yet. You can see this working on Internet Explorer 10 (with vendor prefixes), and MSDN has examples of the CSS3 grid implementation here and here.

In the meantime, we’re going to use the tried-and-tested methods of percentage-based widths to achieve a fluid grid layout. Consider the simplistic example illustrated in Figure 7, which has the following design requirements:

  1. A #header that spans across the width of the screen.
  2. A #mainContent div that spans 60 percent of the width of the screen.
  3. A #sideContent div that spans 40 percent of the screen width.
  4. 20-px fixed spacing between #mainContent and #sideContent.
  5. A #mainImage img element that occupies all the available width inside #mainContent, excluding a fixed 10-px gutter around it.

Figure 7. Set Up for a Fluid Grid

The markup for this page would look like the following:

<!doctype html>
<html>
<head>
  <title>Proportional Grid page</title>
  <style>
    body {
      /* Note: the below properties for body are illustrative only.
         Not needed for responsiveness */
      font-size: 40px;
      text-align: center;
      line-height: 100px;
      vertical-align: middle;
    }
    #header {
      /* Note: the below properties for #header are illustrative only.
         Not needed for responsiveness */
      height: 150px;
      border: solid 1px blue;
    }
    #mainContent {
      width: 60%;
      float: right;
      /* This way the mainContent div is always 60% of the width of its parent container,
         which in this case is the <body> tag that defaults to 100% page width anyway */
      background: #999999;
    }
    #imageContainer {
      margin: 10px;
      width: auto;
      /* This forces there to always be a fixed margin of 10px around the image */
    }
    #mainImage {
      width: 100%;
      /* The image grows proportionally with #mainContent, but still maintains 10px of gutter */
    }
    #sideContentWrapper {
      width: 40%;
      float: left;
    }
    #sideContent {
      margin-right: 20px;
      /* sideContent always keeps 20px of right margin, relative to its parent container, namely
         #sideContentWrapper. Otherwise it grows proportionally. */
      background: #cccccc;
      min-height: 200px;
    }
  </style>
</head>
<body>
  <div id="header">Header</div>
  <div id="mainContent">
    <div id="imageContainer">
      <img id="mainImage" src="microsoft_pc_1.png" />
    </div>
    Main Content
  </div>
  <div id="sideContentWrapper">
    <div id="sideContent">
      Side Content
    </div>
  </div>
</body>
</html>

A similar technique is employed by Wikipedia for its pages. You’ll notice that the content of an article seems to always fit the available screen width. Most interestingly, the sidebars (the left navigation bar as well as the right column with the HTML5 emblem) have a fixed pixel width and seem to “stick” to their respective sides of the screen. The central area with the textual content grows and shrinks in response to the screen size. Figure 8 and Figure 9 show examples. Notice the sidebars remain at a fixed width, and the available width for the remaining text content in the center gets proportionally scaled.

Figure 8. Wikipedia on a 1920-px wide monitor

Figure 9. Wikipedia on an 800-px wide monitor

Such an effect for a site with a fixed navigation menu on the left can easily be achieved with the following code:

<!DOCTYPE html>
<html>
<head>
  <title>Fixed-width left navigation</title>
  <style type="text/css">
    body {
      /* Note: the below properties for body are illustrative only.
         Not needed for responsiveness */
      font-size: 40px;
      text-align: center;
      line-height: 198px;
      vertical-align: middle;
    }
    #mainContent {
      margin-left: 200px;
      min-height: 200px;
      background: #cccccc;
    }
    #leftNavigation {
      width: 180px;
      margin: 0 5px;
      float: left;
      border: solid 1px red;
      min-height: 198px;
    }
  </style>
</head>
<body>
  <div id="leftNavigation">Navigation</div>
  <div id="mainContent">SomeContent</div>
</body>
</html>

Changing the Page Layout Based on Breakpoints

Proportional scaling is only part of the solution—because we don’t want to scale down all content equally for phones and other small-screen devices. This is where we can use CSS3 media queries to progressively enhance our site experience and add additional columns as screen size grows larger. Similarly, for small screen widths, we might use media queries to hide entire blocks of low-priority content.

MediaQueri.es is a great resource to browse to see what kinds of layout changes sites undergo at their breakpoints. Consider the example of Simon Collison shown in Figure 10.

Figure 10. Simon Collison at Different Screen Sizes

We can achieve a similar experience using CSS3 media queries. Let’s examine the simple example illustrated in Figure 11, where I have four divs: #red, #green, #yellow and #blue.

Figure 11. Example for CSS Media Queries

Here’s the sample code:

<!doctype html>
<html>
<head>
  <title>Break points with media queries</title>
  <style type="text/css">
    /* Default styling info */
    /* Four columns stacked one below the other in a phone layout */
    /* Remember to plan and style your sites mobile-first */

    #mainContent {
      margin: 40px;
    }

    #red, #yellow, #green, #blue {
      height: 200px;
    }
    #red {
      background: red;
    }
    #green {
      background: green;
    }
    #yellow {
      background: yellow;
    }
    #blue {
      background: blue;
    }

    @media screen and (max-width: 800px) and (min-width: 540px)
    {
      /* This is the breakpoint where we transition from the default stacked
         phone layout to the square 2 x 2 grid layout */

      #red, #blue, #green, #yellow {
        width: 50%;
        display: inline-block;
      }
    }

    @media screen and (min-width: 800px)
    {
      /* On larger screens, all four columns are displayed side by side
         in a single row */

      #red, #yellow, #green, #blue {
        width: 25%;
        display: inline-block;
        white-space: nowrap;
      }
    }
  </style>
</head>
<body>
  <div id="mainContent">
    <div id="red"></div><div id="green"></div><div id="yellow"></div><div id="blue"></div>
  </div>
</body>
</html>

Often though, you don’t need to write such stylesheets from scratch. After all, what’s Web development without taking advantage of the abundance of open-source frameworks out there, right? Existing grid-layout frameworks, such as the Gumby Framework (which is built on top of Nathan Smith’s tried-and-true 960gs) and the Skeleton Framework, already provide out-of-the-box support for reordering the number of grid columns based on available screen width. Another great starting point, especially for a Wikipedia-esque layout, is the simply named CSS Grid. This provides users with the standard fixed-width left navigation bar, which disappears when the screen resolution shifts to that of tablets and smartphones, giving you a single-column layout.

More Media Queries

Depending on the needs of your site design, you might require other pieces of data about the device/viewport before making your CSS decisions. Media queries let you poll the browser for other attributes as well, such as device width and height, orientation, aspect ratio and resolution; the full set of media features is defined in the W3C CSS3 Media Queries specification.

Earlier, we broke down the two components of responsive layout to examine how they’re implemented. It’s crucial to remember that truly responsive layout is device agnostic—that is, not optimized for specific device widths—and is therefore a combination of the two techniques.
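
As a quick, hedged sketch of how the two techniques combine (the #content and #sidebar IDs are made up for illustration), a sidebar can keep a proportional width on larger screens and simply stack below the main content once a breakpoint is crossed:

/* Fluid grid: content and sidebar share the available width proportionally */
#content { width: 70%; float: left; }
#sidebar { width: 30%; float: left; }

/* Breakpoint: below 540px the sidebar stacks underneath the content */
@media screen and (max-width: 540px) {
  #content, #sidebar {
    width: 100%;
    float: none;
  }
}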

Images and Photos

Images are used on the Web for photo content as well as for styling (for background textures, custom borders and shadows and icons). Images make the Web beautiful, and we certainly want our sites to look rich and inviting to all users. However, the biggest concerns around images relate arguably to the most important part of the user experience—namely, performance and page load time.

Bandwidth Impact of Images

Our Web sites get served up in text—HTML, CSS and JavaScript. Often, these files don’t take more than 50 kilobytes or so to download. Images and other media are usually the most bandwidth-hungry parts of our pages. All the images on the homepage of a news site can add up to a couple of megabytes of content, which the browser must download as it renders the page. Additionally, if all the image content comes from separate files, each individual image file request causes additional network overhead. This is not a great experience for someone accessing your site on a 3G network, especially if you’re looking to serve up a gorgeous 8-megapixel panoramic landscape background. Besides, your user’s 320 x 480 px phone will not do justice to this high-quality image content. So, how do you ensure that users get a quick, responsive experience on their phones, which can then scale up to a richer media experience on larger devices?

Consider the following techniques, which you can combine to save users image downloads on the order of several hundred kilobytes, if not more, and provide a better performing experience.

Can You Replace Your Images with CSS?

CSS3 can help Web developers avoid using images altogether for a lot of common scenarios. In the past, developers have used images to achieve simple effects like text with custom fonts, drop-shadows, rounded corners, gradient backgrounds and so on.

Most modern browsers (Internet Explorer 10, Google Chrome, Mozilla Firefox and Safari) support the following CSS3 features, which developers can use to reduce the number of image downloads a user needs while accessing a site. Also, for older browsers, a number of these techniques degrade naturally (for example, the rounded border just gives way to a square border on Internet Explorer 8 and earlier), and this way your sites are still functional and usable on older browsers.

  • Custom font support using @font-face. With CSS3, you can upload custom fonts to your site (as long as you own the license to do so) and just point to them in your stylesheet. You no longer need to create images of text to get impactful titles and headers in custom fonts.
  • Background-gradients. Go to a lot of top sites, and you’ll notice that the background of the site is usually a gradient color, which helps the page look less “flat.” This can easily be achieved with CSS3, as seen here.
  • Rounded corners. CSS3 allows you to declaratively specify a border-radius for each of the four corners of an HTML element and avoid having to rely on those pesky 20 x 20 px images of circles to create a rounded box on your site design.
  • 2-D transforms. CSS3 allows you to declare 2-D transforms such as translate(), rotate(), skew() and others to change the appearance of your markup. IETestDrive has a great working example here. Common transforms such as rotation might cut back on the number of image downloads.
  • Box-shadow and text-shadow. Modern browsers support box-shadow and text-shadow, which allow site developers to make their content look more three dimensional and add prominence to important pieces of content (such as header text, images, floating menus and the like)

Some of these properties might require a browser-specific implementation (using a vendor prefix) in addition to the standard implementation. HTML5Please is a convenient resource for identifying whether you need to use additional vendor prefixing for your CSS3.
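
As a rough sketch of the kinds of rules that can replace image assets (the class names, font file and colors below are placeholders, and, as noted above, some of these properties may still need vendor prefixes on older browsers):

/* Custom font instead of a header image (font file name is a placeholder) */
@font-face {
  font-family: 'MyBrandFont';
  src: url('mybrandfont.woff');
}
h1 {
  font-family: 'MyBrandFont', 'Segoe UI', sans-serif;
  text-shadow: 1px 1px 2px #666666;  /* subtle text shadow, no image needed */
}

/* Gradient background, rounded corners and a drop shadow without any images */
.promoBox {
  background: linear-gradient(#ffffff, #dddddd);
  border-radius: 8px;
  box-shadow: 2px 2px 6px #999999;
}

/* 2-D transform, for example a slightly rotated badge */
.badge {
  transform: rotate(-5deg);
}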

Now, to be fair, users visiting your site on older browsers will see a functional but less aesthetic version of your site content. But the trade-off here is to ensure that the ever-growing number of people who visit your sites via cutting-edge mobile devices and 3G Internet have a fast, responsive site experience.

Use JavaScript to Download the Right Image Size for the Right Context

If your site experience inherently relies on pictures, you need a solution that scales across the spectrum of devices and network conditions to offer users a compelling experience in the context of the device they use. This means that on high-quality cinema displays you want to wow your audience with high-quality (that is, large file size) images. At the same time, you don’t want to surface your 1600 x 1200 px photographs to users on a 4-inch cellphone screen with a metered 3G data connection.

While the W3C is working on proposals for how to declare different image sources for a given picture, a few emerging JavaScript technologies can help you get started right now.

Media Query Listeners

Media Query Listeners are supported in modern browsers. They let developers use JavaScript to verify whether certain media query conditions have been met, and accordingly decide what resources to download.

For example, say your Web page has a photograph that someone posted. As a developer, you need to do two things:

  • Decide the thresholds (or break points) for showing a high-quality (large-screen experience) or a small-screen experience, and based on that decision, download a high-quality set of resources or the low-bandwidth set of resources. Include the following script at load time to ensure that your page downloads the appropriate set of assets and provides users with the right experience:
  var mediaQueryList = window.matchMedia("(min-width: 480px)");
  // NOTE: for IE10 you will have to use .msMatchMedia, the vendor-prefixed
  // implementation, instead
  var isRegularScreen = mediaQueryList.matches;
  // This returns a Boolean, which you can use to evaluate whether to use
  // high-quality assets or low-bandwidth assets

  if (isRegularScreen)
  {
    // run script to download the high-quality images
  }
  else
  {
    // the condition has failed, and the user is on a smartphone or in snap mode;
    // run script to download low-bandwidth images
  }
  • Optionally, add an event listener to watch for changes to the media size so that as a user resizes her browser window, you can run different sets of scripts to acquire high-quality resources as needed. For example, a user might first visit your site on Windows 8 in snap mode with a 320-px width. Later, the user might find your content interesting and open the browser in full-mode (and even share what she is seeing on her HDTV.) At this point, you might want to provide a better experience for your media:
  mediaQueryList.addListener(mediaSizeChange);

  function mediaSizeChange(mediaQueryList)
  {
    // Executed whenever the media query changes from true to false or vice versa
    if (mediaQueryList.matches)
    {
      // run script to acquire high-quality assets
    }
    else
    {
      // In this case the user has gone from a large screen to a small screen
      // by resizing their browser down.
      // If the high-quality images are already downloaded,
      // we could treat this as a no-op and just use the existing high-quality assets.

      // Alternatively, if the smaller image shows a clipped version of the
      // high-quality image, trigger the download of low-bandwidth images.
    }
  }

Custom JS Libraries

Of course, there are also custom libraries to help you with this. These libraries work in a similar way by identifying the size and resolution of the user’s device and then delivering, on-the-fly, a scaled-down version of your source image over the network. Here are some examples:

  • The Filament Group, which redesigned the Boston Globe site to be responsive, has a technique available here, which requires you to add some JavaScript files to your site and alter your site’s .htaccess file. Then, for each of your <img> tags, you provide a regular-size version as well as a hi-res version, and their plug-in takes care of the rest.
  <img src="smallRes.jpg" data-fullsrc="largeRes.jpg">
  • A similar technique is available at AdaptiveImages.com. The benefit of this technique is that it does not require developers to hand-code their markup to point to low-resolution and high-resolution images, nor does it require developers to manually upload two different versions of the same image.

  • Tyson Matanich has made publicly available the Polyfill codebase, which is the technique used by microsoft.com in its adaptive redesign detailed earlier. Tyson also sheds light on the rationale behind the available functionality in the Polyfill library in his blog post. Some of the optimizations that Tyson and his team have made for the Polyfill codebase include the following (which work across browsers, even on Internet Explorer 6):

  1. Allow developers to specify which images should load before the DOM is ready (must-have images for page content).
  2. Allow developers to specify which images to load only after the rest of the page is ready (for example, images in a slide show that will only toggle 10 seconds later).
  3. Allow developers to decide whether new images should be downloaded and swapped in at the time a browser is resized.

The blog post details all the optimizations that have been exposed to developers in the Polyfill API.

Text

Sites use text to communicate organization and content to their users in two predominant ways, namely body text and header text. It’s definitely valuable to think through how your site is going to scale these across different contexts.

Body text is particularly interesting if your site features articles, blog posts and tons of written content that users consume. Your users want to read the same 500-word article on their desktop, TV and 320-px-wide screens and, as the site developer, you want to balance readability with convenience (that is, not having to scroll too much). The width of the article’s body can be scaled up to match the screen size, but more than that, you can offer larger type and improved line spacing to further improve readability for users with bigger screens.

Blocks of text are usually most readable when they hold approximately 66 characters per line, so if your site really depends on readability of long articles, optimizing type responsively for users can really improve their overall experience.

The following example uses the CSS3 media query max-width to progressively increase the readability of paragraph text:

/* Pack content more tightly on mobile screens to reduce scrolling in really long articles */
p {
  font-size: 0.6em;
  line-height: 1em;
  letter-spacing: -0.05em;
}

@media screen and (max-width: 800px) and (min-width: 400px)
{
  /* Intermediate text density on tablet devices */
  p {
    font-size: 0.8em;
    line-height: 1.2em;
    letter-spacing: 0;
  }
}

@media screen and (min-width: 800px)
{
  /* Text can be spaced out a little better on a larger screen */
  p {
    font: 1em 'Verdana', 'Arial', sans-serif;
    line-height: 1.5em;
    letter-spacing: 0.05em;
  }
}

AListApart.com has a great example of an article with responsively scaled type here.

Additionally, your site probably uses headlines to break up content—to make it easier for a user who is scanning through your site’s pages to quickly identify how information and functionality are structured. Sites often use headlines with large impactful type and add margins and padding.

Headers in HTML (specifically <h1>, <h2>, and similar tags) are usually styled not just with a large font size but also with spacious margins and padding to ensure that they stand out and break the flow of content.

With a similar technique, you can consider scaling down the text size, margins, padding and other spacing attributes you use for your headlines as a function of the available device real-estate. You can also use available open-source solutions, such as FitText, to achieve this.
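
Here is a small sketch of that idea (the breakpoint and sizes are arbitrary): headlines keep large type and generous margins on wide screens and tighten up on small ones.

/* Large, airy headlines on wide screens */
h1 {
  font-size: 3em;
  margin: 0.8em 0;
}

/* Tighter headlines on small screens to save vertical space */
@media screen and (max-width: 540px) {
  h1 {
    font-size: 1.6em;
    margin: 0.4em 0;
  }
}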

Optimizing Form Fields

If your site requires users to fill in forms, you might want to ensure that you can minimize interactions for touch users. This is especially relevant if you have a lot of text inputs.

HTML5 extends the type attribute for the <input> tag to let developers add semantic meaning to a textbox. For example, if a user is filling out a contact form, the phone number input can be marked up as <input type="tel" /> and the email address field can be marked up as <input type="email" />.

Modern browsers, especially those on touch devices, will parse this attribute and optimize the layout of the touch-screen keyboard accordingly. For example, when a user taps on a phone number field, the browser’s touch keyboard will prominently display a numpad, and when the user taps on the email address field, the touch keyboard will surface the @ key, as well as a .com key to minimize typing. This is a minor tweak that can really improve the overall form-filling experience on your site for users visiting via touchscreen phones and tablets.
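
A minimal sketch of such a contact form follows (the field names are made up for illustration; type="url" and type="number" are further standard HTML5 input types you can use the same way):

<form action="/contact" method="post">
  <input type="text" name="fullName" placeholder="Name" />
  <input type="tel" name="phone" placeholder="Phone number" />
  <input type="email" name="email" placeholder="Email address" />
  <input type="url" name="website" placeholder="Web site" />
  <button type="submit">Send</button>
</form>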

10 Must-Have Visual Studio Productivity Add-Ins I use every day and recommend to every .NET Developer

Visual Studio provides a rich extensibility model that developers at Microsoft and in the community have taken advantage of to provide a host of quality add-ins. Some add-ins contribute significant how-did-I-live-without-this functionality, while others just help you automate that small redundant task you constantly find yourself performing.
10 Must-Have Add-Ins

TestDriven.NET
GhostDoc
Smart Paster
CodeKeep
PInvoke.NET
VSWindowManager PowerToy
WSContractFirst
VSMouseBindings
CopySourceAsHTML
Cache Visualizer

In this article, I introduce you to some of the best Visual Studio add-ins available today that can be downloaded for free. I walk through using each of the add-ins, but because I am covering so many I only have room to introduce you to the basic functionality.
Each of these add-ins works with Visual Studio .NET 2003 and most of them already have versions available for Visual Studio 2005. If a Visual Studio 2005 version is not available as of the time of this writing, it should be shortly.

 

TestDriven.NET
Test-driven development is the practice of writing unit tests before you write code, and then writing the code to make those tests pass. By writing tests before you write code, you identify the exact behavior your code should exhibit and, as a bonus, at the end you have 100 percent test coverage, which makes extensive refactoring possible.
NUnit gives you the ability to write unit tests using a simple syntax and then execute those tests one by one or all together against your app. If you are using Visual Studio Team System, you have unit testing functionality built into the Visual Studio IDE. Before Visual Studio Team System, there was TestDriven.NET, an add-in that integrates NUnit directly into the Visual Studio IDE. If you are using a non-Team System version of Visual Studio 2005 or Visual Studio .NET 2003, TestDriven.NET is, in my opinion, still the best solution available.
TestDriven.NET adds unit testing functionality directly to the Visual Studio IDE. Instead of writing a unit test, switching over to the NUnit GUI tool, running the test, then switching back to the IDE to code, and so on, you can do it all right in the IDE.

 

Figure 1 New Testing Options from TestDriven.NET 
After installing TestDriven.NET you will find a number of new menu items on the right-click context menu as shown in Figure 1. You can right-click directly on a unit test and run it. The results will be displayed in the output window as shown in Figure 2.

 

Figure 2 Output of a Unit Test 
While executing unit tests in the IDE is invaluable by itself, perhaps the best feature is that you can also quickly launch into the debugger by right-clicking on a test and selecting Test With | Debugger. This will launch the debugger and then execute your unit tests, hitting any breakpoints you have set in those tests.
In fact, it doesn’t even have to be a unit test for TestDriven.NET to execute it. You could just as easily test any public method that returns void. This means that if you are testing an old app and need to walk through some code, you can write a quick test and execute it right away.
TestDriven.NET is an essential add-in if you work with unit tests or practice test-driven development. (If you don’t already, you should seriously consider it.) TestDriven.NET was written by Jamie Cansdale and can be downloaded from http://www.testdriven.net.

 

GhostDoc
XML comments are invaluable tools when documenting your application. Using XML comments, you can mark up your code and then, using a tool like NDoc, you can generate help files or MSDN-like Web documentation based on those comments. The only problem with XML documentation is the time it takes to write it; you often end up writing similar statements over and over again. The goal of GhostDoc is to automate the tedious parts of writing XML comments by looking at the name of your class or method, as well as any parameters, and making an educated guess as to how the documentation should appear based on recommended naming conventions. This is not a replacement for writing thorough documentation of your business rules and providing examples, but it will automate the mindless part of your documentation generation.
For instance consider the method shown here:

private void SavePerson(Person person) { }

After installing GhostDoc, you can right-click on the method declaration and choose Document this. The following comments will then be added to your document:

/// <summary>
/// Saves the person.
/// </summary>
/// <param name="person">Person.</param>
private void SavePerson(Person person) { }

As you can see, GhostDoc has automatically generated a summary based on how the method was named and has also populated the parameter comments. Don’t stop here; you should add additional comments stating where the person is being saved to or perhaps give an example of creating and saving a person. Here is my comment after adding some additional information by hand:

/// <summary>
/// Saves a person using the configured persistence provider.
/// </summary>
/// <param name="person">The Person to be saved</param>
private void SavePerson(Person person) { }
Adding these extra comments is much easier since the basic, redundant portion is automatically generated by GhostDoc. GhostDoc also includes options that allow you to modify existing rules and add additional rules that determine what kind of comments should be generated.
GhostDoc was written by Roland Weigelt and can be downloaded from http://www.roland-weigelt.de/ghostdoc.

 

Smart Paster
Strings play a large role in most applications, whether they are comments being used to describe the behavior of the system, messages being sent to the user, or SQL statements that will be executed. One of the frustrating parts of working with strings is that they never seem to paste correctly into the IDE. When you are pasting comments, the strings might be too long or not aligned correctly, leaving you to spend time inserting line breaks, comment characters, and tabbing. When working with strings that will actually be concatenated, you have to do even more work, usually separating the parts of the string and inserting concatenation symbols or using a string builder.
The Smart Paster add-in helps to limit some of this by providing a number of commands on the right-click menu that let you paste a string from the clipboard into Visual Studio using a certain format. After installing Smart Paster, you will see the new paste options available on the right-click menu (see Figure 3).

 

Figure 3 String Pasting Options from Smart Paster 
For instance, you might have the following string detailing some of your business logic:

To update a person record, a user must be a member of the customer service group or the manager group. After the person has been updated, a letter needs to be generated to notify the customer of the information change.

You can copy and paste this into Visual Studio using the Paste As | Comment option, and you would get the following:

//To update a person record a user must be a member of the customer
//service group or the manager group. After the person has been updated
//a letter needs to be generated to notify the customer of the
//information change.
The correct comment characters and carriage returns are automatically inserted (you can configure at what length to insert a character return). If you were inserting this text without the help of Smart Paster, it would paste as one long line, forcing you to manually add all the line breaks and comment characters. As another example, let’s say you have the following error message that you need to insert values into at run time:

You do not have the correct permissions to perform <insert action>. You must be a member of the <insert group> to perform this action.

Using the Paste As | StringBuilder command, you can insert this string as a StringBuilder into Visual Studio. The results would look like this:

StringBuilder stringBuilder = new StringBuilder(134);
stringBuilder.AppendFormat(
    @"You do not have the correct permissions to ");
stringBuilder.AppendFormat(
    @"perform <insert action>. You must be a member of ");
stringBuilder.AppendFormat(
    @"the <insert group> to perform this action.");

It would then simply be a matter of modifying the code to replace the variable sections of the string:

StringBuilder stringBuilder = new StringBuilder(134);
stringBuilder.AppendFormat(
    @"You do not have the correct permissions to ");
stringBuilder.AppendFormat(
    @"perform {0}. You must be a member of ", action);
stringBuilder.AppendFormat(
    @"the {0} to perform this action.", group);

Smart Paster is a time-saving add-in that eliminates a lot of the busy work associated with working with strings in Visual Studio. It was written by Alex Papadimoulis.

 

CodeKeep
Throughout the process of software development, it is common to reuse small snippets of code. Perhaps you reuse an example of how to get an enum value from a string or a starting point on how to implement a certain pattern in your language of choice.
Visual Studio offers some built-in functionality for working with code snippets, but it assumes a couple of things. First, it assumes that you are going to store all of your snippets on your local machine, so if you switch machines or move jobs you have to remember to pack up your snippets and take them with you. Second, these snippets can only be viewed by you. There is no built-in mechanism for sharing snippets between users, groups, or the general public.
This is where CodeKeep comes to the rescue. CodeKeep is a Web application that provides a place for people to create and share snippets of code in any language. The true usefulness of CodeKeep is its Visual Studio add-in, which allows you to search quickly through the CodeKeep database, as well as submit your own snippets.
After installing CodeKeep, you can search the existing code snippets by selecting Tools | CodeKeep | Search, and then using the search screen shown in Figure 4.

 

Figure 4 Searching Code Snippets with CodeKeep 
From this screen you can view your own snippets or search all of the snippets that have been submitted to CodeKeep. When searching for snippets, you see all of the snippets that other users have submitted and marked as public (you can also mark code as private if you want to hide your bad practices). If you find the snippet you are looking for, you can view its details and then quickly copy it to the clipboard to insert into your code.
You can also quickly and easily add your own code snippets to CodeKeep by selecting the code you want to save, right-clicking, and then selecting Send to CodeKeep. This will open a new screen that allows you to wrap some metadata around your snippet, including comments, what language it is written in, and whether it should be private or public for all to see.
Whenever you write a piece of code and you can imagine needing to use it in the future, simply take a moment to submit it; this way, you won’t have to worry about managing your snippets or rewriting them in the future. Since CodeKeep stores all of your snippets on the server, they are centralized so you don’t have to worry about moving your code from system to system or job to job.
CodeKeep was written by Arcware’s Dave Donaldson and is available from http://www.codekeep.net.

 

PInvoke.NET
P/Invoke is the mechanism for calling unmanaged Win32 APIs from within the .NET Framework. One of the hard parts of using P/Invoke is determining the method signature you need to use; this can often be an exercise in trial and error. Sending incorrect data types or values to an unmanaged API can often result in memory leaks or other unexpected results.
PInvoke.NET is a wiki that can be used to document the correct P/Invoke signatures to be used when calling unmanaged Win32 APIs. A wiki is a collaborative Web site that anyone can edit, which means there are thousands of signatures, examples, and notes about using P/Invoke. Since the wiki can be edited by anyone, you can contribute as well as make use of the information there.
While the wiki and the information stored there are extremely valuable, what makes them most valuable is the PInvoke.NET Visual Studio add-in. Once you have downloaded and installed the PInvoke.NET add-in, you will be able to search for signatures as well as submit new content from inside Visual Studio. Simply right-click on your code file and you will see two new context items: Insert PInvoke Signatures and Contribute PInvoke Signatures and Types.

Figure 5 Using PInvoke.NET 
When you choose Insert PInvoke Signatures, you’ll see the dialog box shown in Figure 5. Using this simple dialog box, you can search for the function you want to call. Optionally, you can include the module that this function is a part of. Now, a crucial part of all major applications is the ability to make the computer Beep. So I will search for the Beep function and see what shows up. The results can be seen in Figure 6.

Figure 6 Finding the Beep Function in PInvoke.NET 
The wiki also suggests alternative managed APIs, letting you know that there is a new method, System.Console.Beep, in the .NET Framework 2.0.
There is also a link at the bottom of the dialog box that will take you to the corresponding page on the wiki for the Beep method. In this case, that page includes documentation on the various parameters that can be used with this method as well as some code examples on how to use it.
After selecting the signature you want to insert, click the Insert button and it will be placed into your code document. In this example, the following code would be automatically created for you:

[DllImport("kernel32.dll", SetLastError=true)] [return: MarshalAs(UnmanagedType.Bool)] static extern bool Beep( uint dwFreq, uint dwDuration);

You then simply need to write a call to this method and your computer will be beeping in no time.

The PInvoke.NET wiki and Visual Studio add-in take away a lot of the pain and research time sometimes involved when working with the Win32 API from managed code. The wiki can be accessed at http://www.pinvoke.net, and the add-in can be downloaded from the Helpful Tools link found in the bottom-left corner of the site.

 

VSWindowManager PowerToy
The Visual Studio IDE includes a huge number of different windows, all of which are useful at different times. If you are like me, you have different window layouts that you like to use at various points in your dev work. When I am writing HTML, I like to hide the toolbox and the task list window. When I am designing forms, I want to display the toolbox and the task list. When I am writing code, I like to hide all the windows except for the task list. Having to constantly open, close, and move windows based on what I am doing can be both frustrating and time consuming.
Visual Studio includes the concept of window layouts. You may have noticed that when you start debugging, the windows will automatically go back to the layout they were in the last time you were debugging. This is because Visual Studio includes a normal and a debugging window layout.
Wouldn’t it be nice if there were additional layouts you could use for when you are coding versus designing? Well, that is exactly what VSWindowManager PowerToy does.
After installing VSWindowManager PowerToy, you will see some new options in the Window menu as shown in Figure 7.

 

Figure 7 VSWindowManager Layout Commands 
The Save Window Layout As menu provides commands that let you save the current layout of your windows. To start using this power toy, set up your windows the way you like them for design and then navigate to the Window | Save Window Layout As | My Design Layout command. This will save the current layout. Do the same for your favorite coding layout (selecting My Coding Layout), and then for up to three different custom layouts.
VSWindowManager will automatically switch between the design and coding layouts depending on whether you are currently viewing a designer or a code file. You can also use the commands on the Apply Window Layout menu to choose from your currently saved layouts. When you select one of the layouts you have saved, VSWindowManager will automatically hide, show, and rearrange windows so they are in the exact same layout as before.
VSWindowManager PowerToy is very simple, but can save you a lot of time and frustration. VSWindowManager is available from vswindowmanager.codeplex.com/.

 

WSContractFirst
Visual Studio makes creating Web services deceptively easy. You simply create an .asmx file, add some code, and you are ready to go. ASP.NET can then create a Web Services Description Language (WSDL) file used to describe behavior and message patterns for your Web service.
There are a couple problems with letting ASP.NET generate this file for you. The main issue is that you are no longer in control of the contract you are creating for your Web service. This is where contract-first development comes to the rescue. Contract-first development, also called contract-driven development, is the practice of writing the contract (the WSDL file) for your Web service before you actually write the Web service itself. By writing your own WSDL file, you have complete control over how your Web service will be seen and used, including the interface and data structures that are exposed.
Writing a WSDL document is not a lot of fun. It’s kind of like writing a legal contract, but using lots of XML. This is where the WSContractFirst add-in comes into play. WSContractFirst makes it easier to write your WSDL file, and will generate client-side and server-side code for you, based on that WSDL file. You get the best of both worlds: control over your contract and the rapid development you are used to from Visual Studio style services development.
The first step to using WSContractFirst is to create your XML schema files. These files will define the message or messages that will be used with your Web services. Visual Studio provides an easy-to-use GUI to define your schemas; this is particularly helpful since this is one of the key steps of the Web service development process. Once you have defined your schemas, you simply need to right-click on one of them and choose Create WSDL Interface Description. This will launch the Generate WSDL Wizard, the first step of which is shown in Figure 8.

Figure 8 Building a WSDL File with WSContractFirst  
Step 1 collects the basics about your service, including its name, namespace, and documentation. Step 2 allows you to specify the .xsd files you want to include in your service. The schema you selected to launch this wizard is included by default. Step 3 allows you to specify the operations of your service. You can name the operation as well as specify whether it is a one-way or request/response operation. Step 4 gives you the opportunity to enter the details for the operations and messages. Step 5 allows you to specify whether a <service> element should be created and whether or not to launch the code generation dialog automatically when this wizard is done. Step 6 lets you specify alternative .xsd paths. Once the wizard is complete, your new WSDL file is added to your project.
Now that you have your WSDL file, there are a couple more things WSContractFirst can do for you. To launch the code generation portion of WSContractFirst, you simply need to right-click on your WSDL file and select Generate Web Service Code. This will launch the code generation dialog box shown in Figure 9.

Figure 9 WSContractFirst Code Generation Options 
You can choose to generate a client-side proxy or a service-side stub, as well as configure some other options about the code and what features it should include. Using these code generation features helps speed up development tremendously.
If you are developing Web services using Visual Studio you should definitely look into WSContractFirst and contract-first development. WSContractFirst was written by Thinktecture’s Christian Weyer.

 

VSMouseBindings
Your mouse probably has five buttons, so why are you only using three of them? The VSMouseBindings power toy provides an easy-to-use interface that lets you assign each of your mouse buttons to a Visual Studio command.
VSMouseBindings makes extensive use of the command pattern. You can bind mouse buttons to various commands, such as open a new file, copy the selected text to the clipboard, or just about anything else you can do in Visual Studio. After installing VSMouseBindings you will see a new section in the Options dialog box called VsMouseBindings. The interface can be seen in Figure 10.

Figure 10 VSMouseBindings Options for Visual Studio 
As you can see, you can select a command for each of the main buttons. You probably shouldn’t mess with the left and right mouse buttons, though, as their normal functionality is pretty useful. The back and forward buttons, however, are begging to be assigned to different commands. If you enjoy having functionality similar to a browser’s back and forward buttons, you can set these buttons to the Navigate.Backward and Navigate.Forward commands in Visual Studio.
The Use this mouse shortcut in menu lets you set the scope of your settings. This means you can configure different settings when you are in the HTML designer as opposed to when you are working in the source editor.
VSMouseBindings is available from archive.msdn.microsoft.com/VSMouseBindings.

 

CopySourceAsHTML
Code is much easier to read when its different parts are differentiated by color. Reading code in Visual Studio is generally much easier than trying to read it in an editor like Notepad.
Chances are you have your own blog by now, or at least have spent some time reading them. Normally, when you try to post a code snippet to your blog it ends up as plain old text, which isn’t the easiest thing to read. This is where the CopySourceAsHTML add-in comes into play. It allows you to copy code as HTML, meaning you can easily post it to your blog or Web site and retain the coloring applied by Visual Studio.
After installing the CopySourceAsHTML add-in, simply select the code you want to copy and then select the Copy Source as HTML command from the right-click menu. After selecting this option you will see the dialog box shown in Figure 11.

Figure 11 Options for CopySourceAsHTML 

From here you can choose what kind of HTML view you want to create. This can include line numbers, specific tab sizes, and many other settings. After clicking OK, the HTML is saved to the clipboard. For instance, suppose you were starting with the following code snippet inside Visual Studio:

private long Add(int d, int d2)
{
    return (long) d + d2;
}
Figure 12 HTML Formatted Code  
After you select Copy As HTML and configure the HTML to include line numbers, this code will look like Figure 12 in the browser. Anything that makes it easier to share and understand code benefits all of us as it means more people will go to the trouble of sharing knowledge and learning from each other.
CopySourceAsHTML was written by Colin Coller and can be downloaded from copysourceashtml.codeplex.com/.

 

Cache Visualizer
Visual Studio 2005 includes a new debugging feature called visualizers, which can be used to create a human-readable view of data for use during the debugging process. Visual Studio 2005 includes a number of debugger visualizers by default, most notably the DataSet visualizer, which provides a tabular interface to view and edit the data inside a DataSet. While the default visualizers are very valuable, perhaps the best part of this new interface is that it is completely extensible. With just a little bit of work you can write your own visualizers to make debugging that much easier.
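To give a feel for how little code a visualizer needs, here is a minimal sketch of a custom visualizer. The class name, target type, and message-box UI are all illustrative, and it assumes references to Microsoft.VisualStudio.DebuggerVisualizers and System.Windows.Forms; a real visualizer would typically use the supplied window service to host a richer form.

using System.Diagnostics;
using System.Windows.Forms;
using Microsoft.VisualStudio.DebuggerVisualizers;

// Register the visualizer for string targets (hypothetical example).
[assembly: DebuggerVisualizer(
    typeof(MyVisualizers.TextVisualizer),
    typeof(VisualizerObjectSource),
    Target = typeof(string),
    Description = "Simple Text Visualizer")]

namespace MyVisualizers
{
    // Shows the object being debugged in a simple dialog.
    public class TextVisualizer : DialogDebuggerVisualizer
    {
        protected override void Show(IDialogVisualizerService windowService,
                                     IVisualizerObjectProvider objectProvider)
        {
            // Fetch the object being debugged and display it.
            // (windowService could instead host a custom Form.)
            object data = objectProvider.GetObject();
            MessageBox.Show(data.ToString(), "Text Visualizer");
        }
    }
}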
While a lot of users will write visualizers for their own custom complex types, some developers are already posting visualizers for parts of the Framework. I am going to look at one of the community-built visualizers that is already available and how it can be used to make debugging much easier.
The ASP.NET Cache represents a collection of objects that are being stored for later use. Each object has some settings wrapped around it, such as how long it will be cached for or any cache dependencies. There is no easy way while debugging to get an idea of what is in the cache, how long it will be there, or what it is watching. Brett Johnson saw that gap and wrote Cache Visualizer to examine the ASP.NET cache.
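To see where such entries come from, here is a minimal sketch of code that adds an item to the ASP.NET cache with an absolute expiration and a file dependency, the same settings the visualizer displays. The key, value, and file path are illustrative.

using System;
using System.Web;
using System.Web.Caching;

public static class CacheDemo
{
    // Assumes it is called from within an ASP.NET request.
    public static void AddToCache(HttpContext context)
    {
        context.Cache.Insert(
            "siteSettings",                                                // key shown under Public Cache Entries
            "cached value",                                                // the object being stored
            new CacheDependency(context.Server.MapPath("~/web.config")),   // file dependency
            DateTime.UtcNow.AddMinutes(10),                                // absolute expiration
            Cache.NoSlidingExpiration);                                    // no sliding expiration
    }
}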
Once you have downloaded and installed the visualizer you will see a new icon appear next to the cache object in your debug windows, as shown in Figure 13.

Figure 13 Selecting Cache Visualizer While Debugging 
When you click the magnifying glass to use the Cache Visualizer, a dialog box appears with information about all of the objects currently stored in the ASP.NET cache, as you can see in Figure 14.

Figure 14 Cache Visualizer Shows Objects in the ASP.NET Cache 
Under Public Cache Entries, you can see the entries that I have added to the cache. The entries under Private Cache Entries are ones added by ASP.NET. Note that you can see the expiration information as well as the file dependency for the cache entry.
The Cache Visualizer is a great tool when you are working with ASP.NET. It is also representative of some of the great community-developed visualizers we will see in the future.

 

Wrapping It Up
This article has been a quick tour of some of the best freely available add-ins for Visual Studio. Each of these add-ins may only do a small thing, but together they help increase your productivity and enable you to write better code. While this article has been dedicated to freely available add-ins, there are also a host of add-ins that can be purchased for a reasonable price. I encourage you to check out those options as well, as in some cases they add tremendous functionality to the IDE.

Latest Enterprise Library Patterns & Practices Resources

Microsoft Enterprise Library is a collection of reusable software components (application blocks) addressing common cross-cutting concerns. Each application block is now hosted in its own repository, and this site serves as a hub for the entire Enterprise Library. The latest source code here includes only the sample application, which utilizes all of the application blocks.

Enterprise Library Conceptual Architecture

Enterprise Library is actively developed by the patterns & practices team in collaboration with the community. Together we are dedicated to building application blocks that help accelerate developers’ productivity on Microsoft platforms.


Design Pattern Automation

Software development projects are becoming bigger and more complex every day, and the more complex a project, the more likely the cost of developing and maintaining the software will far outweigh the hardware cost.

There’s a super-linear relationship between the size of software and the cost of developing and maintaining it. After all, large and complex software requires good engineers to develop and maintain it and good engineers are hard to come by and expensive to keep around.

 

Despite the high total cost of ownership per line of code, a lot of boilerplate code is still being written, much of which could be avoided with smarter compilers. Indeed, most boilerplate stems from the repetitive implementation of design patterns. Some of these design patterns are so well understood that they could be implemented automatically, if only we could teach them to compilers.

Implementing the Observer pattern

Take, for instance, the Observer pattern. This design pattern was identified as early as 1995 and became the base of the successful Model-View-Controller architecture. Elements of this pattern were implemented in the first versions of Java (1995, Observable interface) and .NET (2001, INotifyPropertyChanged interface). Although the interfaces are a part of the framework, they still need to be implemented manually by developers.

The INotifyPropertyChanged interface simply contains one event named PropertyChanged, which needs to be signaled whenever a property of the object is set to a different value.

Let’s have a look at a simple example in .NET:

using System.ComponentModel;

public class Person : INotifyPropertyChanged
{
    string firstName, lastName;

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        if ( this.PropertyChanged != null )
        {
            this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    public string FirstName
    {
        get { return this.firstName; }
        set
        {
            this.firstName = value;
            this.OnPropertyChanged("FirstName");
            this.OnPropertyChanged("FullName");
        }
    }

    public string LastName
    {
        get { return this.lastName; }
        set
        {
            this.lastName = value;
            this.OnPropertyChanged("LastName");
            this.OnPropertyChanged("FullName");
        }
    }

    public string FullName
    {
        get { return string.Format("{0} {1}", this.firstName, this.lastName); }
    }
}

Properties eventually depend on a set of fields, and we have to raise the PropertyChanged for a property whenever we change a field that affects it.
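For example, a consumer subscribing to the event sees a notification for every property affected by a single change; this is a small usage sketch of the Person class above (the Demo class and the value "Ada" are illustrative):

using System;

class Demo
{
    static void Main()
    {
        var person = new Person();
        person.PropertyChanged += (sender, e) =>
            Console.WriteLine("Changed: " + e.PropertyName);

        // Setting one field-backed property raises notifications for both
        // FirstName and the dependent FullName property.
        person.FirstName = "Ada";   // prints "Changed: FirstName" then "Changed: FullName"
    }
}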

Shouldn’t it be possible for the compiler to do this work for us automatically? The long answer is that detecting dependencies between fields and properties is a daunting task if we consider all the corner cases that can occur: properties can depend on fields of other objects, they can call other methods, or, even worse, they can call virtual methods or delegates unknown to the compiler. So there is no general solution to this problem, at least not if we expect compilation times measured in seconds or minutes rather than hours or days. However, in real life, a large share of properties are simple enough to be fully understood by a compiler. So the short answer is yes: a compiler could generate the notification code for more than 90% of all properties in a typical application.

In practice, the same class could be implemented as follows:

[NotifyPropertyChanged]
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string FullName { get { return string.Format("{0} {1}", this.FirstName, this.LastName); } }
}

This code tells the compiler what to do (implement INotifyPropertyChanged) and not how to do it.

Boilerplate Code is an Anti-Pattern

The Observer (INotifyPropertyChanged) pattern is just one example of a pattern that causes a lot of boilerplate code in large applications, but a typical code base is full of patterns that generate boilerplate. Even when they are not recognized as “official” design patterns, they are patterns because they repeat massively throughout a code base. The most common causes of code repetition are:

  • Tracing, logging
  • Precondition and invariant checking
  • Authorization and audit
  • Locking and thread dispatching
  • Caching
  • Change tracking (for undo/redo)
  • Transaction handling
  • Exception handling

These features are difficult to encapsulate using normal object-oriented techniques, which is why they are so often implemented with boilerplate code. Is that such a bad thing?

Yes.

Addressing cross-cutting concerns with boilerplate code leads to violations of fundamental principles of good software engineering:

  • The Single Responsibility Principle is violated when multiple concerns are being implemented in the same method, such as Validation, Security, INotifyPropertyChanged, and Undo/Redo in a single property setter.
  • The Open/Closed Principle, which states that software entities should be open for extension, but closed for modification, is best respected when new features can be added without modifying the original source code.
  • The Don’t Repeat Yourself principle abhors code repetition coming out of manual implementation of design patterns.
  • The Loose Coupling principle is infringed when a pattern is implemented manually, because it then becomes difficult to alter the implementation of that pattern. Note that coupling can occur not only between two components, but also between a component and a conceptual design. Swapping one library for another is usually easy if they share the same conceptual design, but adopting a different design requires many more modifications of source code.

Additionally, boilerplate renders your code:

  • Harder to read and reason about when trying to understand what it is doing to address the functional requirement. This added layer of complexity has a huge bearing on the cost of maintenance, considering that software maintenance consists of reading code 75% of the time!
  • Larger, which means not only lower productivity, but also higher cost of developing and maintaining the software, not counting a higher risk of introducing bugs.
  • Difficult to refactor and change. Changing a piece of boilerplate (to fix a bug, perhaps) requires changing every place where that boilerplate has been applied. How do you even accurately identify where the boilerplate is used throughout a codebase that potentially spans many solutions and/or repositories? Find-and-replace…?

If left unchecked, boilerplate code has the nasty habit of growing around your code like a vine, taking over more space each time it is applied to a new method, until you eventually end up with a large codebase almost entirely covered by boilerplate. In one of my previous teams, a simple data access layer class had over a thousand lines of code, 90% of which was boilerplate to handle different types of SQL exceptions and retries.
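As a hypothetical sketch of the kind of data access method described above (the repository, query, and connection string are all illustrative), note how the retry and exception-handling boilerplate dwarfs the single line of real work:

using System;
using System.Data.SqlClient;
using System.Diagnostics;

public class CustomerRepository
{
    private readonly string connectionString;   // illustrative

    public CustomerRepository(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public string GetCustomerName(int customerId)
    {
        const int maxRetries = 3;
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT Name FROM Customers WHERE Id = @id", connection))
                {
                    command.Parameters.AddWithValue("@id", customerId);
                    connection.Open();
                    return (string)command.ExecuteScalar();   // the only "business" line
                }
            }
            catch (SqlException ex)
            {
                // Boilerplate: classify the error, log it, and retry on timeouts only.
                if (ex.Number != -2 || attempt >= maxRetries)
                {
                    throw;
                }
                Trace.TraceWarning("Attempt {0} timed out, retrying...", attempt);
            }
        }
    }
}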

I hope by now you see why using boilerplate code is a terrible way to implement patterns. It is actually an anti-pattern to be avoided because it leads to unnecessary complexity, bugs, expensive maintenance, loss of productivity and ultimately, higher software cost.

Design Pattern Automation and Compiler Extensions

In so many cases the struggle with making common boilerplate code reusable stems from the lack of native meta-programming support in mainstream statically typed languages such as C# and Java.

The compiler is in possession of an awful lot of information about our code normally outside our reach. Wouldn’t it be nice if we could benefit from this information and write compiler extensions to help with our design patterns?

A smarter compiler would allow for:

  1. Build-time program transformation: to allow us to add features whilst preserving the code semantics and keeping the complexity and number of lines of code in check, so we can automatically implement parts of a design pattern that can be automated;
  2. Static code validation: for build-time safety to ensure we have used the design pattern correctly or to check parts of a pattern that cannot be automated have been implemented according to a set of predefined rules.

Example: ‘using’ and ‘lock’ keywords in C#

If you want proof that design patterns can be supported directly by the compiler, look no further than the using and lock keywords. At first sight they are purely redundant additions to the language, but the language designers recognized their importance and created specific keywords for them.

Let’s have a look at the using keyword. The keyword is actually a part of the larger Disposable Pattern, composed of the following participants:

  • Resource Objects are objects that consume an external resource, such as a database connection.
  • Resource Consumers are instruction blocks or objects that consume Resource Objects during a given lifetime.

The Disposable Pattern is ruled by the following principles:

  1. Resource Objects must implement IDisposable.
  2. Implementation of IDisposable.Dispose must be idempotent, i.e. may be safely called several times.
  3. Resource Objects must have a finalizer (called destructor in C++).
  4. Implementation of IDisposable.Dispose must call GC.SuppressFinalize.
  5. Generally, objects that store Resource Objects into their state (field) are also Resource Objects, and children Resource Objects should be disposed by the parent.
  6. Instruction blocks that allocate and consume a Resource Object should be enclosed with the using keyword (unless the reference to the resource is stored in the object state, see previous point).
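As a minimal sketch of rules 1 through 4 (the wrapped native handle is hypothetical and error handling is omitted), a Resource Object might look like this:

using System;

public class NativeFileHandle : IDisposable
{
    private IntPtr handle;        // the external resource (illustrative)
    private bool disposed;        // makes Dispose idempotent (rule 2)

    public NativeFileHandle(IntPtr handle)
    {
        this.handle = handle;
    }

    // Rule 3: Resource Objects must have a finalizer.
    ~NativeFileHandle()
    {
        Dispose(false);
    }

    // Rule 1: implement IDisposable.
    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);   // rule 4
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;        // rule 2: safe to call several times
        // release the external resource here
        handle = IntPtr.Zero;
        disposed = true;
    }
}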

As you can see, the Disposable Pattern is richer than it appears at first sight. How is this pattern being automated and enforced?

  • The core .NET library provides the IDisposable interface.
  • The C# compiler provides the using keyword, which automates generation of some source code (a try/finally block).
  • FxCop can enforce a rule that says any disposable class also implements a finalizer, and the Dispose method calls GC.SuppressFinalize.

Therefore, the Disposable Pattern is a perfect example of a design pattern directly supported by the .NET platform.
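To make the using expansion concrete, here is roughly the code the compiler generates for a using block. This is a simplified sketch; the real expansion also accounts for value types and nested resources.

using System;
using System.IO;

class UsingExpansionDemo
{
    static void WhatYouWrite()
    {
        using (var reader = new StringReader("hello"))
        {
            Console.WriteLine(reader.ReadLine());
        }
    }

    static void WhatTheCompilerEmits()
    {
        var reader = new StringReader("hello");
        try
        {
            Console.WriteLine(reader.ReadLine());
        }
        finally
        {
            // Dispose is guaranteed to run, even if an exception is thrown.
            if (reader != null)
            {
                ((IDisposable)reader).Dispose();
            }
        }
    }
}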

But what about patterns not intrinsically supported? They can be implemented using a combination of class libraries and compiler extensions. Our next example also comes from Microsoft.

Example: Code Contracts

Checking preconditions (and optionally postconditions and invariants) has long been recognized as a best practice to prevent defects in one component causing symptoms in another component. The idea is:

  • every component (every class, typically) should be designed as a “cell”;
  • every cell is responsible for its own health; therefore,
  • every cell should check any input it receives from other cells.

Precondition checking can be considered a design pattern because it is a repeatable solution to a recurring problem.

Microsoft Code Contracts (http://msdn.microsoft.com/en-us/devlabs/dd491992.aspx) is a perfect example of design pattern automation. Based on plain-old C# or Visual Basic, it gives you an API for expressing validation rules in the form of pre-conditions, post-conditions, and object invariants. However, this API is not just a class library. It translates into build-time transformation and validation of your program.

I won’t delve into too much detail on Code Contracts; simply put, it allows you to specify validation rules in code which can be checked at build time as well as at run time. For example:

public Book GetBookById(Guid id)
{
Contract.Requires(id != Guid.Empty);
return Dal.Get<Book>(id);
}

public Author GetAuthorById(Guid id)
{
Contract.Requires(id != Guid.Empty);

return Dal.Get<Author>(id);
}

Its binary rewriter can (based on your configurations) rewrite your built assembly and inject additional code to validate the various conditions that you have specified. If you inspect the transformed code generated by the binary rewriter you will see something along the lines of:

public Book GetBookById(Guid id)
{
    if (__ContractsRuntime.insideContractEvaluation <= 4)
    {
        try
        {
            ++__ContractsRuntime.insideContractEvaluation;
            __ContractsRuntime.Requires(id != Guid.Empty, (string)null, "id != Guid.Empty");
        }
        finally
        {
            --__ContractsRuntime.insideContractEvaluation;
        }
    }
    return Dal.Get<Program.Book>(id);
}

public Author GetAuthorById(Guid id)
{
    if (__ContractsRuntime.insideContractEvaluation <= 4)
    {
        try
        {
            ++__ContractsRuntime.insideContractEvaluation;
            __ContractsRuntime.Requires(id != Guid.Empty, (string)null, "id != Guid.Empty");
        }
        finally
        {
            --__ContractsRuntime.insideContractEvaluation;
        }
    }
    return Dal.Get<Program.Author>(id);
}

For more information on Microsoft Code Contracts, please read Jon Skeet’s excellent InfoQ article here (http://www.infoq.com/articles/code-contracts-csharp).

Whilst compiler extensions such as Code Contracts are great, officially supported extensions usually take years to develop, mature, and stabilize. There are so many different domains, each with its own set of problems, that it is impossible for official extensions to cover them all.

What we need is a generic framework to help automate and enforce design patterns in a disciplined way so we are able to tackle domain-specific problems effectively ourselves.

Generic Framework to Automate and Enforce Design Patterns

It may be tempting to see dynamic languages, open compilers (such as Roslyn), or re-compilers (such as Cecil) as solutions because they expose the raw details of the abstract syntax tree. However, these technologies operate at too low a level of abstraction, making it very complex to implement anything but the simplest transformations.

What we need is a high-level framework for compiler extension, based on the following principles:

1. Provide a set of transformation primitives, for instance:

  • intercepting method calls;
  • executing code before and after method execution;
  • intercepting access to fields, properties, or events;
  • introducing interfaces, methods, properties, or events to an existing class.

2. Provide a way to express where the primitives should be applied: it’s good to be able to tell the compiler extension that you want to intercept some methods, but it’s even better if you can specify which methods should be intercepted!

3. Primitives must be safely composable

It’s natural to want to be able to apply multiple transformations to the same location(s) in our code, so the framework should give us the ability to compose transformations.

When you’re able to apply multiple transformations simultaneously some transformations might need to occur in a specific order in relation to others. Therefore the ordering of transformations needs to follow a well-defined convention but still allow us to override the default ordering where appropriate.

4. Semantics of enhanced code should not be affected

The transformation mechanism should be unobtrusive and leave the original code unaltered as much as possible whilst at the same time providing capabilities to validate the transformations statically. The framework should not make it too easy to “break” the intent of the source code.

5. Advanced reflection and validation abilities

By definition, a design pattern contains rules defining how it should be implemented. For instance, a locking design pattern may require that instance fields be accessed only from instance methods of the same object. The framework must offer a mechanism to query the methods accessing a given field, and a way to emit clean build-time errors.

Aspect-Oriented Programming

Aspect-Oriented Programming (AOP) is a programming paradigm that aims to increase modularity by allowing the separation of concerns.

An aspect is a special kind of class containing code transformations (called advice), code matching rules (barbarically called pointcuts), and code validation rules. Design patterns are typically implemented by one or several aspects. There are several ways to apply aspects to code, depending on the AOP framework. Custom attributes (annotations in Java) are a convenient way to add aspects to hand-picked elements of code. More complex pointcuts can be expressed declaratively using XML (e.g. Microsoft Policy Injection Application Block) or a domain-specific language (e.g. AspectJ or Spring), or programmatically using reflection (e.g. LINQ over System.Reflection with PostSharp).

The weaving process combines advice with the original source code at the specified locations (no less barbarically called joinpoints). It has access to metadata about the original source code so that, for compiled languages such as C# or Java, the static weaver can perform static analysis to ensure the validity of the advice in relation to the pointcuts where it is applied.

Although aspect-oriented programming and design patterns have been independently conceptualized, AOP is an excellent solution to those who seek to automate design patterns or enforce design rules. Unlike low-level metaprogramming, AOP has been designed according to the principles cited above so anyone, and not only compiler specialists, can implement design patterns.

AOP is a programming paradigm and not a technology. As such, it can be implemented using different approaches. AspectJ, the leading AOP framework for Java, is now implemented directly in the Eclipse Java compiler. In .NET, where compilers are not open-source, AOP is best implemented as a re-compiler, transforming the output of the C# or Visual Basic compiler. The leading tool in .NET is PostSharp (see below). Alternatively, a limited subset of AOP can be achieved using dynamic proxies and service containers, and most dependency injection frameworks are able to offer at least method interception aspects.

Example: Custom Design Patterns with PostSharp

PostSharp is a development tool for the automation and enforcement of design patterns in Microsoft .NET and features the most complete AOP framework for .NET.

To avoid turning this article into a PostSharp tutorial, let’s take a very simple pattern: dispatching of method execution back and forth between a foreground (UI) thread and a background thread. This pattern can be implemented using two simple aspects: one that dispatches a method to the background thread, and another that dispatches it to the foreground thread. Both aspects can be compiled by the free PostSharp Express. Let’s look at the first aspect: BackgroundThreadAttribute.

The generative part of the pattern is simple: we just need to create a Task that executes that method, and schedule execution of that Task.

[Serializable]
public sealed class BackgroundThreadAttribute : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        Task.Run( args.Proceed );
    }
}

The MethodInterceptionArgs class contains information about the context in which the method is invoked, such as the arguments and the return value. With this information, you will be able to invoke the original method, cache its return value, log its input arguments, or just about anything that’s required for your use case.
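For instance, a hypothetical caching aspect built on the same MethodInterceptionArgs information might be sketched as follows. It is only a sketch: it keys the cache on the first argument and has no eviction policy, and the attribute name is illustrative.

using System;
using System.Collections.Concurrent;
using System.Reflection;
using PostSharp.Aspects;

[Serializable]
public sealed class CacheByFirstArgumentAttribute : MethodInterceptionAspect
{
    [NonSerialized]
    private ConcurrentDictionary<object, object> cache;

    public override void RuntimeInitialize(MethodBase method)
    {
        // The aspect instance is deserialized at run time, so create the cache here.
        cache = new ConcurrentDictionary<object, object>();
    }

    public override void OnInvoke(MethodInterceptionArgs args)
    {
        object key = args.Arguments[0];
        object value;
        if (cache.TryGetValue(key, out value))
        {
            args.ReturnValue = value;      // short-circuit: skip the original method
            return;
        }
        args.Proceed();                    // invoke the original method
        cache[key] = args.ReturnValue;     // remember the result for next time
    }
}

Applied as [CacheByFirstArgument] to a method such as GetBookById, repeated calls with the same id would skip the underlying data access.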

For the validation part of the pattern, we would like to avoid having the custom attribute applied to methods that have a return value or a parameter passed by reference. If this happens, we would like to emit a build-time error. Therefore, we have to implement the CompileTimeValidate method in our BackgroundThreadAttribute class:

// Check that the method returns 'void' and has no out/ref argument.
public override bool CompileTimeValidate( MethodBase method )
{
    MethodInfo methodInfo = (MethodInfo) method;

    if ( methodInfo.ReturnType != typeof(void) ||
         methodInfo.GetParameters().Any( p => p.ParameterType.IsByRef ) )
    {
        ThreadingMessageSource.Instance.Write( method, SeverityType.Error, "THR006",
                                               method.DeclaringType.Name, method.Name );
        return false;
    }

    return true;
}

The ForegroundThreadAttribute would look similar, using the Dispatcher object in WPF or the BeginInvoke method in WinForms.
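For example, a WPF version might be sketched like this, assuming the application has a single UI dispatcher:

using System;
using System.Windows;
using PostSharp.Aspects;

[Serializable]
public sealed class ForegroundThreadAttribute : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        // Marshal the intercepted call onto the WPF UI thread.
        Application.Current.Dispatcher.Invoke(new Action(args.Proceed));
    }
}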

The above aspects can be applied just like any other attribute, for example:

[BackgroundThread]
private void ReadFile(string fileName)
{
    DisplayText( File.ReadAllText(fileName) );
}

[ForegroundThread]
private void DisplayText( string content )
{
    this.textBox.Text = content;
}

The resulting source code is much cleaner than what we would get by directly using tasks and dispatchers.

One may argue that C# 5.0 addresses the issue better with the async and await keywords. This is correct, and is a good example of the C# team identifying a recurring problem that they decided to address with a design pattern implemented directly in the compiler and in core class libraries. While the .NET developer community had to wait until 2012 for this solution, PostSharp offered one as early as 2006.

How long must the .NET community wait for solutions to other common design patterns, for instance INotifyPropertyChanged? And what about design patterns that are specific to your company’s application framework?

Smarter compilers would allow you to implement your own design patterns, so you would not have to rely on the compiler vendor to improve the productivity of your team.

Downsides of AOP

I hope by now you are convinced that AOP is a viable solution to automate design patterns and enforce good design, but it’s worth bearing in mind that there are several downsides too:

1. Lack of staff preparation

As a paradigm, AOP is not taught in undergraduate programs, and it is rarely touched on at master’s level. This lack of education has contributed to a lack of general awareness of AOP amongst the developer community.

Despite being 20 years old, AOP is misperceived as a ‘new’ paradigm, which often proves to be a stumbling block to adoption for all but the most adventurous development teams.

Design patterns are almost the same age, but the idea that design patterns can be automated and validated is recent. We cited some meaningful precedents in this article involving the C# compiler, the .NET class library, and Visual Studio Code Analysis (FxCop), but these precedents have not yet grown into a general call for design pattern automation.

2. Surprise factor

Because staff and students alike are not well prepared, there can be an element of surprise when they encounter AOP, because the application has additional behaviors that are not directly visible in the source code. Note: what is surprising is the intended effect of AOP, namely that the compiler is doing more than usual, not any side effect.

There can also be surprise at an unintended effect, when a bug in the use of an aspect (or in a pointcut) causes the transformation to be applied to unexpected classes and methods. Debugging such errors can be tricky, especially if the developer is not aware that aspects are being applied to the project.

These surprise factors can be addressed by:

  • IDE integration, which helps to visualize (a) which additional features have been applied to the source displayed in the editor and (b) to which elements of code a given aspect has been applied. At time of writing only two AOP frameworks provide correct IDE integration: AspectJ (with the AJDT plug-in for Eclipse) and PostSharp (for Visual Studio).
  • Unit testing by the developer – aspects, as well as the fact that aspects have been applied properly, must be unit tested like any other source code artifact.
  • Not relying on naming conventions when applying aspects to code, but instead relying on structural properties of the code such as type inheritance or custom attributes. Note that this debate is not unique to AOP: convention-based programming has been recently gaining momentum, although it is also subject to surprises.

3. Politics

Use of design pattern automation is generally a politically sensitive issue because it also addresses separation of concerns within a team. Typically, senior developers will select design patterns and implement aspects, and junior developers will use them. Senior developers will write validation rules to ensure hand-written code respects the architecture. The fact that junior developers don’t need to understand the whole code base is actually the intended effect.

This argument is typically delicate to tackle because it takes the point of view of a senior manager, and may injure the pride of junior developers.

Ready-Made Design Pattern Implementation with PostSharp Pattern Libraries

As we’ve seen with the Disposable Pattern, even seemingly simple design patterns can actually require complex code transformation or validation. Some of these transformations and validations are complex but still possible to implement automatically. Others can be too complex for automatic processing and must be done manually.

Fortunately, there are also simple design patterns that can be automated easily by anyone (exception handling, transaction handling, and security) with an AOP framework.

After many years of market experience, the PostSharp team realized that most customers were implementing the same aspects over and over again, and began to provide highly sophisticated and optimized ready-made implementations of the most common design patterns.

PostSharp currently provides ready-made implementations for the following design patterns:

  • Multithreading: reader-writer-synchronized threading model, actor threading model, thread-exclusive threading model, thread dispatching;
  • Diagnostics: high-performance and detailed logging to a variety of back-ends including NLog and Log4Net;
  • INotifyPropertyChanged: including support for composite properties and dependencies on other objects;
  • Contracts: validation of parameters, fields, and properties.

Now, with ready-made implementations of design patterns, teams can start enjoying the benefits of AOP without learning AOP.

Summary

So-called high-level languages such as Java and C# still force developers to write code at an irrelevant level of abstraction. Because of the limitations of mainstream compilers, developers are forced to write a lot of boilerplate code, adding to the cost of developing and maintaining applications. Boilerplate stems from massive implementation of patterns by hand, in what may be the largest use of copy-paste inheritance in the industry.

The inability to automate design pattern implementation probably costs billions to the software industry, not even counting the opportunity cost of having qualified software engineers spending their time on infrastructure issues instead of adding business value.

However, a large amount of boilerplate could be removed if we had smarter compilers to allow us to automate implementation of the most common patterns. Hopefully, future language designers will understand design patterns are first-class citizens of modern application development, and should have appropriate support in the compiler.

But actually, there is no need to wait for new compilers. They already exist, and are mature. Aspect-oriented programming was specifically designed to address the issue of boilerplate code. Both AspectJ and PostSharp are mature implementations of these concepts, and are used by the largest companies in the world. And both PostSharp and Spring Roo provide ready-made implementations of the most common patterns. As always, early adopters can get productivity gains several years before the masses follow.

What is Kendo UI

Kendo UI is an HTML5, jQuery-based framework for building modern web apps. The framework features lots of UI widgets, a rich data visualization framework, an auto-adaptive Mobile framework, and all of the tools needed for HTML5 app development, such as Data Binding, Templating, a Drag-and-Drop API, and more.


 

Kendo UI comes in different bundles:

  • Kendo UI Web – HTML5 widgets for desktop browsing experience.
  • Kendo UI DataViz – HTML5 data visualization widgets.
  • Kendo UI Mobile – HTML5 framework for building hybrid mobile applications.
  • Kendo UI Complete – includes Kendo UI Web, Kendo UI DataViz and Kendo UI Mobile.
  • Telerik UI for ASP.NET MVC – Kendo UI Complete plus ASP.NET MVC wrappers for Kendo UI Web, DataViz and Mobile.
  • Telerik UI for JSP – Kendo UI Complete plus JSP wrappers for Kendo UI Web and Kendo UI DataViz.
  • Telerik UI for PHP – Kendo UI Complete plus PHP wrappers for Kendo UI Web and Kendo UI DataViz.

Installing and Getting Started with Kendo UI

You can download all Kendo UI bundles from the download page.

The distribution zip file contains the following:

  • /examples – quick start demos.
  • /js – minified JavaScript files.
  • /src – complete source code. Not available in the trial distribution.
  • /styles – minified CSS files and theme images.
  • /wrappers – server-side wrappers. Available in Telerik UI for ASP.NET MVC, JSP or PHP.
  • changelog.html – Kendo UI release notes.

Using Kendo UI

To use Kendo UI in your HTML page you need to include the required JavaScript and CSS files.

Kendo UI Web

  1. Download Kendo UI Web and extract the distribution zip file to a convenient location.
  2. Copy the /js and /styles directories of the Kendo UI Web distribution to your web application root directory.
  3. Include the Kendo UI Web JavaScript and CSS files in the head tag of your HTML page. Make sure the common CSS file is registered before the theme CSS file. Also make sure only one combined script file is registered. For more information, please refer to the Javascript Dependencies page.
    <!-- Common Kendo UI Web CSS -->
    <link href="styles/kendo.common.min.css" rel="stylesheet" />
    <!-- Default Kendo UI Web theme CSS -->
    <link href="styles/kendo.default.min.css" rel="stylesheet" />
    <!-- jQuery JavaScript -->
    <script src="js/jquery.min.js"></script>
    <!-- Kendo UI Web combined JavaScript -->
    <script src="js/kendo.web.min.js"></script>
    
  4. Initialize a Kendo UI Web widget (the Kendo DatePicker in this example):
    <!-- HTML element from which the Kendo DatePicker would be initialized -->
    <input id="datepicker" />
    <script>
    $(function() {
        // Initialize the Kendo DatePicker by calling the kendoDatePicker jQuery plugin
        $("#datepicker").kendoDatePicker();
    });
    </script>
    

Here is the complete example:

<!DOCTYPE html>
<html>
    <head>
        <title>Kendo UI Web</title>
        <link href="styles/kendo.common.min.css" rel="stylesheet" />
        <link href="styles/kendo.default.min.css" rel="stylesheet" />
        <script src="js/jquery.min.js"></script>
        <script src="js/kendo.web.min.js"></script>
    </head>
    <body>
        <input id="datepicker" />
        <script>
            $(function() {
                $("#datepicker").kendoDatePicker();
            });
        </script>
    </body>
</html>

Kendo UI DataViz

  1. Download Kendo UI DataViz and extract the distribution zip file to a convenient location.
  2. Copy the /js and /styles directories of the Kendo UI DataViz distribution to your web application root directory.
  3. Include the Kendo UI DataViz JavaScript and CSS files in the head tag of your HTML page:
    <!-- Kendo UI DataViz CSS -->
    <link href="styles/kendo.dataviz.min.css" rel="stylesheet" />
    <!-- jQuery JavaScript -->
    <script src="js/jquery.min.js"></script>
    <!-- Kendo UI DataViz combined JavaScript -->
    <script src="js/kendo.dataviz.min.js"></script>
    
  4. Initialize a Kendo UI DataViz widget (the Kendo Radial Gauge in this example):
    <!-- HTML element from which the Kendo Radial Gauge would be initialized -->
    <div id="gauge"></div>
    <script>
    $(function() {
        $("#gauge").kendoRadialGauge();
    });
    </script>
    

Here is the complete example:

<!DOCTYPE html>
<html>
    <head>
        <title>Kendo UI DataViz</title>
        <link href="styles/kendo.dataviz.min.css" rel="stylesheet" />
        <script src="js/jquery.min.js"></script>
        <script src="js/kendo.dataviz.min.js"></script>
    </head>
    <body>
        <div id="gauge"></div>
        <script>
        $(function() {
            $("#gauge").kendoRadialGauge();
        });
        </script>
    </body>
</html>

Kendo UI Mobile

  1. Download Kendo UI Mobile and extract the distribution zip file to a convenient location.
  2. Copy the /js and /styles directories of the Kendo UI Mobile distribution to your web application root directory.
  3. Include the Kendo UI Mobile JavaScript and CSS files in the head tag of your HTML page:
    <!-- Kendo UI Mobile CSS -->
    <link href="styles/kendo.mobile.all.min.css" rel="stylesheet" />
    <!-- jQuery JavaScript -->
    <script src="js/jquery.min.js"></script>
    <!-- Kendo UI Mobile combined JavaScript -->
    <script src="js/kendo.mobile.min.js"></script>
    
  4. Initialize a Kendo Mobile Application:
    <!-- Kendo Mobile View -->
    <div data-role="view" data-title="View" id="index">
        <!--Kendo Mobile Header -->
        <header data-role="header">
            <!--Kendo Mobile NavBar widget -->
            <div data-role="navbar">
                <span data-role="view-title"></span>
            </div>
        </header>
        <!--Kendo Mobile ListView widget -->
        <ul data-role="listview">
          <li>Item 1</li>
          <li>Item 2</li>
        </ul>
        <!--Kendo Mobile Footer -->
        <footer data-role="footer">
            <!-- Kendo Mobile TabStrip widget -->
            <div data-role="tabstrip">
                <a data-icon="home" href="#index">Home</a>
                <a data-icon="settings" href="#settings">Settings</a>
            </div>
        </footer>
    </div>
    <script>
    // Initialize a new Kendo Mobile Application
    var app = new kendo.mobile.Application();
    </script>
    

Here is the complete example:

<!DOCTYPE html>
<html>
    <head>
        <title>Kendo UI Mobile</title>
        <link href="styles/kendo.mobile.all.min.css" rel="stylesheet" />
        <script src="js/jquery.min.js"></script>
        <script src="js/kendo.mobile.min.js"></script>
    </head>
    <body>
        <div data-role="view" data-title="View" id="index">
            <header data-role="header">
                <div data-role="navbar">
                    <span data-role="view-title"></span>
                </div>
            </header>
            <ul data-role="listview">
              <li>Item 1</li>
              <li>Item 2</li>
            </ul>
            <footer data-role="footer">
                <div data-role="tabstrip">
                    <a data-icon="home" href="#index">Home</a>
                    <a data-icon="settings" href="#settings">Settings</a>
                </div>
            </footer>
        </div>
        <script>
        var app = new kendo.mobile.Application();
        </script>
    </body>
</html>

Server-side wrappers

Kendo UI provides server-side wrappers for ASP.NET, PHP and JSP. These are classes (ASP.NET and PHP) or XML tags (JSP) that allow you to configure the Kendo UI widgets with server-side code.

You can find more info about the server-side wrappers here:

  • Get Started with Telerik UI for ASP.NET MVC
  • Get Started with Telerik UI for JSP
  • Get Started with Telerik UI for PHP

Next Steps

Kendo UI videos

You can watch the videos in the Kendo UI YouTube channel.

Kendo UI Dojo

A lot of interactive tutorials are available in the Kendo UI Dojo.

Further reading

  1. Kendo UI Widgets
  2. Data Attribute Initialization
  3. Requirements

Examples

  1. Online demos
  2. Code library projects
  3. Examples available on GitHub
    • ASP.NET MVC examples
    • ASP.NET MVC Kendo UI Music Store
    • ASP.NET WebForms examples
    • JSP examples
    • Kendo Mobile Sushi
    • PHP examples
    • Ruby on Rails examples

Help Us Improve Kendo UI Documentation, Samples, Tutorials and Demos

The Kendo UI team would LOVE your help to improve our documentation. We encourage you to contribute in the way that you choose:

Submit a New Issue at GitHub

Open a new issue on the topic if one does not exist already. When creating an issue, please provide a descriptive title, be as specific as possible, and link to the document in question. If you can provide a link to the closest anchor to the issue, that is even better.

Update the Documentation at GitHub

This is the most direct method. Follow the contribution instructions. The basic steps are that you fork our documentation and submit a pull request. That way you can contribute to exactly where you found the error and our technical writing team just needs to approve your change request. Please use only standard Markdown and follow the directions at the link. If you find an issue in the docs, or even feel like creating new content, we are happy to have your contributions!

Forums

You can also go to the Kendo UI Forums and leave feedback. This method will take a bit longer to reach our documentation team, but if you like the accountability of forums and want a fast reply from our amazing support team, leaving feedback in the Kendo UI forums guarantees that your suggestion has a support number and that we will follow up on it. Thank you for contributing to the Kendo UI community!

Microsoft Research Tackles Ecosystem Modelling

Peter Lee, the head of Microsoft Research, shared some highlights of the organization in a recent interview with Scientific American:

Microsoft Research has

  • 1,100 Researchers
  • 13 Laboratories around the world
    • with a 14th opening soon in Brazil

To put it in perspective, Microsoft Research makes up about 1% of the organization.

In keeping with the mission of:

“promoting open publication of all research results
and encouraging deep collaborations with academic researchers.”

 

Microsoft Research crafted the following

Open Access Policy

  • Retention of Rights:
    Microsoft Research retains a license to make our Works available to the research community in our online Microsoft Research open-access repository. 
  • Authorization to enter into publisher agreements
    Microsoft researchers are authorized to enter into standard publication agreements with Publishers on behalf of Microsoft,  subject to the rights retained by Microsoft as per the previous paragraph.
  • Deposit
    Microsoft Research will endeavor to make every Microsoft Research-authored article available to the public in an open-access repository.

The Open Access Policy introduction states:

“Microsoft Research is committed to disseminating the fruits of its research and scholarship as widely as possible because we recognize the benefits that accrue to scholarly enterprises from such wide dissemination, including more thorough review, consideration and critique, and a general increase in scientific, scholarly and critical knowledge.”

 

In addition to adopting this policy, Microsoft Research also:

“…encourage researchers with whom we collaborate, and to whom we provide support, to embrace open access policies, and we will respect the policies enacted by their institutions.”

The MSDN blog closes with perspective on the ongoing changes in the structure of scientific publishing:

We are undoubtedly in the midst of a transition in academic publishing—a transition affecting publishers, institutions, librarians and curators, government agencies, corporations, and certainly researchers—in their roles both as authors and consumers. We know that there remain nuances to be understood and adjustments to be made, but we are excited and optimistic about the impact that open access will have on scientific discovery.

 

The MSDN blog was authored by

  • Jim Pinkelman, Senior Director, Microsoft Research Connections, and
  • Alex Wade, Director for Scholarly Communication, Microsoft Research

 

What if there was a giant computer model that could dramatically enhance our understanding of the environment and lead to policy decisions that better support conservation and biodiversity?

 

A team of researchers at Microsoft Research is building just such a model, and has published an article today in Nature (paid access) arguing for other scientists to get on board and try doing the same. When Drew Purves, head of Microsoft’s Computational Ecology and Environmental Science Group (CEES), and his colleagues at Microsoft Research in Cambridge, United Kingdom, began working with the United Nations Environment Programme World Conservation Monitoring Centre (UNEP-WCMC), they didn’t know they would end up modeling life at global scales.

 

“UNEP-WCMC is an international hub of important conservation activity, and we were pretty open-minded about exactly what we might do together,” says Purves. But they quickly realized that what was really needed was a general ecosystem model (GEM) – something that hasn’t been possible to date because of the vast scale involved. In turn, findings from a GEM could contribute to better informed policy decisions about biodiversity.

But first, a primer on terminology. A GCM (general circulation model) is a mathematical model that mimics the physics and chemistry of the planet’s land, ocean and atmosphere. While scientists use these models to better understand how the earth’s climate systems work, they are also used to make predictions about climate change and to inform public policy. Because these models have been so successful, members of the conservation community are looking for a model that could improve their understanding of biodiversity.

 

Building a GEM is challenging—but not impossible. Microsoft Research and the UNEP-WCMC have spent the past two years developing a prototype GEM for terrestrial and marine ecosystems. The prototype, dubbed the Madingley Model, is built on top of another hugely ambitious project that the group just finished: modeling the global carbon cycle. With this as a starting point, they set out to model all animal life too: herbivores, omnivores, and carnivores, of all sizes, on land and in the sea. The Computational Ecology group was in a unique position to do this, because the group includes actual ecologists (like Purves) doing novel research within Microsoft Research itself. In addition, they’re developing novel software tools for doing this kind of science, which has helped the team as it has come up against all kinds of computational and technical challenges.

 

Nonetheless, the model’s outputs have been broadly consistent with current understandings of ecosystems. One challenge is that while some of the data needed to create an effective GEM has already been collected and is stored away in research institutions, more data is needed. A major new data-gathering program would be expensive, so supporters of GEMs are calling on governments around the world to support programs that manage large-scale collection of ecological and climate data. But if you build it, will they come?

 

Drew Purves knows building a realistic GEM is possible, but he believes the real challenge is constructing a model that will enable policy makers to manage our natural resources better – and that means making sure the predictions are accurate. If such an accurate, trustworthy model can be achieved, one day conservationists will be able to couple data from GEMs and models from other fields to provide a more comprehensive guide to global conservation policy. Finding solutions to climate change and ecosystem preservation is too big of a challenge for any one entity to tackle in isolation.

 

And that’s exactly why we think that computer modeling has potential. It’s another great example of the continually evolving role that technology will play in addressing the environmental challenges facing the planet—and we’re honored to be working hand in hand with the United Nations Environment Programme to begin solving those challenges.

See more at: http://blogs.msdn.com/b/microsoft-green/archive/2013/01/17/microsoft-research-tackles-ecosystem-modeling.aspx#sthash.okxV6we1.dpuf

NEW “Filter My Lists” Web Part now available + FREE Metro UI Master Page when ordering

“Filter My Lists” Web Part

Saves you time with optimal performance

Find what you are looking for with a few clicks, even in cluttered sites and lists with masses of items and documents.

Find exactly what you need and stop wasting your time browsing SharePoint.
Filter the content of multiple lists and libraries in a single step.

Combine search and metadata filters

In a single panel combine item, document and attachment searches with metadata keyword searches and managed metadata filters.

Select multiple filter values from drop-down lists or alternatively use the keyword search of metadata fields with the help of wildcard characters and logical operators.

Export filtered views to Excel

Export filtered views and data to Excel. A print view enables you to print your results in a clear printable format with a single click.

Keep views clear and concise

Provides a complete set of filters without cluttering list views and keeps your list views clear, concise and speedy. Enables you to filter SharePoint using columns which aren’t visible in list views.

Refine filters and save them for future use, whether private, to share with others or to use as default filters.

FREE Metro Style UI Master Page

 


Modern UI Master Page and Styles for SharePoint 2010.

This will give the Metro/Modern UI styling of SharePoint 2013 to your SharePoint 2010 team sites.

Features include:
– Quick launch styling
– Global navigation and drop-down styling
– Search box styling and layout change
– Web part header styling
– Segoe UI font

SharePoint 2013 Basic Search Center Branding Problem

So, I had thought we were in the clear from the old 2010 Search Center branding disaster.

For the most part custom branding applies pretty easily to search sites in SharePoint 2013 thanks to the fact that it just uses the default Seattle.master for search branding.


 

However there is a gotcha, specifically related to the Basic Search Center template. I think the problem is only this one template, but maybe there are other areas affected. I tested the Enterprise Search Center and the default search and neither had issues.

Basically what happens is this: when you are creating your custom branding, chances are you will be applying a customized master page (one that is edited with a mapped drive or SharePoint Designer), and the Basic Search Center uses a snippet of inline code to try to hide the ribbon when the Web Part management panel is up (I have no idea why this was so important, but I digress).

Okay, “so what” you might think… well, code blocks are not permitted to run by default in customized master pages. They will work just fine in a custom master page deployed with a farm solution (according to comments below, a sandbox solution will not fix the problem), but they will fail miserably in a customized master page, like this:


So, how do you fix this problem? The easiest solution is to package your custom master page into a farm solution and apply it to the site; the error should go away immediately. That doesn’t really help if you are still iterating in development or if you are using SharePoint Online (farm solutions are not allowed there).

Another option is to edit the aspx files on the Basic Search Site. From a mapped drive or from SPD you can edit default.aspx and results.aspx, removing this StyleBlock section:



<SharePoint:StyleBlock runat="server">
    <%
    WebPartManager webPartManager = SPWebPartManager.GetCurrentWebPartManager(this.Page);
    if (webPartManager != null && webPartManager.DisplayMode == SPWebPartManager.BrowseDisplayMode)
    {
    %>
    #s4-ribbonrow
    {
        display: none;
    }
    <%
    }
    %>
</SharePoint:StyleBlock>

Note: one gotcha you may run into with this method is that sometimes the search web parts will error on the page when you refresh it. You can fix this by removing the old web parts and re-adding them. I’m not sure why this happens, but it’s a relatively painless fix.

For some of you, editing these search files won’t be an acceptable solution. I’m hopeful someone will create a nice sandbox solution to fix the problem like we had in 2010…

SharePoint Samurai