Category Archives: Architecture

How to : From the Trenches – Use SharePoint to Implement ALM in Your Organization

After successfully creating and implementing an ALM process for Business Connexion on the SharePoint platform, I thought I’d share the lessons I have learned and show you, step by step, how you can implement your own ALM process by leveraging the power of the SharePoint platform.


In this article:

  • An overview of SharePoint Application Lifecycle Management (ALM).
  • How to plan and manage ALM in Microsoft SharePoint 2010 projects by using Microsoft Visual Studio 2010 and Microsoft SharePoint Designer 2010.
  • What to consider when setting up team development environments.
  • How to establish upgrade management processes.
  • How to create a standard SharePoint development model.
  • How to extend your SharePoint ALM to include other departments, such as Java, mobile, .NET, and even SAP development.
Introduction to Application Lifecycle Management in SharePoint 2010

The Microsoft SharePoint 2010 development platform, which includes Microsoft SharePoint Foundation 2010 and Microsoft SharePoint Server 2010, contains many capabilities to help you develop, deploy, and update customizations and custom functionalities for your SharePoint sites. The activities that take advantage of these capabilities all fall under the category of Application Lifecycle Management (ALM).

Key considerations when establishing ALM processes include not only the development and testing practices that you use before the initial deployment of a single customization, but also the processes that you must implement to manage updates and integrate customizations and custom functionality on an existing farm.

This article discusses the capabilities and tools that you can use when implementing an ALM process on a SharePoint farm, and also specific concerns and things to consider when you create and hone your ALM process for SharePoint development.

This article assumes that each development team will develop a unique ALM process that fits its specific size and needs, so its guidance is necessarily broad. However, it also assumes that regardless of the size of your team and the specific nature of your custom solutions, you will need to address similar sets of concerns and use capabilities and tools that are common to all SharePoint developers.

The guidance in this article will help you create a development model that exploits all the advantages of the SharePoint 2010 platform and addresses the needs of your organization.

SharePoint Application Lifecycle Management: An Overview

Although the specific details of your SharePoint 2010 ALM process will differ according to the requirements of your organization, most development teams will follow the same general set of steps. Figure 1 depicts an example ALM process for a midsize or large SharePoint 2010 deployment. Obviously, the process and required tasks depend on the project size.

Figure 1. Example ALM process

The following are the specific steps in the process illustrated in Figure 1 (see corresponding callouts 1 through 10):

  1. Someone (for example, a project manager or lead developer) collects initial requirements and turns them into tasks.
  2. Developers use Microsoft Visual Studio Team Foundation Server 2010 or other tools to track the development progress and store custom source code.
  3. Because source code is stored in a centralized location, you can create automated builds for integration and unit testing purposes. You can also automate testing activities to increase the overall quality of the customizations.
  4. After custom solutions have successfully gone through acceptance testing, your development team can continue to the pre-production or quality assurance environment.
  5. The pre-production environment should resemble the production environment as much as possible. This often means that the pre-production environment has the same patch level and configurations as the production environment. The purpose of this environment is to ensure that your custom solutions will work in production.
  6. Occasionally, copy the production database to the pre-production environment, so that you can imitate the upgrade actions that will be performed in the production environment.
  7. After the customizations are verified in the pre-production environment, they are deployed either directly to production or to a production staging environment and then to production.
  8. After the customizations are deployed to production, they run against the production database.
  9. End users work in the production environment, and give feedback and ideas concerning the different functionalities. Issues and bugs are reported and tracked through established reporting and tracking processes.
  10. Feedback, bugs, and other issues in the production environment are turned into requirements, which are prioritized and turned into developer tasks. Figure 2 shows how multiple developer teams can work with and process bug reports and change requests that are received from end users of the production environment. The model in Figure 2 also shows how development teams might coordinate their solution packages. For example, the framework team and the functionality development team might follow separate versioning models that must be coordinated as they track bugs and changes.
    Figure 2. Change management involving multiple developer teams

Integrating Testing and Build Verification Environments into a SharePoint 2010 ALM Process

In larger projects, quality assurance (QA) personnel might use an additional build verification or user acceptance testing (UAT) farm to test and verify the builds in an environment that more closely resembles the production environment.

Typically, a build verification farm has multiple servers to ensure that custom solutions are deployed correctly. Figure 3 shows a potential model for relating development integration and testing environments, build verification farms, and production environments. In this particular model, the pre-production or QA farm and the production farm switch places after each release. This model minimizes any downtime that is related to maintaining the environments.

Figure 3. Model for relating development integration and testing environments

Integrating SharePoint Designer 2010 into a SharePoint 2010 ALM Process

Another significant consideration in your ALM model is Microsoft SharePoint Designer 2010. SharePoint 2010 is an excellent platform for no-code solutions, which can be created and then deployed directly to the production environment by using SharePoint Designer 2010. These customizations are stored in the content database and are not stored in your source code repository.

General designer activities and how they interact with development activities are another consideration. Will you be creating page layouts directly within your production environment, or will you deploy them as part of your packaged solutions? There are advantages and disadvantages to both options.

Your specific ALM model depends completely on the custom solutions and the customizations that you plan to make, and on your own policies. Your ALM process does not have to be as complex as the one described in this section. However, you must establish a firm ALM model early in the process as you plan and create your development environment and before you start creating your custom solutions.

Next, we discuss specific tools and capabilities that are related to SharePoint 2010 development that you can use when considering how to create a model for SharePoint ALM that will work best for your development team.

Solution Packages and SharePoint Development Tools

One major advantage of the SharePoint 2010 development platform is that it provides the ability to save sites as solution packages. A solution package is a deployable, reusable package stored in a CAB file with a .wsp extension. You can create a solution package by using the SharePoint 2010 user interface (UI) in the browser, SharePoint Designer 2010, or Microsoft Visual Studio 2010. In the browser and SharePoint Designer 2010 UIs, solution packages are also called templates. This flexibility enables you to create and design site structures in a browser or in SharePoint Designer 2010, and then import these customizations into Visual Studio 2010 for further development. Figure 4 shows this process.

Figure 4. Flow through the SharePoint development tools

When the customizations are completed, you can deploy your solution package to SharePoint for use. After modifying the existing site structure by using a browser, you can start the cycle again by saving the updated site as a solution package.
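
If you want to automate the "save the updated site as a solution package" step rather than using the browser UI, a minimal sketch using the server object model might look like the following. It assumes the SPWeb.SaveAsTemplate method available in SharePoint 2010; the URL and template names are illustrative only.

using Microsoft.SharePoint;

class SaveSiteAsSolutionPackage
{
    static void Main()
    {
        // Illustrative URL; replace with your own site collection.
        using (SPSite site = new SPSite("http://intranet"))
        using (SPWeb web = site.OpenWeb())
        {
            // Saves the site as a solution package (.wsp) in the Solution Gallery of
            // the site collection: template name, title, description, and whether to
            // include list content in the package.
            web.SaveAsTemplate("ContosoTeamSite", "Contoso Team Site",
                               "Baseline team site saved for import into Visual Studio 2010", false);
        }
    }
}

From the Solution Gallery, the resulting .wsp file can then be downloaded and imported into Visual Studio 2010 as described later in this article.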

This interaction among the tools also enables you to use other tools. For example, you can design a workflow process in Microsoft Visio 2010 and then import it to SharePoint Designer 2010 and from there to Visual Studio 2010. For instructions on how to design and import a workflow process, see Create, Import, and Export SharePoint Workflows in Visio.

For more information about creating solution packages in SharePoint Designer 2010, see Save a SharePoint Site as a Template. For more information about creating solution packages in Visual Studio 2010, see Creating SharePoint Solution Packages.

Using SharePoint Designer 2010 as a Development Tool

SharePoint Designer 2010 differs from Microsoft Office SharePoint Designer 2007 in that its orientation has shifted from the page to features and functionality. The improved UI provides greater flexibility for creating and designing different functionalities. It provides rich tooling for building complete, reusable, and process-centric applications. For more information about the new capabilities and features of SharePoint Designer 2010, see Getting Started with SharePoint Designer.

You can also use SharePoint Designer 2010 to modify modular components developed with Visual Studio 2010. For example, you can create Web Parts and other controls in Visual Studio 2010, deploy them to a SharePoint farm, and then edit them in SharePoint Designer 2010.

The primary target users for SharePoint Designer 2010 are IT personnel and information workers who can use this application to create customizations in a production environment. For this reason, you must decide on an ALM model for your particular environment that defines which kinds of customizations will follow the complete ALM development process and which customizations can be done by using SharePoint Designer 2010. Developers are secondary target users. They can use SharePoint Designer 2010 as a part of their development activities, especially during initial creation of customization packages and also for rapid development and prototyping. Your ALM process must also define where and how to fit SharePoint Designer 2010 into the broader development model.

A key challenge of using SharePoint Designer 2010 is that when you use it to modify files, all of your changes are stored in the content database instead of in the file system. For example, if you customize a master page for a specific site by using SharePoint Designer 2010 and then design and deploy new branding elements inside a solution package, the changes are not available for the site that has the customized master page, because that site is using the version of the master page that is stored in the content database.
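
To detect and, if appropriate, undo this situation from code, a minimal sketch might look like the following. It assumes the SPFile.CustomizedPageStatus property and RevertContentStream method from the SharePoint 2010 server object model; the site URL and master page path are illustrative, and note that reverting discards the SharePoint Designer customizations.

using Microsoft.SharePoint;

class MasterPageCustomizationCheck
{
    static void Main()
    {
        // Illustrative URLs; replace with your own site and master page path.
        using (SPSite site = new SPSite("http://intranet"))
        using (SPWeb web = site.OpenWeb())
        {
            SPFile masterPage = web.GetFile("_catalogs/masterpage/v4.master");

            // A Customized status means the page is served from the content database,
            // so newer versions deployed to the file system in a solution package are ignored.
            if (masterPage.CustomizedPageStatus == SPCustomizedPageStatus.Customized)
            {
                // Reverting discards the SharePoint Designer customizations and
                // reattaches the page to the file-system version.
                masterPage.RevertContentStream();
            }
        }
    }
}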

To minimize challenges such as these, SharePoint Designer 2010 contains new features that enable you to control usage of SharePoint Designer 2010 in a specific environment. You can apply these control settings at the web application level or site collection level. If you disable some action at the web application level, that setting cannot be changed at the site collection level.

SharePoint Designer 2010 makes the following settings available:

  • Allow site to be opened in SharePoint Designer 2010.
  • Allow customization of files.
  • Allow customization of master pages and layout pages.
  • Allow site collection administrators to see the site URL structure.
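
To apply these control settings from code rather than through Central Administration, you can set the corresponding properties on the web application (or on an SPSite object for site collection scope). The following is a minimal sketch; the property names reflect my reading of the SharePoint 2010 server object model and should be verified against your farm before use.

using System;
using Microsoft.SharePoint.Administration;

class LockDownSharePointDesigner
{
    static void Main()
    {
        // Illustrative URL; replace with your own web application.
        SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://intranet"));

        // Web application-level settings. A setting disabled here cannot be
        // re-enabled at the site collection level.
        webApp.AllowDesigner = true;             // allow sites to be opened in SharePoint Designer 2010
        webApp.AllowRevertFromTemplate = false;  // block detaching (customizing) files from the site definition
        webApp.AllowMasterPageEditing = false;   // block master page and page layout editing
        webApp.ShowURLStructure = false;         // hide the site URL structure
        webApp.Update();
    }
}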

Because the primary purpose of SharePoint Designer 2010 is to customize content on an existing site, it does not support source code control. By default, pages that you customize by using SharePoint Designer 2010 are stored inside a versioned SharePoint library. This provides you with simple support for versioning, but not for full-featured source code control.

Importing Solution Packages into Visual Studio 2010

When you save a site as a solution package in the browser (from the Save as Template page in Site Settings), SharePoint 2010 stores the site as a solution package (.wsp) file and places it in the Solution Gallery of that site collection. You can then download the solution package from the Solution Gallery and import it into Visual Studio 2010 by using the Import SharePoint Solution Package template, as shown in Figure 5.

Figure 5. Import SharePoint Solution Package template

SharePoint 2010 solution packages contain many improvements that take advantage of new capabilities that are available in its feature framework. The following list contains some of the new feature elements that can help you manage your development projects and upgrades.

  • SourceVersion for WebFeature and SiteFeature
  • WebTemplate feature element
  • PropertyBag feature element
  • $ListId:Lists
  • WorkflowAssociation feature element
  • CustomSchema attribute on ListInstance
  • Solution dependencies

After you import your project, you can start customizing it any way you like.

Note:
Because this capability is based on the WebTemplate feature element, which is based on a corresponding site definition, the resulting solution package will contain definitions for everything within the site. For more information about creating and using web templates, see Web Templates.

Visual Studio 2010 supports source code control (as shown in Figure 6), so that you can store the source code for your customizations in a safer and more secure central location, and enable easy sharing of customizations among developers.

Figure 6. Visual Studio 2010 source code control

The specific way in which your developers access this source code and interact with each other depends on the structure of your team development environment. The next section of this article discusses key concerns and considerations to keep in mind when you build a team development environment for SharePoint 2010.

Team Development Environment for SharePoint 2010: An Overview

As in any ALM planning process, your SharePoint 2010 planning should include the following steps:

  1. Identify and create a process for initiating projects.
  2. Identify and implement a versioning system for your source code and other deployed resources.
  3. Plan and implement version control policies.
  4. Identify and create a process for work item and defect tracking and reporting.
  5. Write documentation for your requirements and plans.
  6. Identify and create a process for automated builds and continuous integration.
  7. Standardize your development model for repeatability.

Microsoft Visual Studio Team Foundation Server 2010 (shown in Figure 7) provides a good potential platform for many of these elements of your ALM model.

Figure 7. Visual Studio 2010 Team Foundation Server

When you have established your model for team development, you must choose either a collection of tools or Microsoft Visual Studio Team Foundation Server 2010 to manage your development. Microsoft Visual Studio Team Foundation Server 2010 provides direct integration into Visual Studio 2010, and it can be used to manage your development process efficiently. It provides many capabilities, but how you use it will depend on your projects.

You can use the Microsoft Visual Studio Team Foundation Server 2010 for the following activities:

  • Tracking work items and reporting the progress of your development. Microsoft Visual Studio Team Foundation Server 2010 provides tools to create and modify work items that are delivered not only from Visual Studio 2010, but also from the Visual Studio 2010 web client.
  • Storing all source code for your custom solutions.
  • Logging bugs and defects.
  • Creating, executing, and managing your testing with comprehensive testing capabilities.
  • Enabling continuous integration of your code by using the automated build capabilities.

Microsoft Visual Studio Team Foundation Server 2010 also provides a basic installation option that installs all required functionalities for source control and automated builds. These are typically the most used capabilities of Microsoft Visual Studio Team Foundation Server 2010, and this option helps you set up your development environment more easily.

Setting Up a Team Development Environment for SharePoint 2010

SharePoint 2010 must be installed on a development computer to take full advantage of its development capabilities. If you are developing only remote applications, such as solutions that use SharePoint web services, the client object model, or REST, you could potentially develop solutions on a computer where SharePoint 2010 is not installed. However, even in this case, your developers’ productivity would suffer, because they would not be able to take advantage of the full debugging experience that comes with having SharePoint 2010 installed directly on the development computer.

The design of your development environment depends on the size and needs of your development team. Your choice of operating system also has a significant impact on the overall design of your team development process. You have three main options for creating your development environments, as follows:

  1. You can run SharePoint 2010 directly on your computer’s client operating system. This option is available only when you use the 64-bit version of Windows 7, Windows Vista Service Pack 1, or Windows Vista Service Pack 2.
  2. You can use the boot to Virtual Hard Drive (VHD) option, which means that you start your laptop by using the operating system in VHD. This option is available only when you use Windows 7 as your primary operating system.
  3. You can use virtualization capabilities. If you choose this option, you have a choice of many options. But from an operational viewpoint, the option that is most likely the easiest to implement is a centralized virtualized environment that hosts each developer’s individual development environment.

The following sections take a closer look at these three options.

SharePoint 2010 on a Client Operating System

If you are using the 64-bit version of Windows 7, Windows Vista Service Pack 1, or Windows Vista Service Pack 2, you can install SharePoint Foundation 2010 or SharePoint Server 2010. For more information about installing SharePoint 2010 on supported operating systems, see Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008.

Figure 8 shows how a computer that is running a client operating system would operate within a team development environment.

Figure 8. Computer running a client operating system in a team development environment

A benefit of this approach is that you can take full advantage of any of your existing hardware that is running one of the targeted client operating systems. You can also take advantage of pre-existing configurations, domains, and enterprise resources that your enterprise supports. This could mean that you would require little or no additional IT support. Your developers would also face no delays (such as booting up a virtual machine or connecting to an environment remotely) in accessing their development environments.

However, if you take this approach, you must ensure that your developers have access to sufficient hardware resources. In any development environment, you should use a computer that has an x64-capable CPU, and at least 2 gigabytes (GB) of RAM to install and run SharePoint Foundation 2010; 4 GB of RAM is preferable for good performance. You should use a computer that has 6 GB to 8 GB of RAM to install and run SharePoint Server 2010.

A disadvantage of this approach is that your environments will not be centrally managed, and it will be difficult to keep all of your project-dependent environmental requirements in sync. It might also be advisable to write batch files that start and stop some of the SharePoint-related services so that when your developers are not working with SharePoint 2010, these services will not consume resources and degrade the performance of their computers.
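
The article suggests batch files for this; as one possible alternative built on the same idea, the following C# sketch uses ServiceController to stop or start a set of SharePoint-related Windows services. This is not a prescribed approach, and the service names are assumptions for a SharePoint Foundation 2010 installation that you should verify on your own computer (for example, with "sc query").

using System;
using System.ServiceProcess;

class ToggleSharePointServices
{
    // Assumed Windows service names on a SharePoint Foundation 2010 development
    // computer; verify them before relying on this list.
    static readonly string[] ServiceNames = { "SPTimerV4", "SPTraceV4", "SPAdminV4", "SPUserCodeV4" };

    static void Main(string[] args)
    {
        bool start = args.Length > 0 && args[0] == "start";

        foreach (string name in ServiceNames)
        {
            using (ServiceController controller = new ServiceController(name))
            {
                if (start && controller.Status == ServiceControllerStatus.Stopped)
                {
                    controller.Start();   // bring SharePoint services back for development work
                }
                else if (!start && controller.Status == ServiceControllerStatus.Running)
                {
                    controller.Stop();    // free CPU and memory when SharePoint is not in use
                }
            }
        }
    }
}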

The lack of centralized maintenance could hurt developer productivity in other ways. For example, this might be an unwieldy approach if your team is working on a large Microsoft SharePoint Online project that is developing custom solutions for multiple services (for example, the equivalents of http://intranet, http://mysite, http://teams, http://secure, http://search, http://partners, and http://www.internet.com) and deploying these solutions in multiple countries or regions.

If you are developing on a computer that is running a client operating system in a corporate domain, each development computer would have its own name (and each local domain name would be different, such as http://dev1 or http://dev2). If each developer is implementing custom functionalities for multiple services, you must use different port numbers to differentiate each service (for example, http://dev1 for http://intranet and http://dev1:81 for http://mysite). If all of your developers are using the same Visual Studio 2010 projects, the project debugging URL must be changed manually whenever a developer takes the latest version of a project from your source code repository.

This would create a manual step that could hurt developer productivity, and it would also diminish the efficiency of any scripts that you have written for setting up development environments, because the individual environments are not standardized. Some form of centralization with virtualization is preferable for large enterprise development projects.

SharePoint 2010 on Windows 7 and Booting to Virtual Hard Drive

If you are using Windows 7, you can also create a VHD out of an existing Windows Server 2008 image on which SharePoint 2010 is installed in Windows Hyper-V, and then configure Windows 7 with BCDEdit.exe so that it boots directly to the operating system on the VHD. To learn more about this kind of configuration, see Deploy Windows on a Virtual Hard Disk with Native Boot and Boot from VHD in Win 7.

Figure 9 shows how a computer that is running Windows 7 and booting to VHD would operate within a team development environment.

Figure 9. Windows 7 and booting to VHD in a team environment

An advantage of this approach is the flexibility of having multiple dedicated environments for an individual project, enabling you to isolate each development environment. Your developers will not accidentally cross-reference any artifacts within their projects, and they can create project-dependent environments.

However, this option has considerable hardware requirements, because you are using the available hardware and resources directly on your computers.

SharePoint 2010 in Centralized Virtualized Environments

In a centralized virtualized environment, you host your development environments in one centralized location, and developers access these environments through remote connections. This means that you use Windows Hyper-V in the centralized location and copy a VHD for every developer as needed. Each VHD is configured to be available from the corporate network, so that when it starts, it can be accessed by using remote connections.

Figure 10 shows how a centralized virtualized team development environment would operate.

Figure 10. Centralized virtualized team development environment

An advantage of this approach is that the hardware requirements for individual developer computers are relatively low, because the actual work happens in a centralized environment. Developers could even use computers with 1 GB of RAM as their clients and then connect remotely to the centralized location. You can also manage environments easily from one centralized location, making adjustments to them whenever necessary.

Your centralized host will have significantly higher hardware requirements, but developers can easily start and stop these environments. This enables you to use the hardware that you have allocated for your development environments more efficiently. Additionally, this approach provides a ready platform for more extensive testing environments for your custom code (such as multi-server farms).

After you set up your team development environment, you can start taking advantage of the deployment and upgrade capabilities that are included with the new solution packaging model in SharePoint 2010. The following sections describe how to take advantage of these new capabilities in your ALM model.

Models for Solution Lifecycle Management in SharePoint 2010

The SharePoint 2010 solution packaging model provides many useful features that will help you plan for deploying custom solutions and managing the upgrade process. You can implement assembly versioning by applying binding redirects in your web application configuration file. You can also apply versioning to your feature upgrades, and feature upgrade actions enable you to manage changes that will be necessary on your existing sites to accommodate feature upgrades. These upgrade actions can be handled declaratively or programmatically.

The feature upgrade query object model enables you to create queries in your code that look for features on your existing sites that can be upgraded. You can use this object model to obtain relevant information about all of the features and feature versions that are deployed on your SharePoint 2010 sites. In your solution manifest file, you can also configure the type of Internet Information Services (IIS) recycling to perform during a solution upgrade.

The following sections go into greater details about these capabilities and how you can use them.

Using Assembly BindingRedirect with SharePoint 2010 Assemblies

The BindingRedirect feature element can be added to your web application configuration file. It enables you to redirect from earlier versions of installed assemblies to newer versions. Figure 11 shows how the XML configuration from the solution manifest file instructs SharePoint to add binding redirection rules to the web application configuration file. These rules forward any reference to version 1.0 of the assembly to version 2.0. This is required in your solution manifest file if you are upgrading a custom solution that uses assembly versioning and if there are existing instances of the solution and the assembly on your sites.

Figure 11. Binding redirection rules in a solution manifest file

It is a best practice to use assembly versioning, because it gives you an easy way to track the versions of a solution that are deployed to your production environments.

SharePoint 2010 Feature Versioning

The support for feature versioning in SharePoint 2010 provides many capabilities that you can use when you are upgrading features. For example, you can use the SPFeature.Version property to determine which versions of a feature are deployed on your farm, and therefore which features must be upgraded. For a code sample that demonstrates how to do this, see Version.
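
As a rough illustration of this idea (not the linked code sample), the following sketch compares each activated feature instance's version with the version of its installed definition. It assumes the SPFeature.Version and SPFeatureDefinition.Version properties behave as described above; the site URL is illustrative.

using System;
using Microsoft.SharePoint;

class ListFeaturesNeedingUpgrade
{
    static void Main()
    {
        // Illustrative URL; replace with your own site.
        using (SPSite site = new SPSite("http://intranet"))
        using (SPWeb web = site.OpenWeb())
        {
            foreach (SPFeature feature in web.Features)
            {
                // SPFeature.Version is the version of the activated instance;
                // the definition carries the version of the installed feature files.
                Version instanceVersion = feature.Version;
                Version definitionVersion = feature.Definition.Version;

                if (instanceVersion < definitionVersion)
                {
                    Console.WriteLine("{0}: instance {1}, definition {2} - upgrade needed.",
                                      feature.Definition.DisplayName, instanceVersion, definitionVersion);
                }
            }
        }
    }
}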

Feature versioning in SharePoint 2010 also enables you to define a value for the SPFeatureDependency.MinimumVersion property to handle feature dependencies. For example, you can use the MinimumVersion property to ensure that a particular version of a dependent feature is activated. Feature dependencies can be added or removed in each new version of a feature.

The SharePoint 2010 feature framework also enhances the object model to support feature versioning more easily. You can use the QueryFeatures method to retrieve a list of features, and you can specify both the feature version and whether a feature requires an upgrade. The QueryFeatures method returns an instance of SPFeatureQueryResultCollection, which you can use to access all of the features that must be updated. The method is available from multiple scopes, because it is exposed by the SPWebService, SPWebApplication, SPContentDatabase, and SPSite classes. For more information about these overloads and an overview of the feature upgrade object model, see Feature Upgrade Object Model.
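
The following is a minimal sketch of one of these overloads, assuming the SPWebApplication.QueryFeatures(SPFeatureScope, bool) overload described above; the URL is illustrative.

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

class QueryFeaturesForUpgrade
{
    static void Main()
    {
        // Illustrative URL; replace with your own web application.
        SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://intranet"));

        // Ask the web application for all site collection-scoped features that need an upgrade.
        SPFeatureQueryResultCollection results = webApp.QueryFeatures(SPFeatureScope.Site, true);

        foreach (SPFeature feature in results)
        {
            Console.WriteLine("{0} ({1}) is at version {2} and needs an upgrade.",
                              feature.Definition.DisplayName, feature.DefinitionId, feature.Version);
        }
    }
}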

The following section summarizes many of the new upgrade actions that you can apply when you are upgrading from one version of a feature to another.

SharePoint 2010 Feature Upgrade Actions

Upgrade actions are defined in the Feature.xml file. The SPFeatureReceiver class contains a FeatureUpgrading method, which you can use to define actions to perform during an upgrade. This method is called during feature upgrade when the feature’s Feature.xml file contains one or more <CustomUpgradeAction> tags, as shown in the following example.

<UpgradeActions>
  <CustomUpgradeAction Name="text">
    ...
  </CustomUpgradeAction>
</UpgradeActions>

Each custom upgrade action has a name, which can be used to differentiate the code that must be executed in the feature receiver. As shown in the following example, you can parameterize custom upgrade action instances.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion ="2.0.0.0">
      <!-- First action-->
      <CustomUpgradeAction Name="example">
        <Parameters>
          <Parameter Name="parameter1">Whatever</Parameter>
          <Parameter Name="anotherparameter">Something meaningful</Parameter>
          <Parameter Name="thirdparameter">additional configurations</Parameter>
        </Parameters>
      </CustomUpgradeAction>
      <!-- Second action-->
      <CustomUpgradeAction Name="SecondAction">
        <Parameters>
          <Parameter Name="SomeParameter1">Value</Parameter>
          <Parameter Name="SomeParameter2">Value2</Parameter>
          <Parameter Name="SomeParameter3">Value3</Parameter>
        </Parameters>
      </CustomUpgradeAction>
    </VersionRange>
  </UpgradeActions>
</Feature>

This example contains two CustomUpgradeAction elements, one named example and the other named SecondAction. Both elements have different parameters, which are dependent on the code that you wrote for the FeatureUpgrading event receiver. The following example shows how you can use these upgrade actions and their parameters in your code.

/// <summary>
/// Called when a feature instance is upgraded, once for each custom upgrade action in the Feature.xml file.
/// </summary>
/// <param name="properties">Feature receiver properties</param>
/// <param name="upgradeActionName">Upgrade action name</param>
/// <param name="parameters">Custom upgrade action parameters</param>
public override void FeatureUpgrading(SPFeatureReceiverProperties properties,
                                      string upgradeActionName,
                                      System.Collections.Generic.IDictionary<string, string> parameters)
{
    // Do not do anything if the feature scope is not correct.
    if (!(properties.Feature.Parent is SPWeb))
    {
        // Log that the feature scope is incorrect.
        return;
    }

    switch (upgradeActionName)
    {
        case "example":
            FeatureUpgradeManager.UpgradeAction1(parameters["parameter1"], parameters["anotherparameter"],
                                                 parameters["thirdparameter"]);
            break;
        case "SecondAction":
            FeatureUpgradeManager.UpgradeAction1(parameters["SomeParameter1"], parameters["SomeParameter2"],
                                                 parameters["SomeParameter3"]);
            break;
        default:
            // Log that no code exists for this action.
            break;
    }
}

You can have as many upgrade actions as you want, and you can apply them to version ranges. The following example shows how you can apply upgrade actions to version ranges of a feature.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange BeginVersion="1.0.0.0" EndVersion ="2.0.0.0">
      ...
    </VersionRange>
    <VersionRange BeginVersion="2.0.0.1" EndVersion="3.0.0.0">
      ...
    </VersionRange>
    <VersionRange BeginVersion="3.0.0.1" EndVersion="4.0.0.0">
      ...
    </VersionRange>
  </UpgradeActions>
</Feature>

The AddContentTypeField upgrade action can be used to define additional fields for an existing content type. It also provides the option of pushing these changes down to child instances, which is often the preferred behavior. When you initially deploy a content type to a site collection, a definition for it is created at the site collection level. If that content type is used in any subsite or list, a child instance of the content type is created. To ensure that every instance of the specific content type is updated, you must set the PushDown attribute to true, as shown in the following example.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion ="2.0.0.0">
      <AddContentTypeField ContentTypeId="0x0101002b0e208ace0a4b7e83e706b19f32cab9"
                           FieldId="{ccbcd479-94c9-4f3a-95c4-58897da434fe}"
                           PushDown="True"/>
    </VersionRange>
  </UpgradeActions>
</Feature>

For more information about working with content types programmatically, see Introduction to Content Types.

The ApplyElementManifests upgrade action can be used to apply new artifacts to a SharePoint 2010 site without reactivating features. Just as you can add new elements to any new SharePoint elements.xml file, you can instruct SharePoint to apply content from a specific elements file to sites where a given feature is activated.

You can use this upgrade action if you are upgrading an existing feature whose FeatureActivating event receiver performs actions that you do not want to execute again on sites where the feature is deployed. The following example demonstrates how to include this upgrade action in a Feature.xml file.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion ="2.0.0.0">
      <ApplyElementManifests>
        <ElementManifest Location="AdditionalV2Fields\Elements.xml"/>
      </ApplyElementManifests>
    </VersionRange>
  </UpgradeActions>
</Feature>

An example of a use case for this upgrade action involves adding new .webpart files to a feature in a site collection. You can use the ApplyElementManifests upgrade action to add those files without reactivating the feature. Another example involves page layouts that contain initial Web Part instances defined in the file element structure of the feature element file. If you reactivate this feature, you will get duplicates of these Web Parts on each of the page layouts. In this case, you can use the ElementManifest element of the ApplyElementManifests upgrade action to add new page layouts to a site collection that uses the feature without reactivating the feature.

The MapFile element enables you to map a URL request to an alternative URL. The following example demonstrates how to include this upgrade action in a Feature.xml file.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <MapFile FromPath="Features\MapPathDemo_MapPathDemo\PageDeployment\MyExamplePage.aspx"
             ToPath="Features\MapPathDemo_MapPathDemo\PageDeployment\MyExamplePage2.aspx" />
  </UpgradeActions>
</Feature>

Mapping URLs in this way would be useful to you in a case where you have to deploy a new version of a page that was customized by using SharePoint Designer 2010. The resulting customized page would be served from the content database. When you deploy the new version of the page, the new version will not appear because content for that page is coming from the database and not from the file system. You could work around this problem by using the MapFile element to redirect requests for the old version of the page to the newer version.

It is important to understand that the FeatureUpgrading method is called for each feature instance that will be updated. If you have 10 sites in your site collection and you update a web-scoped feature, the feature receiver will be called 10 times, once in the context of each site. For more information about how to use these new declarative feature elements, see Feature.xml Changes.

Upgrading SharePoint 2010 Features: A High-Level Walkthrough

This section describes at a high level how you can put these feature-versioning and upgrading capabilities to work. When you create a new version of a feature that is already deployed on a large SharePoint 2010 farm, you must consider two different scenarios: what happens when the feature is activated on a new site and what happens on sites where the feature already exists. When you add new content to the feature, you must first update all of the existing definitions and include instructions for upgrading the feature where it is already deployed.

For example, perhaps you have developed a content type to which you must add a custom site column named City. You do this in the following way:

  1. Add a new element file to the feature. This element file defines the new site column and modifies the Feature.xml file to include the element file.
  2. Update the existing definition of the content type in the existing feature element file. This update will apply to all sites where the feature is newly deployed and activated.
  3. Define the required upgrade actions for the existing sites. In this case, you must ensure that the newly added element file for the additional site column is deployed and that the new site column is associated with the existing content types. To achieve these two objectives, you add the ApplyElementManifests and the AddContentTypeField upgrade actions to your Feature.xml file.

When you deploy the new version of the feature to existing sites and upgrade it, the upgrade actions are applied to sites one by one. If you have defined custom upgrade actions, the FeatureUpgrading method will be called as many times as there are instances of the feature activated in your site collection or farm.

Figure 12 shows how the different components of this scenario work together when you perform the upgrade.

Figure 12. Components of a feature upgrade that adds a new element to an existing feature

Different sites might have different versions of a feature deployed on them. In this case, you can create version ranges, which define specific actions to perform when you are upgrading from one version to another. If a version range is not defined, all upgrade actions will be applied during each upgrade.

Figure 13 shows how different upgrade actions can be applied to version ranges.

Figure 13. Applying different upgrade actions to version ranges.

In this example, if a given site is upgrading directly from version 1.0 to version 3.0, all configurations will be applied because you have defined specific actions for upgrading from version 1.0 to version 2.0 and from 2.0 to version 3.0. You have also defined actions that will be applied regardless of feature version.

Code Design Guidelines for Upgrading SharePoint 2010 Features

To provide more flexibility for your code, you should not place your upgrade code directly inside the FeatureUpgrading event receiver. Instead, put the code in some centralized location and refer to it inside the event receiver, as shown in Figure 14.

Figure 14. Centralized feature upgrade manager

By placing your upgrade code inside a centralized utility class, you increase both the reusability and the testability of your code, because you can perform the same actions in multiple locations. You should also try to design your custom upgrade actions as generically as possible, using parameters to make them applicable to specific upgrade scenarios.
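
For illustration, the FeatureUpgradeManager class referenced by the earlier FeatureUpgrading example is not a SharePoint type; it is simply the kind of centralized utility class that Figure 14 describes. A minimal sketch of such a class might look like the following, with the method name and parameters mirroring the earlier example.

/// <summary>
/// Centralized home for feature upgrade logic, so the same actions can be called
/// from FeatureUpgrading event receivers, unit tests, or other tooling.
/// </summary>
public static class FeatureUpgradeManager
{
    public static void UpgradeAction1(string parameter1, string anotherParameter, string thirdParameter)
    {
        // Perform the actual upgrade work here, for example adding fields,
        // updating list instances, or fixing up existing content.
        // Keep the logic generic and driven by the parameters so the same
        // action can be reused across upgrade scenarios.
    }
}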

Solution Lifecycles: Upgrading SharePoint 2010 Solutions

If you are upgrading a farm (full-trust) solution, you must first deploy the new version of your solution package to a farm.

Run either of the following commands from a command prompt to deploy updates to a SharePoint farm. The first example uses the Stsadm.exe command-line tool.

stsadm -o upgradesolution -name solution.wsp -filename solution.wsp

The second example uses the Update-SPSolution Windows PowerShell cmdlet.

Update-SPSolution -Identity contoso_solution.wsp -LiteralPath c:\contoso_solution_v2.wsp -GACDeployment

After the new version is deployed, you can perform the actual upgrade, which executes the upgrade actions that you defined in your Feature.xml files.

A farm solution upgrade can be performed either farm-wide or at a more granular level by using the object model. A farm-wide upgrade is performed by using the Psconfig command-line tool, as shown in the following example.

psconfig -cmd upgrade -inplace b2b
Note:
This tool causes a service break on the existing sites. During the upgrade, all feature instances throughout the farm for which newer versions are available will be upgraded.

You can also perform upgrades for individual features at the site level by using the Upgrade method of the SPFeature class. This method causes no service break on your farm, but you are responsible for managing the version upgrade from your code. For a code example that demonstrates how to use this method, see SPFeature.Upgrade.
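
The following is a minimal sketch of such a site-level upgrade, assuming the SPFeature.Upgrade(bool) method described above; the feature ID and URL are placeholders.

using System;
using Microsoft.SharePoint;

class UpgradeSingleFeature
{
    static void Main()
    {
        // Placeholder values; replace with your own site URL and feature ID.
        Guid featureId = new Guid("00000000-0000-0000-0000-000000000000");

        using (SPSite site = new SPSite("http://intranet"))
        using (SPWeb web = site.OpenWeb())
        {
            SPFeature feature = web.Features[featureId];
            if (feature != null)
            {
                // Runs the upgrade actions defined in the feature's Feature.xml,
                // including any FeatureUpgrading receiver code, for this web only.
                feature.Upgrade(false);
            }
        }
    }
}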

Upgrading a sandboxed solution at the site collection level is much more straightforward. Just upload the SharePoint solution package (.wsp file) that contains the upgraded features. If you have a previous version of a sandboxed solution in your solution gallery and you upload a newer version, an Upgrade option appears in the UI, as shown in Figure 15.

Figure 15. Upgrading a sandboxed solution

After you select the Upgrade option and the upgrade starts, all features in the sandboxed solution are upgraded.

Conclusion

This article has discussed some considerations and examples of Application Lifecycle Management (ALM) design that are specific to SharePoint 2010, and it has also enumerated and described the most important capabilities and tools that you can integrate into the ALM processes that you choose to establish in your enterprise. The SharePoint 2010 feature framework and solution packaging model provide flexibility and power that you can put to work in your ALM processes.

How To : Use the Modeling SDK to Create UML Diagrams

Use Case Diagrams

A use case diagram is a summary of who uses your application and what they can do with it. It
describes the relationships among requirements, users, and the major components of the system, and
provides an overall view of how the system is used.

Activity Diagrams
Use case diagrams can be broken down into activity diagrams. An activity diagram shows the software
process as the flow of work through a series of actions. It can be a useful exercise to draw an
activity diagram showing the major tasks that a user will perform with the software application.

 

Sequence Diagrams

 

Sequence diagrams display interactions between different objects. This interaction usually takes
place as a series of messages between the different objects. Sequence diagrams can be considered an
alternate view to the activity diagram. A sequence diagram can show a clear view of the steps in a
use case. Figure 14-3 shows an example of a sequence diagram.
Component Diagrams

 

Component diagrams help visualize the high-level structure of the software system. They show the
major parts of a system and how those parts interact and depend on each other. One nice feature of
component diagrams is that they show how the different parts of the design interact with each other,
regardless of how those individual parts are actually implemented. Figure 14-4 shows an example of
a component diagram.

 

Class Diagrams

 

Class diagrams describe the objects in the application system. They do this without referencing any
particular implementation of the system itself. This type of UML modeling diagram is also referred
to as a conceptual class diagram. Figure 14-5 shows an example of a class diagram.

How to: Export UML Diagrams to Image Files

You can export a UML document from Visual Studio to an image file under program control. For example, you might want to do this as part of automatic document generation.

If you want to export a document to an image manually, you can copy and paste the shapes from a diagram into other programs such as Word. You can also print documents to XPS format. For more information, see Export Images of Diagrams.

The following code defines a shortcut menu command, also known as a context menu command, that saves an image to a file.

Note:

To make this code work as a menu command, you must incorporate it into a MEF component. For more information, see How to: Define a Menu Command on a Modeling Diagram.

The code first uses GetObject<T> to get the Diagram of the underlying implementation. This type has a method CreateBitmap.

namespace SaveToImage
{
  using System.ComponentModel.Composition; // for [Import], [Export]
  using System.Drawing; // for Bitmap
  using System.Drawing.Imaging; // for ImageFormat
  using System.Linq; // for collection extensions
  using System.Windows.Forms; // for SaveFileDialog
  using Microsoft.VisualStudio.Modeling.Diagrams;
    // for Diagram
  using Microsoft.VisualStudio.Modeling.ExtensionEnablement;
    // for IGestureExtension, ICommandExtension, ILinkedUndoContext
  using Microsoft.VisualStudio.ArchitectureTools.Extensibility.Presentation;
    // for IDiagramContext
  using Microsoft.VisualStudio.ArchitectureTools.Extensibility.Uml;
    // for designer extension attributes


  /// <summary>
  /// Called when the user clicks the menu item.
  /// </summary>
  // Context menu command applicable to any UML diagram 
  [Export(typeof(ICommandExtension))]
  [ClassDesignerExtension]
  [UseCaseDesignerExtension]
  [SequenceDesignerExtension]
  [ComponentDesignerExtension]
  [ActivityDesignerExtension]
  class CommandExtension : ICommandExtension
  {
    [Import]
    IDiagramContext Context { get; set; }

    public void Execute(IMenuCommand command)
    {
      // Get the diagram of the underlying implementation.
      Diagram dslDiagram = Context.CurrentDiagram.GetObject<Diagram>();
      if (dslDiagram != null)
      {
        string imageFileName = FileNameFromUser();
        if (!string.IsNullOrEmpty(imageFileName))
        {
          Bitmap bitmap = dslDiagram.CreateBitmap(
           dslDiagram.NestedChildShapes,
           Diagram.CreateBitmapPreference.FavorClarityOverSmallSize);
          bitmap.Save(imageFileName, GetImageType(imageFileName));
        }
      }
    }

    /// <summary>
    /// Called when the user right-clicks the diagram.
    /// Set Enabled and Visible to specify the menu item status.
    /// </summary>
    /// <param name="command"></param>
    public void QueryStatus(IMenuCommand command)
    {
      command.Enabled = Context.CurrentDiagram != null 
        && Context.CurrentDiagram.ChildShapes.Count() > 0;
    }

    /// <summary>
    /// Menu text.
    /// </summary>
    public string Text
    {
      get { return "Save To Image..."; }
    }


    /// <summary>
    /// Ask the user for the path of an image file.
    /// </summary>
    /// <returns>Image file path, or null.</returns>
    private string FileNameFromUser()
    {
      SaveFileDialog dialog = new SaveFileDialog();
      dialog.AddExtension = true;
      dialog.DefaultExt = "image.bmp";
      dialog.Filter = "Bitmap ( *.bmp )|*.bmp|JPEG File ( *.jpg )|*.jpg|Enhanced Metafile (*.emf )|*.emf|Portable Network Graphic ( *.png )|*.png";
      dialog.FilterIndex = 1;
      dialog.Title = "Save Diagram to Image";
      return dialog.ShowDialog() == DialogResult.OK ? dialog.FileName : null;
    }

    /// <summary>
    /// Return the appropriate image type for a file extension.
    /// </summary>
    /// <param name="fileName"></param>
    /// <returns></returns>
    private ImageFormat GetImageType(string fileName)
    {
      string extension = System.IO.Path.GetExtension(fileName).ToLowerInvariant();
      ImageFormat result = ImageFormat.Bmp;
      switch (extension)
      {
        case ".jpg":
          result = ImageFormat.Jpeg;
          break;
        case ".emf":
          result = ImageFormat.Emf;
          break;
        case ".png":
          result = ImageFormat.Png;
          break;
      }
      return result;
    }
  }
}

How To : Design the Physical Architecture to Support Collaborative Development and ALM of SharePoint Foundation 2010 Application

Introduction

This article explains the physical architecture that best supports collaborative development and ALM of a SharePoint Foundation 2010 application, which servers and tools are needed, and how they play key roles in the ALM of SharePoint Foundation 2010. The purpose of this article is to provide an overall understanding of the various servers and farms connected to each other in SharePoint Foundation.

Background

A basic understanding of the different server operating systems and SharePoint Foundation 2010 is required.

Solution

Application Life-cycle Management (ALM) is the co-ordination of development life-cycle activities—including requirements, modeling, development, build, and testing. Recently, ALM has expanded beyond the application and the software development life cycle to also include business solution governance, infrastructure management, operations, and support.

You can use ALM to help align your organization in the context of a software solution in business, development, and operations. With an application development platform that supports ALM, you can provide integration between the various tools used and activities performed within each of these capabilities.

There are four main types of staging servers, along with a standalone developer environment, that play a key role in the ALM of a SharePoint 2010 application:

  1. Development SharePoint Farm
  2. Team Foundation Server
  3. Integration/Testing Farm
  4. Production Farm

These are complemented by each developer’s standalone workstation.

The figure below shows a physical architecture that depicts how each server is interconnected to support collaborative development and ALM for a SharePoint Foundation 2010 application:

(Figure: physical architecture of the servers supporting collaborative development and ALM for SharePoint Foundation 2010)

Development SharePoint Farm

A SharePoint farm is fundamentally a collection of SharePoint role servers that provide for the base infrastructure required to house SharePoint sites. The farm level is the highest level of SharePoint architecture, providing a distinct operational boundary for a SharePoint environment. Each farm in an environment is a self-encompassing unit made up of one or more servers, such as web servers, service application servers, and SharePoint database servers.

A SharePoint development farm is needed because developers in an organization that makes heavy use of SharePoint often need environments to test new applications, web parts, solutions, and other SharePoint customizations. These developers often need a sandbox area where farm-level features and solutions can be tested.

I have considered a two-tier topology for the SharePoint Foundation 2010 farm. However, the topology will be based entirely on the needs of your application. If your application is a relatively small intranet application, you can choose a single-tier topology; if you are going to integrate another search server with Foundation, you can choose a three-tier topology with an application server as the middle tier (remember that SharePoint Foundation 2010 doesn’t include enterprise search). It may make sense to deploy one or more development farms so that developers have the opportunity to run their tests and develop software for SharePoint independent of the existing production environment.

There are basically two types of servers included in a two-tier development farm for SharePoint Foundation 2010:

  1. Web server
  2. Content database server

In the figure above, there are three front-end web servers and one SharePoint content database server. However, you can choose a single front-end web server connected to the content database server, based on your application needs and the architecture of the production environment. All web servers share the same content database. This is called a two-tier deployment farm, where the SharePoint server components and the content database are installed on separate servers. As I mentioned before, you can choose a one-tier, two-tier, or three-tier deployment topology based on your application architecture and the topology of the production environment.

Each web server has SharePoint Foundation 2010 and the SharePoint extension for TFS 2010 installed on it. The SharePoint extension for TFS 2010 is needed to connect with Team Foundation Server for source control, build management, and project management.

Advantages of a Development SharePoint Farm:

  1. A single place where the SharePoint admin can integrate all the final artifacts from multiple developers.
  2. Developers can sync the latest SharePoint site to their standalone developer workstations.
  3. The admin can easily approve artifacts and migrate them to the integration server.
  4. It is a unit-testing environment for developers, where they can test dependent functionality or farm-level features.

Team Foundation Server

Team Foundation Server plays a key role in ALM by providing source control, build management, and work item tracking. You can install TFS on the same server that hosts the content database, but if you are going to use TFS build management, it is advisable to have a separate Team Foundation Server, because build processing is CPU intensive.

As shown in the figure above, a separate Team Foundation Server is connected to the SharePoint farm as well as to the standalone development workstations, so that it can provide source control for customized content as well as for developers’ artifacts and resources.

Advantages of TFS
  1. Source control for SharePoint artifacts and customizations
  2. Build management for SharePoint
  3. Work item and bug tracking for SharePoint
  4. Admin console for all management activities
  5. Easy integration with SharePoint Foundation server and VS 2010
  6. Easy check-in and check-out
  7. Web-based console to manage ALM activities

Developer’s Workstation

As shown in the figure above, the developer environment includes two developer workstations. In practice, you can have as many workstations as your development team requires.

Each developer workstation should have the Windows 7 or Windows Vista operating system with a standalone SharePoint Foundation server and a local content database, so that one developer’s work doesn’t affect another’s and each developer can debug artifacts locally.

Each developer workstation will have the following installed:

  1. Windows 7 or Windows Vista 64-bit OS
  2. Standalone SharePoint Foundation 2010 server
  3. SharePoint Designer 2010
  4. Visual Studio 2010 (connected to TFS)

Each developer workstation should be connected to Team Foundation Server 2010 so that when a developer completes an artifact, he or she can check it in to TFS, and other developers can take the latest code from TFS if needed. This way, parallel development can happen without one developer affecting another’s work.

Integration/Testing Farm

Any production SharePoint environment should have a test environment in which new SharePoint web parts, solutions, service packs, patches, and add-ons can be tested. It is critical to deploy test farms, because many SharePoint add-ons could potentially disrupt or corrupt the formatting or structure of a production environment, and trying to test these new solutions on site collections or different web applications is not enough because the solutions often install directly on the SharePoint servers themselves. If there is an issue, the issue will be reflected in the entire farm.

The integration or testing farm should be similar to the existing environment, with the same add-ons and solutions installed, and should ideally include restores of production site collections to make it as similar as possible to the existing production environment. All changes and new products or solutions should subsequently be tested in this environment first.

Integration/testing servers host the final SharePoint sites and site collections as per the business requirements. QA tests all the business functionality here, and the customer can also perform user acceptance testing (UAT) before going live on the production server.

After user acceptance testing passes, all sites and site collections are deployed to the production server.

Advantages of an Integration/Testing Farm:

  1. Clean environments with the same physical architecture as production
  2. QA can test all dependent business functionality in one place
  3. The customer can participate in UAT
  4. Easy deployment/migration from the integration/testing farm to the production farm

Production Farm

The final stage is rolling your farm into a production environment. At this stage, you will have incorporated the necessary solution and infrastructure adjustments that were identified during the user acceptance test stage. These servers are generally on the customer's premises, and the development and testing teams do not have control over them.

There are various third-party tools available in the market for SharePoint data protection, administration, migration, compliance and integration.


Summary

In this way, you can design a physical architecture where the development SharePoint farm and the developers' workstations are integrated with TFS 2010. TFS and the content database are connected to the testing server or testing farm, where all the artifacts and content are integrated for QA and UAT. Finally, after UAT, everything is deployed to the production farm.

You can use virtual machines (VMs) for all the servers and workstations for a more effective infrastructure, because if a server crashes for some reason, you can quickly create a new VM with the needed OS from images.

Note: In the above figure, the integration/testing farm and the production farm are shown as single servers for clarity, but in reality they will be as large as the development farm, with multiple front-end web servers and a content database server. All servers run Windows Server 2008 R2 SP2 64-bit. Please visit here for more information on hardware and software requirements for SharePoint Foundation 2010.

A Look At : Visual Studio 2013 Update 3 CTP2


New technology improvements in Visual Studio 2013 Update 3 CTP 2

 

Technology improvements

The following technology improvements were made in this release.

CodeLens

  • CodeLens jobs that are running on the Team Foundation Server job agent have been optimized for performance specifically while processing branching and merging changesets.

Debugger

  • If you have more than one monitor, Visual Studio will remember which monitor a Windows Store application was last run on.
  • You can debug x86 applications that are built by .NET Native.
  • When you analyze managed memory dump files, you can go to Definition and Find All References of the selected type.
  • You can debug the dump files from .NET Native applications by using the Visual Studio debugger.

General

  • The Application Insights Tools for Visual Studio are now included in Visual Studio 2013 Update 3 CTP2. This initial integration as part of CTP2 includes some software updates and performance improvements.

IntelliTrace

  • You can skip straight to the details of performance events that are exported from Application Insights to IntelliTrace.

Profiler

  • The Performance and Diagnostics hub can open profiling sessions (.diagsession files) that were exported from the F12 tools in the latest developer preview of Internet Explorer 11.
  • Windows Presentation Foundation (WPF) and Win32 applications are supported by the new Memory Usage Tool in the Performance and Diagnostics Hub. For more information about how to use the tool to troubleshoot issues in native and managed memory, go to the following blog post:
    Diagnosing memory issues with the new Memory Usage Tool in Visual Studio

Release Management

  • You can use Windows PowerShell or the Windows PowerShell Desired State Configuration (DSC) feature to deploy and manage configuration data. Additionally, you can deploy to the following environments without having to set up Microsoft Deployment Agent:
    • Microsoft Azure environments
    • On-premises environments (standard environments)

Testing Tools

  • You can add custom fields and custom work flows for test plans and test suites.
  • You can use Manage Test Suites permission for granting access to test suites.
  • You can track changes to test plans and test suites by using work item history.

For more information about these features, see the following Visual Studio Developer Tools blog article:

Test Plan and Test Suite Customization with TFS2013 Update3

Visual Studio IDE

  • CodeLens authors and changes indicators are now available for Git repositories.
  • In Code Map, links are styled by using colors, and they display in the improved Legend.
  • Debugger Map automatically zooms to the call stack entry of interest and preserves user’s zoom preferences.
  • You can drag binaries from the Windows file explorer to a code map, and then start exploring binaries by using Code Map.

Known issues

Testing Tools

  • When you try to upgrade an existing TFS server that has Test management data to Visual Studio 2013 Team Foundation Server Update 3 CTP2 in JPN or CHS, the upgrade of Test Case Management service does not work.

Visual Studio IDE

  • In Visual Studio 2013 Ultimate Update 3 CTP2 localized (non en-us) drops, when trying to request a Code Map, or a Dependency Graph for the solution, the directed graph is not produced.

 

For more information on Visual Studio 2013 and other upgrades, visit http://support.microsoft.com/kb/2933779/en-us

How To : Peel back the layers of data and information and reveal meaningful BI with SharePoint

Business Intelligence (BI) often takes on the mantle of exotic, rare, and almost unattainable technology. But at its core, business intelligence is simply a method of reporting on what happened.



Granted it is a type of reporting that reaches beyond an ordinary peek into the rearview mirror of past business events; business intelligence helps to spot future trends, make informed go/no-go decisions, or identify potential threats. BI technology is strongest when it rests on a large supply of valid, diverse and current data, and can leverage the proper tools to help users understand and visualize queries about that data.

This blog post is about how SharePoint 2013 can help users solve practical business information problems, even though they don't have the time or the budget to custom build an enterprise-scale BI system. The underlying premise of this post is to show how SharePoint 2013 can provide a reasonable cost-benefit ratio and justify an investment in BI technology.


Before we jump into SharePoint 2013 and its capabilities, let’s take a high-level look at Business Intelligence.

What Problems Can BI Solve?


If the only tool you have in your toolbox is a hammer, then every problem might look like a nail. The fact is, most businesses are able to solve most problems without spending a dime on more technology. In other words, the ‘hammer’ most businesses have been using works just fine, because most of their problems look like nails. The challenge they face only comes into focus when their competition is able to solve the same type of problems, but they do it faster, cheaper, and with less effort. Obviously, this can be a doomsday scenario for the company falling behind, technologically speaking.

That said, Business Intelligence is a great tool… but what problems will it solve? Perhaps a better question would be: how do I figure out if BI can help my company? You are not alone in asking these questions. Just because we have the tools to do something amazing like BI doesn't mean you need it or can afford it. But it certainly would be beneficial for you to find out if and how a Business Intelligence capability would help your business.

The starting-line to find out if BI makes sense for your organization runs right through your own conference room. You need to sit down with your senior executives and managers and talk to them about the information they rely on to run their part of the business. What information do they need, when do they need it, what do they do with it, what information are they missing, and so on? Initiate this type of conversation and you will, undoubtedly, open up a window of opportunity to discuss the merits of Business Intelligence.

SharePoint 2013 and Business Intelligence

Assuming that you see value in establishing BI capabilities in your organization, a very good first step would be to evaluate Microsoft’s SharePoint 2013. Because Microsoft products are generally used throughout both the back-office and front-office of most businesses, SharePoint 2013 is a very powerful tool to integrate the data with the technical systems required to build BI capabilities.

The main theme for BI is aggregation of data from multiple sources and then making that data available when, where, and how it is needed. BI must also be in complete alignment with all corporate goals while it supports the needs of individual managers who are responsible for achieving those goals. SharePoint 2013 is designed to access information and put it in the hands of employees when and where they need it. Because of SharePoint 2013’s capabilities to enable collaboration and teamwork, its very nature aligns the goals of the business with the goals of the employees.

Data Warehousing Measures and Dimensions

Perhaps the most fundamental requirement of BI is the need for information or data. Often this data is distributed throughout multiple databases and must be aggregated in some form.

In data warehousing, which is the term used to describe the functions necessary to aggregate, store and access data for the purposes of Business Intelligence and analytics, the data is often loaded into Online Analytical Processing (OLAP) cubes. The data stored in a cube can be sorted and filtered based on measures and dimensions. This technique lets users query the cube based on practical business categories, enabling calculations such as sum, count, average and min/max. These aggregated values are called measures.

The other characteristic used in a cube is called a dimension. Dimensions are collections of information or references that describe a measurable event, and measures can be sliced and grouped by each dimension.
For example, let's say you wanted to run a report that gives you an up-to-the-minute total of sales volume and the number of units sold for each region of your company. In this example, the regions are the dimensions, and the sales volume and number of units are the measures.

SharePoint is designed to access cubes and work with the data stored in the cube, based on the available measures and dimensions.
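
To make the measures and dimensions idea concrete, here is a minimal sketch of querying such a cube from C# with the ADOMD.NET client library. The server, catalog, cube, measure and dimension names are illustrative assumptions, not references to a real environment:

using System;
using Microsoft.AnalysisServices.AdomdClient;

class CubeQuerySample
{
    static void Main()
    {
        // Server, catalog, cube, measure and dimension names below are placeholders.
        using (var connection = new AdomdConnection("Data Source=ssas-server;Catalog=SalesWarehouse"))
        {
            connection.Open();

            // MDX query: the measures (sales amount, units sold) go on columns,
            // the dimension members (regions) go on rows.
            var command = new AdomdCommand(
                "SELECT { [Measures].[Sales Amount], [Measures].[Units Sold] } ON COLUMNS, " +
                "[Region].[Region].Members ON ROWS FROM [Sales]", connection);

            CellSet cellSet = command.ExecuteCellSet();

            // Each row position is a region; each column position is a measure.
            foreach (Position row in cellSet.Axes[1].Positions)
            {
                string region = row.Members[0].Caption;
                object salesAmount = cellSet.Cells[0, row.Ordinal].Value;
                object unitsSold = cellSet.Cells[1, row.Ordinal].Value;
                Console.WriteLine("{0}: {1} / {2}", region, salesAmount, unitsSold);
            }
        }
    }
}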

Key Performance Indicators

Business Intelligence enables visualization of raw data in the form of charts, graphs, pictures, etc. Typically Key Performance Indicators (KPIs), Score Cards, and Dashboards use the raw data and turn it into something that can be easily consumed by a viewer. For example, a project status KPI is commonly displayed as a green, yellow or red light to indicate that the project is on target with no issues, that there are minor issues, or that the project is in trouble. This BI technique is an easy way to visualize the data, cut through all the non-essential information and get to the point. It also allows the viewer to quickly gauge whether the corporate goals are being met or are in jeopardy.

SharePoint 2013 Business Intelligence Solutions

SharePoint 2013 has several products that may be used as part of a BI system. The following is a list of commonly used Microsoft components; all or just some of them can be used to create a practical and powerful BI system:

  • BI Data Services – MS SQL Server Data Services and Integration Services (both used to extract, transform and load data from disparate sources)
  • BI Engine – MS SQL Server Analysis Services (supports OLAP cubes by letting you design, create, and manage multidimensional structures that contain data aggregated from other data sources, such as relational databases)
  • PowerShell (a Microsoft task automation framework consisting of a command-line shell and associated scripting language built on .NET technology)
  • PowerPivot for SharePoint (Analysis Services running in SharePoint mode, providing server hosting of PowerPivot data)
  • Microsoft Excel (commonly used spreadsheet with Pivot Tables and Pivot Charts that can be used with SharePoint)
  • Microsoft PerformancePoint Designer (integrated with SharePoint to create dashboards, scorecards, and analytics)

Setting Up SharePoint 2013


When SharePoint 2013 is installed and configured, Central Administration (CA) is provisioned. Central Administration is where you control the settings and features of SharePoint web applications and service applications, such as Excel Services or PerformancePoint. CA is a convenient tool that helps link the applications and tools required by SharePoint to set up a BI system. You will also use Microsoft's PowerShell to set up the infrastructure for SharePoint sites so they can run in a multi-tenant environment on a single physical or virtual server.

Excel Services or Performance Point

You can use either or both of these tools to create dashboards. Either one will help you establish trusted locations (e.g. http:// links), data providers, libraries, and databases.
Excel is often the easiest and most familiar tool to display and analyze BI data. Since Excel has been around a long time and so many people are experienced with it, it is a good choice as the front-end tool for your BI environment.

With Excel you can add measures and dimensions from a source data cube (created by Analysis Services) and then use the Pivot Chart capabilities in Excel to select the fields you want to display, such as sales amount, product categories, sales by geography, etc. You can also create Pivot Tables if you want to display a spreadsheet with multiple columns and rows, also using the fields from the cube.

SharePoint’s Practical Solution


Microsoft and SharePoint have all the tools you need to create a very robust and practical BI solution. It is probable that you currently own licenses to many of the components, if not all, that are required to build a solution. If you are interested in Business Intelligence and you would consider a Microsoft-based solution, you might find that you can be up and running in a matter of days with a minimal investment.

List of all Visual Studio ALM Virtual Machines


COURTESY OF THE ALM RANGERS @

 


Given the growing list of virtual machines we have published to showcase various application lifecycle management scenarios, I created this blog post to be a permanent location you can bookmark any time you want to find the latest and greatest. An easy-to-remember URL for this page is http://aka.ms/ALMVMs.

Visual Studio 2013 ALM Virtual Machine and Hands-on-Labs / Demo Scripts
Last Update: January 9, 2014
This virtual machine is based on the RTM release of Visual Studio 2013 and includes hands-on-labs / demo scripts which showcase the new ALM capabilities introduced in this release. This VM was also upgraded on January 9, 2014 to include the content and hands-on-labs / demo scripts for capabilities which were originally introduced in Visual Studio 2010/2012.

Visual Studio 2012 Update 2 ALM Virtual Machine and Hands-on-Labs / Demo Scripts
Last Updated: April 17, 2013
This is the primary ALM virtual machine which demonstrates many of the scenarios introduced in Visual Studio 2010/2012 for application lifecycle management. This includes project management, source control, developer productivity and collaboration, testing, lab management, and IntelliTrace.

Team Foundation Server 2012 and Project Server 2013 Integration Virtual Machine and Hands-on-Labs / Demo Scripts
Last Updated: April 17, 2013
This VM highlights the integration scenarios which are possible between Team Foundation Server and Project Server which allow development teams to automatically synchronize the status of their projects with a centralized project management office (PMO).

Team Foundation Server 2012 and System Center 2012 Operations Manager Integration Virtual Machine and Hands-on-Lab / Demo Script
Last Updated: February 7, 2013
This VM highlights the integration scenarios which are possible between System Center 2012 Operations Manager and Team Foundation Server 2012 which allow operations teams to easily surface incidents from production in a rich, actionable way for developers to quickly diagnose these problems.

SharePoint 2013 and CRM 2011 integration. A customer portal approach

A Look At : Federated Authentication

More and more organisations are looking to collaborate with partners and customers in their ecosystem to help them achieve mutual goals. SharePoint is a great tool for enabling this collaboration but many organisations are reluctant to create and maintain identities for users from other organisations just to allow access to their own SharePoint farm. It’s hardly surprising; identity management is complex and expensive.

You have to pay for servers to host your identity provider (Microsoft Active Directory if you are using Windows); you have to keep it secure; you have to back it up and ensure that it is always available, and you have to pay for someone to maintain and administer it. Identity management becomes even more complicated when your organisation wants to give external users access to SharePoint; you have to ensure that they can only access SharePoint and can’t gain access to other systems; you have to buy additional client access licenses (CALs) for each external user because by adding them to your Active Directory you are making them an internal user.

 


Microsoft, Google and others all offer identity providers (also known as IdPs or claims providers) that are free to use, and by federating with a third-party IdP you shift the ownership and management of identities on to them. You may even find that the partner or customer you are looking to collaborate with offers their own IdP (most likely Active Directory Federation Services if they themselves run Windows). Of course, you have to trust whichever IdP you choose; they will be responsible for authenticating the user instead of you, so you must be confident that they will do a good job. You must also check what pieces of information about a user (also known as claims; for example, name, email address, etc.) each IdP offers, to ensure it can tell you enough about a user for your purposes, as they don't all offer the same claims.

Having introduced support for federated authentication in SharePoint 2010, Microsoft paved the way for us to federate with third party IdPs within SharePoint itself. Unfortunately, configuring SharePoint to do this is fiddly and there is no user interface for doing so (a task made more onerous if you want to federate with multiple IdPs or tweak the configuration at a later date). Fortunately Microsoft has also introduced Azure Access Control Services (ACS) which makes the process of federating with one or more IdPs simple and easy to maintain. ACS is a cloud-based service that enables you to manage the IdPs used by your applications. The following diagram illustrates, at a high-level, the components of ACS.

An ACS namespace is a container for mappings between IdPs and one or more relying parties (the applications that want to use ACS), in our case SharePoint. Associated with each mapping is a rule group which defines how the relying party handles the individual claims associated with an identity. Using rule groups you can choose to hide or expose certain claims to specific relying parties within the namespace.

So by creating an ACS namespace you are in effect creating your own unique IdP that encapsulates the configuration for federating with one or more additional IdPs. A key point to remember is that your ACS namespace can be used by other applications (relying parties) that want to share the same identities, not just SharePoint. 
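
Once the rule group has passed claims through, the relying party simply sees them on the authenticated principal. As a rough, hedged sketch (for a .NET 4.5 relying party that has already been configured to trust the ACS namespace; this is not SharePoint-specific code), you could inspect what arrives like this:

using System;
using System.Security.Claims;

static class ClaimsInspection
{
    // Dumps every claim the IdP (via ACS and its rule groups) passed through
    // to this relying party, e.g. name, email address, identity provider.
    public static void DumpIncomingClaims()
    {
        ClaimsPrincipal principal = ClaimsPrincipal.Current;

        foreach (Claim claim in principal.Claims)
        {
            Console.WriteLine("{0} = {1} (issued by {2})", claim.Type, claim.Value, claim.Issuer);
        }
    }
}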

Once your ACS namespace has been created you need to configure SharePoint to trust it, which most of the time will be a one off task and from that point on you can manage and maintain the IdPs you support from within ACS. The following diagram illustrates, at a high-level, the typical architecture for integrating SharePoint and ACS.

 

In the scenario above the SharePoint web application is using two different claims providers (they are referred to as claims providers in SharePoint rather than IdPs). One is for internal users and trusts an internal AD domain and another is for external users and trusts an ACS namespace.

When a user tries to access a site within the web application they will get the default SharePoint Sign In page asking them which provider they want to use.

This page can be customised and branded as required. If the user selects Windows Authentication they will get the standard authentication dialog. If they select Azure Provider (or whatever you happen to have called your claims provider) they will be redirected to your ACS Sign In page.

Again this page can be customised and branded as required. By clicking on one of the IdPs the user will be redirected to the appropriate Sign In page. Once they have been successfully authenticated by the IdP they will be redirected back to SharePoint.

 

Conclusion

By integrating SharePoint with ACS you can simplify the process of giving external users access to SharePoint. It could also save you money in licence fees and administration costs[i].

An important point to bear in mind when planning federated authentication for SharePoint is that in order for Search to be able to index content within SharePoint, you must enable Windows authentication on at least one zone within your web application. Also, if you use a reverse proxy to perform authentication, such as Microsoft Threat Management Gateway, before allowing traffic to hit your SharePoint servers, you will need to disable the authentication checks.

 

[i] The licensing model for external users differs between SharePoint 2010 and SharePoint 2013. With SharePoint 2010, if you expose your farm to external users, whether anonymously or not, you have to purchase a separate licence for each server. The licence covers you for any number of external users and you do not need to buy a CAL for each user. With SharePoint 2013, Microsoft did away with the server licence for external users, and you still don't need to buy CALs for external users.

A Look At : The importance of people in a SharePoint project


As with all other sizeable new business software implementations, a successful SharePoint deployment is one that is well thought-out and carefully managed every step of the way.

However in one key respect a SharePoint deployment is different from most others in the way it should be carried out. Whereas the majority of ERP solutions are very rigid in terms of their functionality and in the nature of the business problems they solve, SharePoint is far more of a jack-of-all-trades type of system. It’s a solution that typically spreads its tentacles across several areas within an organisation, and which has several people putting in their two cents worth about what functions SharePoint should be geared to perform.

So what is the best approach? And what makes for a good SharePoint project manager?

From my experience with SharePoint implementations, I would say first and foremost that a SharePoint deployment should be approached from a business perspective, rather than from a strictly technology standpoint. A SharePoint project delivered within the allotted time and budget can still fail if it’s executed without the broader business objectives in mind. If the project manager understands, and can effectively demonstrate, how SharePoint can solve the organisation’s real-world business problems and increase business value, SharePoint will be a welcome addition to the organisation’s software arsenal.

Also crucial is an understanding of people. An effective SharePoint project manager understands the concerns, limitations and capabilities of those who will be using the solution once it’s implemented. No matter how technically well-executed your SharePoint implementation is, it will amount to little if hardly anyone’s using the system. The objective here is to maximise user adoption and engagement, and this can be achieved by maximising user involvement in the deployment process.

 

Rather than only talk to managers about SharePoint and what they want from the system, also talk to those below them who will be using the product on a day-to-day basis. This means not only collaborating with, for example, the marketing director but also with the various marketing executives and co-ordinators.

 

It means not only talking with the human resources manager but also with the HR assistant, and so on. By engaging with a wide range of (what will be) SharePoint end-users and getting them involved in the system design process, the rate of sustained user adoption will be a lot higher than it would have been otherwise.

 

An example of user engagement in action concerns a SharePoint implementation I oversaw for an insurance company. The business wanted to improve the tracking of its documentation using a SharePoint-based records management system. Essentially the system was deployed to enhance the management and flow of health insurance and other key documentation within the organisation to ensure that the company meets its compliance obligations.

 

The project was a great success, largely because we ensured that there was a high level of end-user input right from the start. We got all the relevant managers and staff involved from the outset, we began training people on SharePoint early on and we made sure the change management part of the process was well-covered.

 

Also, and very importantly, the business value of the project was sharply defined and clearly explained from the get-go. As everyone set about making the transition to a SharePoint-driven system, they knew why it was important to the company and why it was going to be good for them too.

By contrast a follow-up SharePoint project for the company some months later was not as successful. Why? Because with that project, in which the company abandoned its existing intranet and developed a new one, the business benefits were poorly defined and were not effectively communicated to stakeholders. That particular implementation was driven by the company’s IT department which approached the project from a technical, rather than a business, perspective. User buy-in was not sought and was not achieved.

 

When the SharePoint solution went live hardly anyone used it because they didn’t see why they should. No-one had educated them on that. That’s the danger when you don’t engage all your prospective system end-users throughout every phase of a SharePoint implementation project.

As can be seen, while it is of course critical that the technical necessities of a SharePoint deployment be met, that’s only part of the picture. Without people using the system, or with people using the system to less than its maximum potential, the return on your SharePoint investment will never materialize.

Comprehensive engagement with all stakeholders, that’s where the other part of the picture comes in. That’s where a return on investment, an investment of time and effort, will most assuredly be achieved.

How To : 8 Steps for a successful SharePoint Change Management

As with virtually any other significant IT implementation project, a SharePoint deployment is as dependent on people as it is on technology for its success. If your system end users are not using the system, or are not using it correctly or to its full potential, you will never achieve that all-important return on investment.


One hundred percent adoption by users who are proficient with SharePoint and are committed to gaining the greatest value from the software should be a key objective for all SharePoint project leaders. To bring this about it is crucial to develop and execute an effective change management strategy as a key component of your SharePoint implementation.

At Professional Advantage we have had great success with our SharePoint clients by informally following a change management process that was thought up by Dr John Kotter, an American professor whose 1996 book, Leading Change, is still highly influential in the world of change management theory.

In his book Dr Kotter puts forward an eight-step process that change leaders can follow to avoid failure and adjust successfully to change. These steps, which can be usefully applied to a SharePoint deployment project, are:

 

1. Create a sense of urgency

No SharePoint project will get off the ground, let alone become successful, if there is no buy-in at the executive level. Here it is important to put a strong case forward as to why the move to SharePoint is very much in the organisation’s best interests. Begin by doing lots of research. Examine, for example, the competitive disadvantages that will be suffered if no change is made. Also highlight those business functions and processes within the organisation that could be significantly improved with SharePoint. Tie the benefits of SharePoint to the organisation’s broad business goals and ongoing strategic objectives. Explain as persuasively as possible why the current situation is unsustainable and why, when it comes to moving to SharePoint, it’s a case of ‘the sooner the better.’ The stronger your business case for a SharePoint implementation, the more likely it is that it will get the green light.

 

2. Create a guiding coalition

Once you’ve received the go-ahead for the SharePoint deployment the next step is to put together a coalition of people with the power and commitment to lead the change. This team will ideally be comprised of a wide variety of motivated individuals: department managers, technical experts and those at the coalface who will be using SharePoint on a day-to-day basis should all form part of the coalition. They should also be people who have grasped the urgency of the task ahead, who understand the business goals that will be achieved with a successful implementation and who recognise that 100% user adoption is a central goal of the project.

Crucial to the success of the coalition’s efforts is that its members all work well together. As the project evolves these change-drivers will be sharing ideas, making decisions and identifying and solving problems. Team members must be able to trust each other and collaborate effectively; if this does not occur the project will almost certainly stall.

 

3. Develop a change vision

By developing a clear vision for the project you give those involved a direction to follow and a goal to achieve. Ideally the vision will be easy to comprehend, achievable, flexible and something that all stakeholders can get enthusiastic about.

While the vision will by definition be broad, the strategies that underpin it will be specific. Priorities for the project should be defined and acted upon, with priority given to ‘low hanging fruit’, ie tasks that can be easily achieved and which will deliver visible, measurable and meaningful change within the organisation. This approach will add momentum to the project by enabling stakeholders to gain a real-world perspective on the changes that are in progress and why they’re good both for the organisation and for individual SharePoint users.

 

4. Communicate the vision for buy-in

Communicating your vision and promoting the behavioural changes that will drive it are critical for a successful SharePoint deployment. This step requires a top-down communications strategy that is consistent, creative, inspiring and ongoing.

At Professional Advantage our communication strategy forms part of our SharePoint adoption plan and includes a variety of tactics designed to get staff using SharePoint, and using it properly. In the past such tactics have included SharePoint launch parties, lunch sessions, system design competitions amongst staff, social media, blogs and the putting up of posters around the office promoting the use of SharePoint. The objective here is, of course, to get users educated and engaged. The more creative you are, the better. And always keep in mind that user adoption will likely be low unless you can answer the ‘What’s in it for me’ question.

 

5. Empower broad-based action

To achieve the highest possible level of SharePoint user adoption it’s best to remove any barriers that might impede that objective. This particularly applies to the laggards, ie those who are most resistant to change and least likely to make full use of the system.

Typically this will involve removing software and other technologies that make it easy for workers to continue doing things the old way. Too often organisations include this as an afterthought, resulting in smaller and slower user adoption. Here it is important to plan from the beginning, anticipate what systems will be made redundant (or scaled down) and schedule that in to the SharePoint implementation plan.

Also important here is encouragement from above. Supported by proper ongoing training, those who will be using SharePoint need to be encouraged to step out of their comfort zone and embrace the new system.

 

6. Celebrate short-term wins

Short-term wins are essential to the success of your SharePoint deployments, as are the active celebration of these wins when they occur. The transition to a SharePoint environment is a long-term process and momentum must be maintained every step of the way. Perhaps, as a result of SharePoint, a new level of intra-office collaboration has been achieved, or the organisation has experienced dramatic time savings with particular processes, or has achieved new standards of compliance. Whatever the win, the broadcasting of it should form part of the SharePoint communications plan. If people can see how and why SharePoint is working, they will be more likely to embrace the system and, in so doing, contribute to the achievement of the organisation’s business goals.

 

7. Consolidate gains and generate more change

You’ve scored some wins and people are now comfortable using SharePoint. While that is a wonderful thing, the danger at this stage is complacency. Rather than take your foot off the accelerator it’s important to build on what’s been achieved and pursue larger, more ambitious objectives. To fully ingrain SharePoint into your organisation’s culture (and to avoid regression) ramp things up with new projects and initiatives.

 

8. Making it stick

To fully embed SharePoint into your organisation’s culture and business practices everyone needs to be on board. Just as during a life-threatening cyclone there are always some residents who refuse to heed advice to leave town, with a SharePoint deployment there will always be some who are unwilling to move. Here it is important to reinforce, and continue to celebrate, the victories that have been achieved and communicate how important it is that everyone adopt the system.

As the SharePoint project continues to evolve so too will its vision and purpose. With the right planning and execution, and with the right leadership, people will, over time, forget the old ways of doing things and fully embrace the new.

New Office 365 API VS.Net Add-In exposes Javascript Client model

You can now access the Office 365 APIs using libraries available for .NET and JavaScript. These libraries make it easier to interact with the REST APIs from the device or platform of your choice.

 


The libraries are included in the latest update for Office 365 API Tools for Visual Studio Preview. Along with the libraries, this release also brings you some key updates to the tooling experience, making it easier to interact with Office 365 services.

Client libraries

Office 365 provides REST-based APIs that enable developers to access Office resources such as calendar, contacts, mail, files, and more.

The client libraries will let you:

  • Perform authentication and discovery
  • Use the Mail, Calendar and Contacts API
  • Use the My Files and Sites API (currently .NET only, with JavaScript coming soon)
  • Use the Users and Groups API

 

You can program directly against the REST APIs to interact with Office 365, but it requires you to write and maintain code around managing authentication tokens, constructing the right URLs and queries for the API you want to access, and performing other tasks.
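
As a rough illustration of that overhead, a direct call with HttpClient might look like the sketch below. The endpoint mirrors the OData URL used later in this post, the access token is assumed to have been acquired separately, and the URL and query options should be treated as placeholders:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class RawRestSample
{
    // Calls the calendar endpoint directly; you own the token handling,
    // URL construction and JSON parsing yourself.
    static async Task<string> GetUpcomingEventsRawAsync(string accessToken)
    {
        using (var http = new HttpClient())
        {
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);
            http.DefaultRequestHeaders.Accept.Add(
                new MediaTypeWithQualityHeaderValue("application/json"));

            var response = await http.GetAsync(
                "https://outlook.office365.com/ews/odata/Me/Events?$top=10");
            response.EnsureSuccessStatusCode();

            return await response.Content.ReadAsStringAsync();
        }
    }
}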

By using client libraries to access the Office 365 APIs, you can reduce the complexity of the code you need to write to access the APIs. We’re providing these libraries for .NET as well as JavaScript developers for use with the just-announced multi-device hybrid applications.

Here are some examples of how easy it is access the Office 365 APIs using these libraries.

.NET C# code to authenticate and get upcoming events from your Office 365 calendar:

// Shows UI to authenticate
Authenticator authenticator = new Authenticator();
AuthenticationInfo result = await authenticator.AuthenticateAsync("https://outlook.office365.com");

The AuthenticateAsync method will prompt for a username and password and authenticate against the specified resource url, like outlook.office365.com in this case. Once you have the authentication information, you can create a client object that serves as the base for accessing all the APIs for Exchange:


// Create a client object
ExchangeClient client =
new ExchangeClient(new Uri("https://outlook.office365.com/ews/odata"),
result.GetAccessToken);

Because we’re using .NET here, we get to take advantage of the native language capabilities, like LINQ, so querying the Office 365 calendar is as simple as writing a LINQ query and executing it:

// Obtain calendar event data
var eventsResults = await (from i in client.Me.Events
where i.End >= DateTimeOffset.UtcNow
select i).Take(10).ExecuteAsync();

With just those four lines of code you can start making calls to the Office 365 APIs!

We wanted to make sure that you can reach multiple device and service platforms with a consistent API, so the client libraries are portable .NET libraries, which means they also work with Android and iOS devices through Xamarin. Because authentication needs to display a UI that is different on the various platforms, we also provide platform-specific authentication libraries, which can then be used with the portable ones to provide an end-to-end experience.

For developers creating multi-device hybrid applications that target multiple device platforms through JavaScript, we also have JavaScript versions of these libraries that provide a similar experience while adopting JavaScript’s patterns and practices, such as using the promises pattern instead of await.

 

Here is the same example to authenticate and get calendar events in JavaScript:

var authContext = new O365Auth.Context();
authContext.getIdToken('https://outlook.office365.com/')
    .then((function (token) {
        // authentication succeeded
        var client = new Exchange.Client('https://outlook.office365.com/ews/odata',
            token.getAccessTokenFn('https://outlook.office365.com'));
        client.me.calendar.events.getEvents().fetch()
            .then(function (events) {
                // get currentPage of calendar events
                var myevents = events.currentPage;
            }, function (reason) {
                // handle error
            });
    }).bind(this), function (reason) {
        // authentication failed
    });

The flow to authenticate and create a client object is similar across .NET and JavaScript, but you’re doing it in a way that should be natural to the language.

Along with the JavaScript files for these libraries, we are also including the TypeScript type definition (.d.ts)—in case you choose to develop your apps in TypeScript.

As you get started using these libraries, there are a few things to keep in mind. This is a very early preview release of the libraries that is meant to prove out the concept and get feedback on it. The libraries do not currently cover all the APIs provided by the services and some of the APIs in the library may not work. The APIs in the libraries themselves will definitely change in future updates.

Note that while we tend to call these “client” libraries, these also work with .NET server technologies like Asp.Net Web Forms and MVC, so you really get to target the breadth of the .NET platform.

 

Tooling updates

With today’s update of our Office 365 API Tools for Visual Studio 2013, the tool displays the available Office 365 services that you can add to your project. Once you’ve signed in with your Office 365 credentials, adding a service to your project is as easy as selecting the appropriate service and applying the required permissions.


Once you submit the changes, Visual Studio performs the following:

  1. Registers an application (if there isn’t an application registered yet) in Microsoft Azure Active Directory to consume Office 365 services.
  2. Adds the following to the project:
    1. Client libraries from Nuget for the configured services.
    2. Sample code files that use the Client Libraries.

Project types supported

With the broad reach of the client libraries, the Office 365 API tool is now available for a variety of project types (client, desktop, and web) in Visual Studio. Here are all the project types supported with the May update:

  • .NET Windows Store Apps
  • Windows Forms Application
  • WPF Application
  • ASP.NET MVC Web Application
  • ASP.NET Web Forms Application
  • Xamarin Android and iOS Applications
  • Multi-device hybrid apps

Installing the latest update

To install the latest update, you can either:

  • Check for updates within Visual Studio. To do so, follow these steps:
    1. In Visual Studio menu, click Tools->Extensions and Updates->Updates.
    2. You should see the update available for Office 365 API Tools.
    3. Click Update to update to the latest version.

–OR–

  • Download the extension and install it manually.

Once you’ve updated, you can invoke the Office 365 API tool as usual, that is, by going to your project node in the Solution Explorer and selecting Add->Connected Service from the context menu.

Looking forward to seeing your Apps out there when I visit the stores!!


MSDN references

Also check out the new SharePoint Online Solution Pack for branding and provisioning. This package also contains some examples, which originate from the AMS reference implementations. Here are the direct links for the Solution Pack:

You can find an introduction to this SharePoint Online Solution Pack for branding and provisioning in the following blog post: Introduction to SharePoint Online Solution Pack for branding and provisioning released.

The “Hybrid” SharePoint Online Model


The hybrid approach is not about merging information from two different site collections into one, or making sure an on-premises document library has the same content as the document library in an online environment. So what does hybrid technically mean, then? It basically means we have two separate environments that act and operate completely independently of each other.


 

Even the SharePoint service applications such as the user profile service, managed metadata service, and search cannot be shared between the on-premises farm(s) and SharePoint Online environment. Instead, administrators should choose to either fully deploy a service application in only one location, or configure an instance of the service in each environment. But still there are ways to integrate functionality between the two environments.

The idea is that you first segment the different workloads of SharePoint across the on-premises and online environments. You often see that commodity services like collaboration on team sites, news sites, project sites and so on are hosted in the online environment, while the more advanced scenarios often remain on-premises (think of BI capabilities, FAST Search or advanced custom solutions).

 

So where does the word hybrid come from, then? It basically means that we stitch these two environments together using the same look and feel, so that end users have a completely transparent and rich experience and do not notice the difference between working in the on-premises environment and the online environment. They can only see the difference by looking at the URL.

Single Sign On

In order to have such a completely transparent and rich experience from an end-user perspective, it is important that end users only need to authenticate once. This can be accomplished by implementing and configuring single sign-on. Once this has been set up, there is a trust relationship between the on-premises and online environments, which ensures that end users who have already authenticated in the on-premises environment (Active Directory) don't need to re-enter their password in the online environment. Navigating between the on-premises and online environments is therefore transparent, without password prompts. Should you require more information on how this technology works or how to implement it, please see the following links:

 

How Single Sign-On Works in Office 365
http://community.office365.com/en-us/w/sso/727.aspx

Prepare for Single Sign on:
http://onlinehelp.microsoft.com/en-us/office365-enterprises/ff652540.aspx

Plan for and deploy Active Directory Federation Services 2.0 for use with single sign-on
http://onlinehelp.microsoft.com/en-us/office365-enterprises/ff652539.aspx

Single sign-on: Roadmap
http://onlinehelp.microsoft.com/en-us/office365-enterprises/hh125004.aspx

Deploying and Configuring ADFS 2.0
http://www.youtube.com/watch?v=fwHIKlAPV0g

Questions about Single Sign On (SSO) with Office 365 for Education
http://blogs.technet.com/b/educloud/archive/2011/09/23/questions-about-single-sign-on-sso-with-office-365-for-education.aspx

Video Screencast: Complete setup details for federated identity access from on-premise AD to Office 365
http://blogs.msdn.com/b/plankytronixx/archive/2011/01/24/video-screencast-complete-setup-details-for-federated-identity-access-from-on-premise-ad-to-office-365.aspx

Branding

So how do we give these two environments the same look and feel (branding), so that the end user doesn't notice the difference? This is not as simple as it sounds. In order to make the environments look and feel the same, you need to design and apply the same master pages and use the same icons, images and style sheets. In addition, you need to make sure the global navigation of both environments integrates seamlessly by linking to each other's environment.


More detailed information and things to consider when branding a SharePoint Online environment can be found here.
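
As a small illustration of the kind of work involved, the sketch below uses the SharePoint client object model (CSOM) to point a site at a shared custom master page. The site URL, account and master page path are placeholder values, and an on-premises site would use its own credentials rather than SharePointOnlineCredentials:

using System;
using System.Security;
using Microsoft.SharePoint.Client;

class BrandingSample
{
    static void Main()
    {
        // Placeholder site URL, account and master page path.
        using (var context = new ClientContext("https://contoso.sharepoint.com/sites/intranet"))
        {
            var password = new SecureString();
            foreach (char c in "your-password") password.AppendChar(c);
            context.Credentials = new SharePointOnlineCredentials("admin@contoso.onmicrosoft.com", password);

            // Point both the system master page and the site master page at the
            // same custom design so this site matches the on-premises look and feel.
            Web web = context.Web;
            web.MasterUrl = "/sites/intranet/_catalogs/masterpage/contoso.master";
            web.CustomMasterUrl = "/sites/intranet/_catalogs/masterpage/contoso.master";
            web.Update();

            context.ExecuteQuery();
        }
    }
}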

Search

Search is one area that has some integration capabilities, though the integration is not ideal, as we can't share the relevance of the search results between the two environments. What we can do is either have two search boxes, one for on-premises content and one for online content, or use federated search. With federated search you run one search query but get two separate result sets from two different content sources, shown side by side. Below is a screenshot of search results from SharePoint and search results from Bing.


Obviously you can customize the search results page and its layout so that it fits your needs. Bear in mind, though, that you can only set up federated search in an on-premises environment; it is not available in the online environment (see also the Microsoft SharePoint Online for Enterprises Service Description). More info about the search integration capabilities can be found in the whitepaper “Hybrid SharePoint Environments with Office 365”.

 

 

User profile

A user’s my site and my profile should exist in a single environment only to ensure that there is a single correct and complete source of user data. Although the user profile service cannot be shared between environments, it is possible to link on-premises SharePoint User Profiles to Office 365 and vice versa. So whichever environment a user is currently browsing, if they access their own or another user’s profile, it will redirect to the environment that is hosting the service. More information on how to implement user profiles and my sites in a hybrid environment can be found in the whitepaper “Hybrid SharePoint Environments with Office 365”.

 

Business Connectivity Services

Since the November update of SharePoint Online, we can connect to Line of Business (LOB) data stored either in your on-premises environment or in Azure using the Business Connectivity Services (BCS) component. As long as your LOB application is exposed to the web, you should be able to hook the data up to SharePoint Online. For more information about BCS in SharePoint Online, please see the following resources:

Introduction to Business Connectivity Services in SharePoint Online
http://msdn.microsoft.com/en-us/library/hh412217.aspx

What’s New for BCS in SharePoint Online
http://msdn.microsoft.com/en-us/library/hh418045.aspx

SharePoint Online Developer Resource Center
http://msdn.microsoft.com/en-us/sharepoint/gg153540.aspx

 

 

 

Integrating other components

Though it can be challenging to accomplish forms of integration for other SharePoint components between the two environments, there are techniques and strategies to take into account when you are planning and designing a hybrid environment. A lot more detail about these techniques and strategies can be found in a blog post soon to follow.

A look at – The Architecture of the Microsoft Analytics Platform System

Architecture of the Microsoft Analytics Platform System

In April 2014, Microsoft announced the Analytics Platform System (APS) as Microsoft’s “Big Data in a Box” solution for addressing the big data challenge. APS is an appliance solution with hardware and software that is purpose-built and pre-integrated to address the overwhelming variety of data while providing customers the opportunity to access this vast trove of data. The primary goal of APS is to enable the loading and querying of terabytes and even petabytes of data in a performant way using a Massively Parallel Processing version of Microsoft SQL Server (SQL Server PDW) and Microsoft’s Hadoop distribution, HDInsight, which is based on the Hortonworks Data Platform.

Basic Design

An APS solution is comprised of three basic components:

  1. The hardware – the servers, storage, networking and racks.
  2. The fabric – the base software layer for operations within the appliance.
  3. The workloads – the individual workload types offering structured and unstructured data warehousing.

The Hardware

Utilizing commodity servers, storage, drives and networking devices from our three hardware partners (Dell, HP, and Quanta), Microsoft is able to offer a high-performance scale-out data warehouse solution that can grow to very large data sets while providing redundancy of each component to ensure high availability. Starting with standard servers and JBOD (Just a Bunch Of Disks) storage arrays, APS can grow from a simple 2-node-plus-storage solution to 60 nodes. At scale, that means a warehouse that houses 720 cores, 14 TB of RAM, 6 PB of raw storage and ultra-high-speed networking using Ethernet and InfiniBand networks, while offering the lowest price per terabyte of any data warehouse appliance on the market (Value Prism Consulting).

Fabric

The fabric layer is built using technologies from the Microsoft portfolio that enable rock solid reliability, management and monitoring without having to learn anything new. Starting with Microsoft Windows Server 2012, the appliance builds a solid foundation for each workload by providing a virtual environment based on Hyper-V that also offers high availability via Failover Clustering all managed by Active Directory. Combining this base technology with Clustered Shared Volumes (CSV) and Windows Storage Spaces, the appliance is able to offer a large and expandable base fabric for each of the workloads while reducing the cost of the appliance by not requiring specialized or proprietary hardware. Each of the components offers full redundancy to ensure high-availability in failure cases.

Workloads

Building upon the fabric layer, the current release of APS offers two distinct workload types – structure data through SQL Server Parallel Data Warehouse (PDW) and unstructured data through HDInsight (Hadoop). These workloads can be mixed within a single appliance offering flexibility to customers to tailor the appliance to the needs of their business.

SQL Server Parallel Data Warehouse is a massively parallel processing, shared nothing scale-out solution for Microsoft SQL Server that eliminates the need to ‘forklift’ additional very large and very expensive hardware into your datacenter to grow as the volume of data exhaust into your warehouse increases. Instead of having to expand from a large multi-processor and connected storage system to a massive multi-processor and SAN based solution, PDW uses the commodity hardware model with distributed execution to scale out to a wide footprint. This scale wide model for execution has been proven as a very effective and economical way to grow your workload.

HDInsight is Microsoft’s offering of Hadoop for Windows, based on the Hortonworks Data Platform from Hortonworks. See the HDInsight portal for details on this technology. HDInsight is now offered as a workload on APS to allow for on-premises Hadoop that is optimized for data warehouse workloads. By offering HDInsight as a workload on the appliance, the pressure to define, construct and manage a Hadoop cluster has been minimized. And by using PolyBase, Microsoft’s SQL Server to HDFS bridge technology, customers can not only manage and monitor Hadoop through tools they are familiar with, but can for the first time use Active Directory to manage security for the data stored within Hadoop, offering the same ease of use for user management offered in SQL Server.

Massively-Parallel Processing (MPP) in SQL Server

Now that we’ve laid the groundwork for APS, let’s dive into how we load and process data at such high performance and scale. The PDW region of APS is a scale-out version of SQL Server that enables parallel query execution to occur across multiple nodes simultaneously. The effect is the ability to break what appears to be a very large operation into tasks that can be managed at a smaller scale. For example, a query against 100 billion rows in a SQL Server SMP environment would require the processing of all of the data in a single execution space. With MPP, the work is spread across many nodes to break the problem into more manageable and easier-to-execute tasks. In a four-node appliance (see the picture below), each node is only asked to process roughly 25 billion rows – a much quicker task.

To accomplish such a feat, APS relies on a couple of key components to manage and move data within the appliance – a table distribution model and the Data Movement Service (DMS).

The first is the table distribution model that allows for a table to be either replicated to all nodes (used for smaller tables such as language, countries, etc.) or to be distributed across the nodes (such as a large fact table for sales orders or web clicks). By replicating small tables to each node, the appliance is able to perform join operations very quickly on a single node without having to pull all of the data to the control node for processing. By distributing large tables across the appliance, each node can process and return a smaller set of data returning only the relevant data to the control node for aggregation.

To create a table in APS that is distributed across the appliance, the user simply needs to add the key to which the table is distributed on:

CREATE TABLE [dbo].[Orders]
(
  [OrderId] ...
)
WITH
(
  DISTRIBUTION = HASH([OrderId])
)

This allows the appliance to split the data and place incoming rows onto the appropriate node in the appliance.

The second component is the Data Movement Service (DMS) that manages the routing of data within the appliance. DMS is used in partnership with the SQL Server query (which creates the execution plan) to distribute the execution plan to each node. DMS then aggregates the results back to the control node of the appliance which can perform any final execution before returning the results to the caller. DMS is essentially the traffic cop within APS that enables queries to be executed and data moved within the appliance across 2-60 nodes.

Performance

With the introduction of Clustered Columnstore Indexes (CCI) in SQL Server, APS is able to take advantage of the performance gains to better process and store data within the appliance. In typical data warehouse workloads, we commonly see very wide table designs that eliminate the need to join tables at scale (to improve performance). A clustered columnstore index allows SQL Server to store data in columnar format rather than row format. This approach enables queries that don’t use all of the columns of a table to retrieve data from memory or disk more efficiently for processing – increasing performance.

By combining CCI tables with parallel processing and the fast processing power and storage systems of the appliance, customers are able to improve overall query performance and data compression quite significantly versus a traditional single-server data warehouse. Oftentimes, this means reductions in query execution times from many hours to a few minutes or even seconds. The net result is that companies are able to take advantage of structured or non-structured data in real or near-real time to empower better business decisions.
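To tie the two ideas together, here is a minimal sketch of a distributed fact table declared with a clustered columnstore index – again, the table and column names are illustrative only:

CREATE TABLE [dbo].[FactSales]
(
  [OrderId]   bigint NOT NULL,
  [ProductId] int    NOT NULL,
  [Amount]    money  NOT NULL
)
WITH
(
  DISTRIBUTION = HASH([OrderId]),
  CLUSTERED COLUMNSTORE INDEX
)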

Features from SharePoint 2010 Integration with SAP BusinessObjects BI 4.0

One of the core concepts of Business Connectivity Services (BCS) for SharePoint 2010 is the external content type. External content types are reusable metadata descriptions of connectivity information and behaviours (stereotyped operations) applied to external data. SharePoint offers developers several ways to create external content types and integrate them into the platform.

 

The SharePoint Designer 2010, for instance, allows you to create and manage external content types that are stored in supported external systems. Such an external system could be SQL Server, WCF Data Service, or a .NET Assembly Connector.

This article shows you how to create an external content type for SharePoint named Customer based on given SAP customer data. The definition of the content type will be provided as a .NET assembly, and the data are displayed in an external list in SharePoint.

The SAP customer data is retrieved from the function module SD_RFC_CUSTOMER_GET. In general, function modules in an SAP R/3 system are comparable to public, static C# class methods and can be accessed from outside SAP via RFC (Remote Function Call). Fortunately, we do not need to program RFC calls manually. We will use the very handy ERPConnect library from Theobald Software. The library includes a LINQ to SAP provider and designer that makes our lives easier.

.NET Assembly Connector for SAP

The first step in providing a custom connector for SAP is to create a SharePoint project with the SharePoint 2010 Developer Tools for Visual Studio 2010. Those tools are part of Visual Studio 2010. We will use the Business Data Connectivity Model project template to create our project:

After defining the Visual Studio solution name and clicking the OK button, the project wizard will ask what kind of SharePoint 2010 solution you want to create. The solution must be deployed as a farm solution, not as a sandboxed solution. Visual Studio then creates a new SharePoint project with a default BDC model (BdcModel1). You can also create an empty SharePoint project and add a Business Data Connectivity Model project item manually afterwards. This also adds a new node called BdcModel1 to the Visual Studio Solution Explorer. The node contains a couple of project files: the BDC model file (file extension .bdcm), and the Entity1.cs and EntityService.cs class files.

Next, we add a LINQ to SAP file to handle the SAP data access logic by selecting the LINQ to ERP item from the Add New Item dialog in Visual Studio. This will add a file called LINQtoERP1.erp to our project. The LINQ to SAP provider is internally called LINQ to ERP. Double click LINQtoERP1.erp to open the designer. Now, drag the Function object from the designer toolbox onto the design surface. This will open the SAP connection dialog since no connection data has been defined so far:

Enter the SAP connection data and your credentials. Click the Test Connection button to test the connectivity. If you can successfully connect to your SAP system, click the OK button to open the function module search dialog. Now search for SD_RFC_CUSTOMER_GET, select the found item, and click OK to open the RFC Function Module /BAPI dialog:

[Screenshot: RFC Function Module /BAPI dialog (SP2010SAPToBCS/BCS12.png)]

The dialog provides you the option to define the method name and parameters you want to use in your SAP context class. The context class is automatically generated by the LINQ to SAP designer including all SAP objects defined. Those objects are either C# (or VB.NET) class methods and/or additional object classes used by the methods.

For our project, we need to select the export parameters KUNNR and NAME1 by clicking the checkboxes in the Pass column. These two parameters become our input parameters in the generated context class method named SD_RFC_CUSTOMER_GET. We also need to return the customer list for the given input selection. Therefore, we select the table parameter CUSTOMER_T on the Tables tab and change the structure name to Customer. Then, click the OK button on the dialog, and the new objects get added to the designer surface.

IMPORTANT: The flag “Create Objects Outside Of Context Class” must be set to TRUE in the property editor of the LINQ designer; otherwise, LINQ to SAP generates the Customer class as a nested class of the SAP context class. This feature and flag are only available in LINQ to SAP for Visual Studio 2010.

The LINQ designer has also automatically generated a class called Customer within the LINQtoERP1.Designer.cs file. This class will become our BDC model entity or external content type. But first, we need to adjust and rename our BDC model that was created by default from Visual Studio. Currently, the BDC model looks like this:

Rename the BdcModel1 node and file to CustomerModel. Since we already have an entity class (Customer), delete the file Entity1.cs and rename the EntityService.cs file to CustomerService.cs. Next, open the CustomerModel file and rename the designer object Entity1 to Customer. Then, change the entity identifier name from Identifier1 to KUNNR. You can also use the BDC Explorer for renaming. The final adjustment result should look as follows:

[Screenshot: adjusted BDC model (SP2010SAPToBCS/BCS4.png)]

The last step we need to do in our Visual Studio project is to change the code in the CustomerService class. The BDC model methods ReadItem and ReadList must be implemented using the automatically generated LINQ to SAP code. First of all, take a look at the code:

[Screenshot: CustomerService class implementation (SP2010SAPToBCS/BCS6.png)]

As you can see, we basically have just a few lines of code. All of the SAP data access logic is encapsulated within the SAP context class (see the LINQtoERP1.Designer.cs file). The CustomerService class just implements a static constructor to set the ERPConnect license key and to initialize the static variable _sc with the SAP credentials as well as the two BDC model methods.

The ReadItem method, BCS stereotyped operation SpecificFinder, is called by BCS to fetch a specific item defined by the identifier KUNNR. In this case, we just call the SD_RFC_CUSTOMER_GET context method with the passed identifier (variable id) and return the first customer object we get from SAP.

The ReadList method, BCS stereotyped operation Finder, is called by BCS to return all entities. In this case, we just return all customer objects the SD_RFC_CUSTOMER_GET context method returns. The returned result is already of type IEnumerable<Customer>.
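In case the screenshot is hard to read, the following is a minimal sketch of the CustomerService class based on the description above. The name of the generated SAP context class, how it is constructed, and the exact signature of the generated SD_RFC_CUSTOMER_GET method (here: customer number and a wildcard name filter) are assumptions – check your LINQtoERP1.Designer.cs file for the real ones:

using System.Collections.Generic;
using System.Linq;

public class CustomerService
{
    // SAP context class generated by the LINQ to SAP designer
    // (see LINQtoERP1.Designer.cs); the type name is assumed here.
    private static readonly LINQtoERP1 _sc;

    static CustomerService()
    {
        // Set the ERPConnect license key and the SAP credentials here
        // (see the ERPConnect documentation for the exact calls).
        _sc = new LINQtoERP1();
    }

    // SpecificFinder: returns the customer identified by KUNNR.
    public static Customer ReadItem(string id)
    {
        return _sc.SD_RFC_CUSTOMER_GET(id, "*").FirstOrDefault();
    }

    // Finder: returns all customers; the result is already IEnumerable<Customer>.
    public static IEnumerable<Customer> ReadList()
    {
        return _sc.SD_RFC_CUSTOMER_GET("", "*");
    }
}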

The final step is to deploy the SharePoint solution. Right-click on the project node in Visual Studio Solution Explorer and select Deploy. This will install and deploy the SharePoint solution on the server. You can also debug your code by just setting a breakpoint in the CustomerService class and executing the project with F5.

That’s all we have to do!

Now, start the SharePoint Central Administration panel and follow the link “Manage Service Applications”, or navigate directly to the URL http://<SERVERNAME>/_admin/ServiceApplications.aspx. Click on Business Data Connectivity Service to show all the available external content types:

On this page, we find our deployed BDC model including the Customer entity. You can click on the name to retrieve more details about the entity. Right now, there is just one issue open. We need to set permissions!

Mark the checkbox for our entity and click on Set Object Permissions in the Ribbon menu bar. Now, define the permissions for the users you want to allow to access the entity, and click the OK button. In the screen shown above, the user administrator has all the permissions possible.

In the next and final step, we will create an external list based on our entity. To do this, we open SharePoint Designer 2010 and connect to the SharePoint website.

Click on External Content Types in the Site Objects panel to display all the content types (see above). Double-click on the Customer entity to open the details. SharePoint Designer reads all the information made available by BCS.

In order to create an external list for our entity, click on Create Lists & Form on the Ribbon menu bar (see screenshot below) and enter CustomerList as the name for the external list.

OK, now we are done!

Open the list, and you should get the following result:

The external list shows all the defined fields for our entity, even though our Customer class, automatically generated by LINQ to SAP, has more than those four fields. This means you can display just a subset of the information for your entity.

Another option is to just select those fields required within the LINQ to SAP designer. With the LINQ designer, you can access not just the SAP function modules. You can integrate other SAP objects, like tables, BW cubes, SAP Query, or IDOCs. A demo version of the ERPConnect library can be downloaded from the Theobald Software homepage.

If you click the associated link of one of the customer numbers in the column KUNNR (see screenshot above), SharePoint will open the details view:

[Screenshot: customer details view (SP2010SAPToBCS/BCS10.png)]

 

 

How To : Make use of Application Insights (with a great VS extension available freely)

You can now add Application Insights telemetry right from Visual Studio to new or existing projects in 2 clicks or less.

This release of Application Insights Tools for Visual Studio is in PREVIEW; see Known Issues below.

Get Started

To get started with a new project, simply create a Web, Windows Store 8.1, or Windows Phone 8.0 project. In the New Project dialog, make sure that Add Application Insights to Project is checked.

To get started with an existing project, right-click on a Web, Windows Store 8.1, or Windows Phone 8.0 project in Solution Explorer and choose Add Application Insights Telemetry to Project.

That’s it! Then run your Web application locally (or deploy your application), and after 10-15 minutes, telemetry data will automatically start appearing in the Application Insights Portal in the Usage tab.

Additional project types are supported with partial automation (see Known Issues below)

Q & A

Q: I don’t see the Add Application Insights Telemetry to Project command.

A: The type of app you’re creating is not supported yet. Instead, add your application at the Application Insights portal.

Q: Oops. I closed the Application Insights browser. How do I get it back?

A: In Solution Explorer, in the context menu of the project, choose Open Application Insights Portal…

Q: What other telemetry can I log from my app?

A: You can log events and metrics. Take a look at Improve your application from live usage data

Known Issues

This release of Application Insights Tools for Visual Studio is in PREVIEW. There are a number of known issues, including:

  • If you are upgrading from version 0.6.56.3 of the Application Insights Telemetry SDK for Services using the Application Insights Tools for Visual Studio, you will need to manually copy any custom settings from Monitoring.CollectionPlan.config to ApplicationInsights.config file.
  • For Windows Store 8.1 C++ projects instrumented with Application Insights, updates for the “Application Insights Telemetry SDK for Windows Store Apps” from nuget.org do not show up in the “Manage NuGet Packages” dialog. You can install an updated nuget package from the “Online” section of the “Manage NuGet Packages” dialog. Or, you can execute this command in the Package Manager Console: update-package Microsoft.ApplicationInsights.Telemetry.WindowsStore

DRY Architecture, Layered Architecture, Domain Driven Design and a Framework to build great Single-Page Web Applications – BoilerPlate Part 1

DRY – Don’t Repeat Yourself! – is one of the main principles of a good developer when building software. We try to apply it everywhere, from simple methods to classes and modules. What about developing a new web-based application? We, software developers, have similar needs when developing enterprise web applications.

Enterprise web applications need login pages, user/role management infrastructure, user/application setting management, localization and so on. Also, a high-quality, large-scale application implements best practices such as Layered Architecture, Domain Driven Design (DDD), and Dependency Injection (DI). We also use tools for Object-Relational Mapping (ORM), database migrations, logging… etc. When it comes to the User Interface (UI), it’s not much different.

Starting a new enterprise web application is hard work. Since all applications need some common tasks, we end up repeating ourselves. Many companies develop their own application frameworks or libraries for such common tasks so that they do not re-develop the same things. Others copy parts of existing applications to prepare a starting point for their new application. The first approach is pretty good if your company is big enough and has time to develop such a framework.

As a software architect, I also developed such a framework in my company. But there is something that bothers me: many companies repeat the same tasks. What if we could share more and repeat less? What if the DRY principle were implemented universally instead of per project or per company? It sounds utopian, but I think there may be a starting point for that!

What is ASP.NET Boilerplate?

http://www.aspnetboilerplate.com/

ASP.NET Boilerplate [1] is a starting point for new modern web applications using best practices and the most popular tools. It aims to be a solid model, a general-purpose application framework and a project template. What does it do?

  • Server side
    • Based on latest ASP.NET MVC and Web API.
    • Implements Domain Driven Design (Entities, Repositories, Domain Services, Application Services, DTOs, Unit of Work… and so on)
    • Implements Layered Architecture (Domain, Application, Presentation and Infrastructure Layers).
    • Provides an infrastructure to develop reusable and composable modules for large projects.
    • Uses the most popular frameworks/libraries that you’re (probably) already using.
    • Provides an infrastructure and makes it easy to use Dependency Injection (uses Castle Windsor as the DI container).
    • Provides a strict model and base classes to use Object-Relational Mapping easily (uses NHibernate, can work with many DBMSs).
    • Implements database migrations (uses FluentMigrator).
    • Includes a simple and flexible localization system.
    • Includes an EventBus for server-side global domain events.
    • Manages exception handling and validation.
    • Creates dynamic Web API layer for application services.
    • Provides base and helper classes to implement some common tasks.
    • Uses convention over configuration principle.
  • Client side
    • Provides two project templates: one for Single-Page Applications using Durandal.js, the other for a classic Multi-Page Application. Both templates use Twitter Bootstrap.
    • Most used libraries are included by default: Knockout.js, Require.js, jQuery and some useful plug-ins.
    • Creates dynamic JavaScript proxies to call application services (using the dynamic Web API layer) easily.
    • Includes unique APIs for some common tasks: showing alerts & notifications, blocking the UI, making AJAX requests.

Besides this common infrastructure, a “Core Module” is being developed. It will provide a role- and permission-based authorization system (implementing the ASP.NET Identity Framework), a settings system and so on.

What ASP.NET Boilerplate is not?

ASP.NET Boilerplate provides an application development model based on best practices. It has base classes, interfaces and tools that make it easy to build maintainable, large-scale applications. But…

  • It’s not a RAD (Rapid Application Development) tool that provides infrastructure for building applications without coding. Instead, it provides an infrastructure to write code following best practices.
  • It’s not a code generation tool. While it has several features that build dynamic code at run time, it does not generate code.
  • It’s not an all-in-one framework. Instead, it uses well-known tools/libraries for specific tasks (like NHibernate for O/RM, Log4Net for logging, Castle Windsor as the DI container).

Getting started

In this article, I’ll show how to develop a Single-Page, responsive web application using ASP.NET Boilerplate (I’ll call it ABP from now on). This sample application is named “Simple Task System” and it consists of two pages: one for the list of tasks, the other for adding new tasks. A Task can be related to a person and can be marked as completed. The application is localized in two languages. A screenshot of the Task List in the application is shown below:

A screenshot of 'Simple Task System'

Creating empty web application from template

ABP provides two templates to start a new project (even though you can manually create your project and get the ABP packages from NuGet, the template way is much easier). Go to www.aspnetboilerplate.com/Templates to create your application from one of two templates (one for SPA (Single-Page Application), one for MPA (classic, Multi-Page Application) projects):

Creating template from ABP web site

I named my project SimpleTaskSystem and created a SPA project. The site downloaded the project as a zip file. When I opened the zip file, I found a ready-to-use solution containing assemblies (projects) for each layer of Domain Driven Design:

Project files

The created project targets .NET Framework 4.5.1, so I advise opening it with Visual Studio 2013. The only prerequisite to run the project is to create a database. The SPA template assumes that you’re using SQL Server 2008 or later, but you can easily change it to another DBMS.

See the connection string in web.config file of the web project:

<add name="MainDb" connectionString="Server=localhost; Database=SimpleTaskSystemDb; Trusted_Connection=True;" />

You can change the connection string here. I don’t change the database name, so I create an empty database named SimpleTaskSystemDb in SQL Server:

Empty database

That’s it, your project is ready to run! Open it in VS2013 and press F5:

First run

The template consists of two pages: a Home page and an About page. It’s localized in English and Turkish. And it’s a Single-Page Application! Try to navigate between pages: you’ll see that only the contents change, the navigation menu stays fixed, and all scripts and styles are loaded only once. It’s also responsive – try changing the size of the browser.

Now, I’ll show how to turn this template into the Simple Task System application, layer by layer, in the coming Part 2.
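As a small preview of Part 2, here is a minimal sketch of what the central domain entity of the Simple Task System could look like. It assumes ABP’s Entity<TPrimaryKey> base class, the existence of a Person entity, and invents a TaskState enum purely for illustration; the actual code will be shown in Part 2:

using System;
using Abp.Domain.Entities;

public enum TaskState : byte
{
    Active = 1,
    Completed = 2
}

public class Task : Entity<long>
{
    // The person this task is assigned to (optional).
    public virtual Person AssignedPerson { get; set; }

    public virtual string Description { get; set; }

    public virtual DateTime CreationTime { get; set; }

    public virtual TaskState State { get; set; }

    public Task()
    {
        // New tasks start as active, stamped with the creation time.
        CreationTime = DateTime.Now;
        State = TaskState.Active;
    }
}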

Microsoft BI and the new PowerQuery for Excel – How we empower users

Introduction to Microsoft Power Query for Excel

Microsoft Power Query for Excel enhances self-service business intelligence (BI) for Excel with an intuitive and consistent experience for discovering, combining, and refining data across a wide variety of sources including relational, structured and semi-structured, OData, Web, Hadoop, Azure Marketplace, and more. Power Query also provides you with the ability to search for public data from sources such as Wikipedia.

With Power Query 2.10, you can share and manage queries as well as search data within your organization. Users in the enterprise can find and use these shared queries (if they are shared with them) to use the underlying data for their own data analysis and reporting. For more information about how to share queries, see Share Queries.

With Power Query, you can:

  • Find and connect data across a wide variety of sources.
  • Merge and shape data sources to match your data analysis requirements or prepare it for further analysis and modeling by tools such as Power Pivot and Power View.
  • Create custom views over data.
  • Use the JSON parser to create data visualizations over Big Data and Azure HDInsight.
  • Perform data cleansing operations.
  • Import data from multiple log files.
  • Perform Online Search for data from a large collection of public data sources including Wikipedia tables, a subset of Microsoft Azure Marketplace, and a subset of Data.gov.
  • Create a query from your Facebook likes that renders an Excel chart.
  • Pull data into Power Pivot from new data sources, such as XML, Facebook, and File Folders as refreshable connections.
  • With Power Query 2.10, you can share and manage queries as well as search data within your organization.

New updates for Power Query

The Power Query team has been busy adding a number of exciting new features to Power Query. You can download the update from this page.

New features for Power Query include the following; please read the rest of this blog post for specific details on each.

  • New Data Sources
    • Updated “Preview” functionality of the SAP BusinessObjects BI Universe connectivity
    • Access tables and named ranges in a workbook
  • Improvements to Query Load Settings
    • Customizable Defaults for Load Settings in the Options dialog
    • Automatic suggestion to load a query to the Data Model when it goes beyond the worksheet limit
    • Preserve data in the Data Model when you modify the Load to Worksheet setting of a query that is loaded to the Data Model
  • Improvements to Query Refresh behaviors in Excel
    • Preserve Custom Columns, Conditional Formatting and other customizations of worksheet tables
    • Preserve results from a previous query refresh when a new refresh attempt fails
  • New Transformations available in the Query Editor
    • Remove bottom rows
    • Fill up
    • New statistic operations in the Insert tab
  • Other Usability Improvements
    • Ability to reorder queries in the Workbook Queries pane
    • More discoverable way to cancel a preview refresh in the Query Editor
    • Keyboard support for navigation and rename in the Steps pane
    • Ability to view and copy errors in the Filter Column dropdown menu
    • Remove items directly from the Selection Well in the Navigator
    • Send a Frown for Service errors

Connect to SAP BusinessObjects BI Universe (Preview)

This connectivity has been a separate Preview feature for the last month or so. In this release, we are incorporating the SAP BusinessObjects BI Universe connector Preview capabilities as part of the main Power Query download for ease of access. With Microsoft Power Query for Excel, you can easily connect to an SAP BusinessObjects BI Universe to explore and analyze data across your enterprise.

Access tables and named ranges in an Excel workbook

With From Excel Workbook, you can now connect to tables and named ranges in your external workbook sheets. This simplifies the process of selecting useful data from an external workbook, which used to be limited to sheets, forcing users to “manually” scrape the data (using Query transform operations).

 

Customizable Defaults for Load Settings in the Options dialog

You can override the Power Query default Load Settings in the Options dialog. This will set the default Load Settings behavior for new queries in areas where Load Settings are not exposed directly to the user, such as in Online Search results and the Navigator task pane in single-table import mode. In addition, this will set the default state for Load Settings where these settings are available including the Query Editor, and Navigator in multi-table import mode.

           

Preserve Custom Columns, Conditional Formatting and other customizations of worksheet tables

With this Power Query Update, Custom Columns, conditional formatting in Excel, and other customizations of worksheet tables are preserved after you refresh a query. Power Query will preserve worksheet customizations such as Data Bars, Color Scales, Icon Sets or other value-based rules across refresh operations and after query edits.

Preserve results from a previous query refresh when a new refresh attempt fails

After a refresh fails, Power Query will now preserve the previous query results. This allows you to work with slightly older data in the worksheet or Data Model and lets you refresh the query results after fixing the cause of errors.

Automatic suggestion to load a query to the Data Model when it goes beyond the worksheet limits

When you are working with large volumes of data in your workbook, you could reach the limits of Excel’s worksheet size. When this occurs, Power Query will automatically recommend to load your query results to the Data Model. The Data Model can store very large data sets.

Preserve data in the Data Model when modifying the Load to Worksheet setting of a query that is loaded to the Data Model

With Power Query, data and annotations on the Data Model are preserved when modifying the Load to Worksheet setting of a query. Previously, Power Query would reset the query results in both the worksheet and the Data Model when modifying either one of the two load settings.      

Remove Bottom Rows

A very common scenario, especially when importing data from the Web and other semi-structured sources, is having to remove the last few rows of data because the contents do not belong to the data set. For instance, it’s common to remove links to previous/next pages or comments. Previously, this was possible only by composing custom formulas in Power Query. This transformation is now much easier thanks to a new library function called Table.RemoveLastN(), and a button for this transformation in the Home tab of the Query Editor ribbon.
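For reference, the same transformation expressed in the Power Query formula language looks roughly like this – the “Sales” table name and the row count are illustrative:

let
    Source  = Excel.CurrentWorkbook(){[Name="Sales"]}[Content],
    Trimmed = Table.RemoveLastN(Source, 2)
in
    Trimmed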

 

Fill Up

Power Query already supports the ability to fill down values in a column to neighboring empty cells. Starting with this update, you can now fill values up within a column as well. This new transformation is available as a new library function called Table.FillUp(), and a button on the Home tab of the Query Editor ribbon.
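In formula terms this is the counterpart of Table.FillDown; a single step (inside a query’s let expression) that fills the gaps in an assumed “Region” column would look like:

    Filled = Table.FillUp(Source, {"Region"})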

New Statistics operations in the Insert tab

The Insert tab provides various ways to insert new columns in queries, based on custom formulas or by deriving values from other columns. You can now apply Statistics operations based on values from different columns, row by row, in your table.

 

Ability to reorder queries in the Workbook Queries pane

With the latest Power Query update, you can move queries up or down in the Workbook Queries pane. You can right-click on a query and select Move Up or Move Down to reorder queries.

More discoverable way of cancelling refresh of a preview in the Query Editor

The Cancel option is now much more discoverable inside the Query Editor dialog. In addition to the Refresh dropdown menu in the ribbon, this option can now be found in the status bar at the bottom right corner of the Query Editor, next to the download status information.

  

Keyboard support for navigation and rename in the Steps pane

You can now use the Up/Down Arrow keys to navigate between steps in your query. Also, press the F2 key to rename the current step.

Ability to view and copy errors in the Filter Column dropdown menu

You can easily view and copy error details inside the Filter Column menu. This is very useful to troubleshoot errors while retrieving filter values.

Remove items directly from the Selection Well in the Navigator

You can remove items directly from the Selection Well instead of having to find the original item in the Navigator tree to deselect it.

 

Send a Frown for Service errors

We try as hard as possible to improve the quality of Power Query and all of its features. Even then, there are cases in which errors can happen. You can now send a frown directly from experiences where a service error happened, for instance, an error retrieving a Search result preview or downloading a query from the Data Catalog. This will give us enough information about the service request that failed and the client state to troubleshoot the issue.

That’s all for this update! We hope that you enjoy these new Power Query features. Please don’t hesitate to contact us via the Power Query Forum or send us a smile/frown with any questions or feedback that you may have.

You can also follow these links to access more resources about Power Query and Power BI:

Introduction to Cloud Automation


Provision Azure Environment Resources


This is where we can see proof of evolution.

As you saw in the bulleted list of chronological blog posts (above), my first venture into Automating the Public Cloud leveraged Orchestrator + the Integration Pack for Windows Azure. My second release leveraged PowerShell and PowerShell Workflow + the Windows Azure Cmdlets.

Let’s get down to the goods. And actually, for the first time in a long time, my published example came out a couple days before the blog post / teaser!


Script Center Contribution and Download

The download is the example: New-AzureEnvironmentResources.ps1

Here is a brief description:

This runbook creates a number of Azure Environment Resources (in sequence): Azure Affinity Group, Azure Cloud Service, Azure Storage Account, Azure Storage Container, Azure VM Image, and Azure VM. It also requires the Upload of a VHD to a specified storage container mid-process.

A detailed Description, full set of Requirements, and the actual Runbook Contents are available within the Script Center Contribution (not to mention the actual download).

Download the Provision Azure Environment Resources Example Runbook from Script Center here:

[Download button: New-AzureEnvironmentResources.ps1 on Script Center]


A bit more about the Requirements…

Runbook Parameters

  • Azure Connection Name

    REQUIRED. Name of the Azure connection setting that was created in the Automation service.
        This connection setting contains the subscription id and the name of the certificate setting that
        holds the management certificate. It will be passed to the required and nested Connect-Azure runbook.

  • Project Name

    REQUIRED. Name of the Project for the deployment of Azure Environment Resources. This name is leveraged
        throughout the runbook to derive the names of the Azure Environment Resources created.

  • VM Name

    REQUIRED. Name of the Virtual Machine to be created as part of the Project.

  • VM Instance Size

    REQUIRED. Specifies the size of the instance. Supported values, with their (cores, memory), are:
        “ExtraSmall” (shared core, 768 MB),
        “Small”      (1 core, 1.75 GB),
        “Medium”     (2 cores, 3.5 GB),
        “Large”      (4 cores, 7 GB),
        “ExtraLarge” (8 cores, 14 GB),
        “A5”         (2 cores, 14 GB),
        “A6”         (4 cores, 28 GB),
        “A7”         (8 cores, 56 GB)

  • Storage Account Name

    OPTIONAL. This parameter should only be set if the runbook is being re-executed after an existing
    and unique Storage Account Name has already been created, or if a new and unique Storage Account Name
    is desired. If left blank, a new and unique Storage Account Name will be created for the Project. The
    format of the derived Storage Account Names is:
        $ProjectName (lowercase) + [Random lowercase letters and numbers] up to a total length of 23 (see the sketch below)
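Here is a minimal sketch of one possible way to derive such a name, assuming the project name itself is shorter than 23 characters (the project name value is illustrative):

# Derive a unique Storage Account Name: lowercase project name + random suffix, 23 chars total
$ProjectName = "ContosoLabs"
$prefix      = $ProjectName.ToLower()
$pool        = [char[]]"abcdefghijklmnopqrstuvwxyz0123456789"
$suffix      = -join (1..(23 - $prefix.Length) | ForEach-Object { $pool | Get-Random })
$StorageAccountName = $prefix + $suffix
$StorageAccountName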


Other Requirements

  • An existing connection to an Azure subscription

  • The Upload of a VHD to a specified storage container mid-process. At this point in the process, the runbook will intentionally suspend and notify the user; after the upload, the user simply resumes the runbook and the rest of the creation process continues.

  • Six (6) Automation Assets (to be configured in the Assets tab). These are suggested, but not necessarily required. Replacing the “Get-AutomationVariable” calls within this runbook with static or parameter variables is an alternative method. For this example though, the following dependencies exist:
        VARIABLES SET WITH AUTOMATION ASSETS:
             $AGLocation = Get-AutomationVariable -Name ‘AGLocation’
             $GenericStorageContainerName = Get-AutomationVariable -Name ‘GenericStorageContainer’
             $SourceDiskFileExt = Get-AutomationVariable -Name ‘SourceDiskFileExt’
             $VMImageOS = Get-AutomationVariable -Name ‘VMImageOS’
             $AdminUsername = Get-AutomationVariable -Name ‘AdminUsername’
             $Password = Get-AutomationVariable -Name ‘Password’

Note     The entire runbook is heavily checkpointed and can be run multiple times without resource recreation.


Upload of a VHD

Waaaaait a minute! That seems like a pretty big step, how am I going to accomplish that?

I am so glad you asked.

To make this easier (for all of us), I created a separate PowerShell Workflow Script to take care of this step. In fact, it is the same one I used during the creation and testing of New-AzureEnvironmentResources.ps1.

Here it is (the contents of a file I called Upload-LocalVHDtoAzure.ps1):

param
(
    [Parameter(Mandatory=$true)]
    [string]$AzureSubscriptionName,
    [Parameter(Mandatory=$true)]
    [string]$ProjectName,
    [Parameter(Mandatory=$true)]
    [string]$StorageAccountName
)

workflow Upload-LocalVHDtoAzure { 

    param 
    ( 
        [string]$StorageContainerName, 
        [string]$VHDName, 
        [string]$SourceVHDPath, 
        [string]$DestinationBlobURI, 
        [bool]$OverWrite 
    ) 
    
    $AzureSubscriptionForWorkflow = Get-AzureSubscription 

    # Check whether the destination blob already exists in the storage container
    $AzureBlob = Get-AzureStorageBlob -Container $StorageContainerName -Blob $VHDName -ErrorAction SilentlyContinue

    # Upload only if the blob is missing or an overwrite was explicitly requested
    if(!$AzureBlob -or $OverWrite) { 

        $AzureBlob = Add-AzureVhd -LocalFilePath $SourceVHDPath -Destination $DestinationBlobURI -OverWrite:$OverWrite 
    } 

    Return $AzureBlob 

}

$GenericStorageContainerName = "vhds"

# Local source VHD and destination blob naming
$SourceDiskName = "toWindowsAzure"
$SourceDiskFileExt = "vhd"
$SourceDiskPath = "D:\Drop\Azure\toAzure"
$SourceVHDName = "{0}.{1}" -f $SourceDiskName,$SourceDiskFileExt
$SourceVHDPath = "{0}\{1}" -f $SourceDiskPath,$SourceVHDName

$DestinationVHDName = "{0}.{1}" -f $ProjectName,$SourceDiskFileExt
$DestinationVHDPath = "https://{0}.blob.core.windows.net/{1}" -f $StorageAccountName,$GenericStorageContainerName
$DestinationBlobURI = "{0}/{1}" -f $DestinationVHDPath,$DestinationVHDName
$OverWrite = $false

# Select the subscription and point it at the target storage account before uploading
Select-AzureSubscription -SubscriptionName $AzureSubscriptionName
Set-AzureSubscription -SubscriptionName $AzureSubscriptionName -CurrentStorageAccount $StorageAccountName

# Run the upload workflow as a job and stream its output back to the console
$AzureBlobUploadJob = Upload-LocalVHDtoAzure -StorageContainerName $GenericStorageContainerName -VHDName $DestinationVHDName `
    -SourceVHDPath $SourceVHDPath -DestinationBlobURI $DestinationBlobURI -OverWrite $OverWrite -AsJob
Receive-Job -Job $AzureBlobUploadJob -AutoRemoveJob -Wait -WriteEvents -WriteJobInResults

Note     This is just one method of uploading a VHD to Azure for a specified Storage Account. I have parameterized the entire script so it could be run from the command line as a PS1 file. Obviously you can do with this as you please.

 


Testing and Proof of Execution

I figured you might want to see the results of my testing during my development of the Provision Azure Environment Resources example…so here are some screen captures from the Azure Automation interface:

Dashboard

image

Runbooks

image

Assets

image

Azure All Items View

You know, to prove that I created something with these scripts…

image

How To : Use Powershell and TFS together

The absolute basics

Where does a newbie to Windows PowerShell start—particularly in regards to TFS? There are a few obvious places. I’m hardly the first person to trip across the natural peanut-butter-and-chocolate nature of TFS and Windows PowerShell together. In fact, the TFS Power Tools contain a set of cmdlets for version control and a few other functions.


There is one issue when downloading them, however. The “typical” installation of the Power Tools leaves out the Windows PowerShell cmdlets! So make sure you choose “custom” and select those Windows PowerShell cmdlets manually.

After they’re installed, you also might need to manually add them to Windows PowerShell before you can start using them. If you try Get-Help for one of the cmdlets and see nothing but an error message, you know you’ll need to do so (and not simply use Update-Help, as the error message implies).

Fortunately, that’s simple. Using the following command will fix the issue:

add-pssnapin Microsoft.TeamFoundation.PowerShell

See the before and after:

Image of command output

A better way to review what’s in the Power Tools and to get the full list of cmdlets installed by the TFS Power Tools is to use:

Get-Command -module Microsoft.TeamFoundation.PowerShell

This method doesn’t depend on the developers including “TFS” in all the cmdlet names. But as it happens, they did follow the Cmdlet Development Guidelines, so both commands return the same results.

Something else I realized when working with the TFS PowerShell cmdlets: for administrative tasks, like those I’m most interested in, you’ll want to launch Windows PowerShell as an administrator. And as long-time Windows PowerShell users already know, if you want to enable the execution of remote scripts, make sure that you set your script execution policy to RemoteSigned. For more information, see How Can I Write and Run a Windows PowerShell Script?.

Of all the cmdlets provided with the TFS Power Tools, one of my personal favorites is Get-TfsServer, which lets me get the instance ID of my server, among other useful things.  My least favorite thing about the cmdlets in the Power Tools? There is little to no useful information for TFS cmdlets in Get-Help. Awkward! (There’s a community bug about this if you want to add your comments or vote on it.)
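For example, a minimal sketch of that usage – the collection URL is illustrative, and depending on the Power Tools version the returned object may expose slightly different properties:

# Connect to a team project collection and read its instance ID
$tfs = Get-TfsServer -Name "http://tfsserver:8080/tfs/DefaultCollection"
$tfs.InstanceId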

A different favorite: Get-TfsItemHistory. The following example not only demonstrates the power of the cmdlets, but also some of their limitations:

Get-TfsItemHistory -HistoryItem . -Recurse -Stopafter 5 |

    ForEach-Object { Get-TfsChangeset -ChangesetNumber $_.ChangesetId } |

    Select-Object -ExpandProperty Changes |

    Select-Object -ExpandProperty Item

This snippet gets the last five changesets in or under the current directory, and then it gets the list of files that were changed in those changesets. Sadly, this example also highlights one of the shortcomings of the Power Tools cmdlets: Get-TfsItemHistory cannot be directly piped to Get-TfsChangeset because the former outputs objects with ChangesetId properties, and the latter expects a ChangesetNumber parameter.

One of the nice things is that raw TFS API objects are being returned, and the snap-ins define custom Windows PowerShell formatting rules for these objects. In the previous example, the objects are instances of VersionControl.Client.Item, but the formatting approximates that seen with Get-ChildItem.

So the cmdlets included in the TFS Power Tools are a good place to start if you’re just getting started with TFS and Windows PowerShell, but they’re somewhat limited in scope. Most of them are simply piping results of the tf.exe commands that are already available in TFS. You’ll probably find yourself wanting to do more than just work with these.

 

New version of Prism released – Get it now free!!

Prism helps developers who want to create a Windows Store business app using C#, XAML, the Windows Runtime, and development patterns such as Model-View-ViewModel (MVVM) and event aggregation.

Prism includes two libraries, a reference implementation called AdventureWorks Shopper, Quickstarts and associated documentation.


 

This is an update from the version released in May for Windows 8.

The guidance demonstrates:

  • How to implement pages, touch, navigation, settings, suspend/resume, search, tiles, and tile notifications.
  • How to implement the Model-View-ViewModel (MVVM) pattern.
  • How to validate user input for correctness.
  • How to manage application data.
  • How to test your app and tune its performance.

What’s new in the Windows 8.1 version

Documentation

  • Created a developer task topic to help developers learn how to complete key Windows Store dev tasks for validation, creating pages, navigation, touch, tiles, search, performance, testing, deployment, extended splash screen, incremental loading, Model-View-ViewModel (MVVM),  loosely coupled communication, and using the Prism libraries.
  • Added the AdventureWorks Shopper logical architecture to help you understand what code you need to write for a Windows Store app vs the code the Prism library provides.
  • Updated PDF for Windows 8.1.
  • Provided release notes on CodePlex including a change log and late breaking news.

AdventureWorks Shopper Reference Implementation

  • Created AutoRotatingGridView grid control to create a fluid page layout that responds to user requests to change the page’s size and orientation
  • Demonstrated using the IncrementalUpdateBehavior Blend Behavior for large data to improve user perceived performance
  • Cleaned up styles
  • Used Flyout/MenuFlyout instead of popup
  • Changed FlyoutViews to use SettingsFlyout
  • Used out of the box control for Watermark
  • Used Blend Behaviors
  • Used SearchBox & new search APIs
  • Updated top/bottom app bars to use CommandBars and Action Buttons
  • Used Windows.Web.Http.HttpClient instead of System.Net.Http.HttpClient

Prism for Windows Runtime

  • Updated VisualStateAwarePage to detect page size and orientation
  • Removed FlyoutService and FlyoutView
  • Removed SearchPaneService and SearchQueryArguments. Used new SearchBox control instead.
  • Added support for an extended splashscreen

Quickstarts

  • Created Incremental Loading Quickstart to demonstrate how to improve end user perceived performance for a large grid by handling the ContainerContentChanging event, or by using the IncrementalUpdateBehavior Blend Behavior vs traditional data binding.
  • Created Extended Splash Screen Quickstart to demonstrate how to use the Prism library to create an extended splash screen.

Where to get it?

  • Documentation on the Windows Development Center.
  • PDF version of the documentation will be available later this month.
  • Source code for AdventureWorks Shopper reference implementation and the Prism libraries.
  • Source code for the associated quickstarts.
  • Via NuGet – use NuGet package Manager in Visual Studio and search online for Prism.StoreApps and Prism.PubSubEvents

If you need the source code for the AdventureWorks Shopper reference implementation and the Prism library that runs on Windows 8 we moved it to our CodePlex site.

Where to start?

  • Review the AdventureWorks reference implementation. After you download the code, see Getting started with Prism library for instructions on how to compile and run the reference implementation, as well as understand the Visual Studio solution structure.
  • Review Quickstarts. The Quickstart samples focus on specific tasks such as validation, event aggregation, bootstrapping an MVVM app, and adding an extended splash screen to your app.
  • Create an app. If you want to create your own app using Prism see Using Prism for the Windows Runtime.
  • Explore developer tasks. Learn how the Prism team implemented many of the tasks required to create a Windows Store app.
  • Review the documentation. The associated documentation outlines the key decisions and lessons learned to create a Windows Store business app.
  • Review the release notes. The release notes provide late breaking updates and a more detailed log of the changes in this release.

 

What code do I write and what does Prism library provide?

We included the AdventureWorks logical architecture in the documentation to help you understand what code is provided by the Prism library and what code you will need to create for your Windows Store business app.

 

Logical architecture of a Windows Store business app that uses Prism

Community

Prism for the Windows Runtime has a community site where you can post questions, provide feedback, connect with other users to share ideas, and find additional content such as extensions and training material. Community members can also help Microsoft plan and test future releases of Prism for the Windows Runtime. For more info see patterns & practices: Prism for the Windows Runtime.

So go download the code and get started creating your Windows Store app with Prism. We want to hear about your successes and challenges on our CodePlex site. What else do we need to add to the library and associated documentation? Many of the additions to this release came from user feedback from the CodePlex site.

How to : Use JQuery and JSON in MVC 5 for Autocomplete


Imagine that you want to create an edit view for a Company entity which has two properties: Name (type string) and Boss (type Person). You want both properties to be editable. For Company.Name a simple text input is enough, but for Company.Boss you want to use the jQuery UI Autocomplete widget. This widget has to meet the following requirements:

  • suggestions should appear when user starts typing person’s last name or presses down arrow key;
  • identifier of person selected as boss should be sent to the server;
  • items in the list should provide additional information (first name and date of birth);
  • user has to select one of the suggested items (arbitrary text is not acceptable);
  • the boss property should be validated (with validation message and style set for appropriate input field).

The above requirements appear quite often in web applications. I’ve seen many over-complicated ways in which they were implemented. I want to show you how to do it quickly and cleanly… The assumption is that you have basic knowledge of jQuery UI Autocomplete and ASP.NET MVC. In this post I will show only the code related to the autocomplete functionality, but you can download the full demo project here. It’s an ASP.NET MVC 5/Entity Framework 6/jQuery UI 1.10.4 project created in Visual Studio 2013 Express for Web and tested in Chrome 34, FF 28 and IE 11 (in 11 and 8 mode).

So here are our domain classes:

public class Company
{
    public int Id { get; set; } 

    [Required]
    public string Name { get; set; }

    [Required]
    public Person Boss { get; set; }
}
public class Person
{
    public int Id { get; set; }

    [Required]
    [DisplayName("First Name")]
    public string FirstName { get; set; }
    
    [Required]
    [DisplayName("Last Name")]
    public string LastName { get; set; }

    [Required]
    [DisplayName("Date of Birth")]
    public DateTime DateOfBirth { get; set; }

    public override string ToString()
    {
        return string.Format("{0}, {1} ({2})", LastName, FirstName, DateOfBirth.ToShortDateString());
    }
}

Nothing fancy there: a few properties with standard attributes for validation and good-looking display. The Person class has a ToString override – the text from this method will be used in the autocomplete suggestions list.

Edit view for Company is based on this view model:

public class CompanyEditViewModel
{    
    public int Id { get; set; }

    [Required]
    public string Name { get; set; }

    [Required]
    public int BossId { get; set; }

    [Required(ErrorMessage="Please select the boss")]
    [DisplayName("Boss")]
    public string BossLastName { get; set; }
}

Notice that there are two properties for Boss related data.

Below is the part of edit view that is responsible for displaying input field with jQuery UI Autocomplete widget for Boss property:

<div class="form-group">
    @Html.LabelFor(model => model.BossLastName, new { @class = "control-label col-md-2" })
    <div class="col-md-10">
        @Html.TextBoxFor(Model => Model.BossLastName, new { @class = "autocomplete-with-hidden", data_url = Url.Action("GetListForAutocomplete", "Person") })
        @Html.HiddenFor(Model => Model.BossId)
        @Html.ValidationMessageFor(model => model.BossLastName)
    </div>
</div>

The form-group and col-md-10 classes belong to the Bootstrap framework, which is used in the MVC 5 web project template – don’t worry about them. The BossLastName property is used for the label, the visible input field and the validation message. There’s a hidden input field which stores the identifier of the selected boss (Person entity). The @Html.TextBoxFor helper, which is responsible for rendering the visible input field, defines a class and a data attribute. The autocomplete-with-hidden class marks inputs that should get the widget. The data-url attribute value holds the address of the action method that provides data for the autocomplete. Using Url.Action is better than hardcoding such an address in a JavaScript file because the helper takes routing rules into account, and these might change.

This is HTML markup that is produced by above Razor code:

<div class="form-group">
    <label class="control-label col-md-2" for="BossLastName">Boss</label>
    <div class="col-md-10">
        <span class="ui-helper-hidden-accessible" role="status" aria-live="polite"></span>
        <input name="BossLastName" class="autocomplete-with-hidden ui-autocomplete-input" id="BossLastName" type="text" value="Kowalski" 
         data-val-required="Please select the boss" data-val="true" data-url="/Person/GetListForAutocomplete" autocomplete="off">
        <input name="BossId" id="BossId" type="hidden" value="4" data-val-required="The BossId field is required." data-val-number="The field BossId must be a number." data-val="true">
        <span class="field-validation-valid" data-valmsg-replace="true" data-valmsg-for="BossLastName"></span>
    </div>
</div>

This is JavaScript code responsible for installing jQuery UI Autocomplete widget:

$(function () {
    $('.autocomplete-with-hidden').autocomplete({
        minLength: 0,
        source: function (request, response) {
            var url = $(this.element).data('url');
   
            $.getJSON(url, { term: request.term }, function (data) {
                response(data);
            })
        },
        select: function (event, ui) {
            $(event.target).next('input[type=hidden]').val(ui.item.id);
        },
        change: function(event, ui) {
            if (!ui.item) {
                $(event.target).val('').next('input[type=hidden]').val('');
            }
        }
    });
})

The widget’s source option is set to a function. This function pulls data from the server via a $.getJSON call. The URL is extracted from the data-url attribute. If you want to control caching or provide error handling, you may want to switch to the $.ajax function. The purpose of the change event handler is to ensure that values for BossId and BossLastName are set only if the user selected an item from the suggestions list.

This is the action method that provides data for autocomplete:

public JsonResult GetListForAutocomplete(string term)
{               
    Person[] matching = string.IsNullOrWhiteSpace(term) ?
        db.Persons.ToArray() :
        db.Persons.Where(p => p.LastName.ToUpper().StartsWith(term.ToUpper())).ToArray();

    return Json(matching.Select(m => new { id = m.Id, value = m.LastName, label = m.ToString() }), JsonRequestBehavior.AllowGet);
}

value and label are standard properties expected by the widget. label determines the text shown in the suggestion list; value designates what is presented in the input field on which the widget is installed. id is a custom property indicating which Person entity was selected. It is used in the select event handler (notice the reference to ui.item.id): the selected ui.item.id is set as the value of the hidden input field – this way it will be sent in the HTTP request when the user decides to save the Company data.

Finally this is the controller method responsible for saving Company data:

public ActionResult Edit([Bind(Include="Id,Name,BossId,BossLastName")] CompanyEditViewModel companyEdit)
{
    if (ModelState.IsValid)
    {
        Company company = db.Companies.Find(companyEdit.Id);
        if (company == null)
        {
            return HttpNotFound();
        }

        company.Name = companyEdit.Name;

        Person boss = db.Persons.Find(companyEdit.BossId);
        company.Boss = boss;
        
        db.Entry(company).State = EntityState.Modified;
        db.SaveChanges();
        return RedirectToAction("Index");
    }
    return View(companyEdit);
}

Pretty standard stuff. If you’ve ever used Entity Framework, the above method should be clear to you. If it’s not, don’t worry. For the purpose of this post, the important thing to notice is that we can use companyEdit.BossId because it was properly filled by the model binder thanks to our hidden input field.

That’s it, all requirements are met! Easy, huh?

You may be wondering why I want to use a jQuery UI widget in a Visual Studio 2013 project which by default uses Twitter Bootstrap. It’s true that Bootstrap has some widgets and plugins, but after a bit of experimentation I’ve found that for some more complicated scenarios jQuery UI does a better job. The set of controls is simply more mature…

How To : Use SharePoint Dashboards & MSRS Reports for your Agile Development Life Cycle

The Problem We Solve

Agile BI is not a term many would associate with MSRS Reports and SharePoint Dashboards. While many organizations first turn to the Microsoft BI stack because of its familiarity, stitching together Microsoft’s patchwork of SharePoint, SQL Server, SSAS, MSRS, and Office creates administrative headaches and requires considerable time spent integrating and writing custom code.

This Showcase outlines the ease of accomplishing three of the most fundamental BI tasks with LogiXML technology as compared to MSRS and SharePoint:

  • Building a dashboard with multiple data sources
  • Creating interactive reports that reduce the load on IT by providing users self-service
  • Integrating disparate data sources

Read below to learn how an agile BI methodology can make your life much easier when it comes to dashboards and reports.

Building a Dashboard with LogiXML vs. MSRS + SharePoint

Microsoft’s only solution for dashboards is to either write your own code from scratch, manipulate SharePoint to serve a purpose for which it wasn’t initially designed, or look to third-party apps. Below are some of the limitations of Microsoft’s approach to dashboards:

  • Limited Pre-Built Elements: Microsoft components come with only limited libraries of pre-built elements. In addition to actual development work, you will need to come up with an idea of how everything will work together. This necessitates becoming familiar with best practices in dashboards and reporting.
  • Sophisticated Development Expertise Required: While Microsoft components provide basic capabilities, anything more sophisticated is development resource-intensive and requires you to take on design, execution, and delivery. Any complex report visualizations and logic, such as interactive filters, must be written in code by the developer.
  • Limited Charts and Visualizations: Microsoft has a smaller sub-set of charts and visualization tools. If you want access to the complete library of .NET-capable charts, you’ll still need to OEM another charting solution at additional expense.
  • Lack of Integrated Workflow: Microsoft does not include workflow features sets out of the box in their BI offering.

LogiXML technology is centered on Logi Studio: an elemental, agile BI design environment which lets you simply choose from hundreds of powerful and configurable pre-built elements. Logi’s pre-built elements equip developers with tools to speed development, as well as the processes and logic required to build and manage BI projects. Below is a screen shot of the Logi Studio while building new dashboards.

[Screenshot: Logi Studio while building a new dashboard (agile-bi.jpg)]

Start a free LogiXML trial now.

Logi developers can easily create static or user-customizable dashboards using the Dashboard element. A dashboard is a collection of panels containing Logi reports, which in turn contain table, charts, images, etc. At runtime, the user can customize the dashboard by rearranging these panels on the browser page, by showing or hiding them, and even by changing their contents using adjustable reporting criteria. The data displayed within the panels can be configured, as in any Logi report, to link to other reports, providing drill-down functionality.

 

logi2.jpg

The dashboard displayed above has tabs and user customization enabled. The Dashboard element provides customization features, such as drag-and-drop panel positioning, support for built-in parameters the user can access to adjust the panel’s data contents, and a panel selection list that determines which panels will be displayed. AJAX techniques are utilized for web server interactions, allowing selective updates of portions of the dashboard. Dashboard customizations can be saved on an individual-user basis to create a highly personalized view of the data.

The Dashboard Wizard

The ‘Create a Dashboard’ wizard assists developers in creating dashboards by populating the report definition with the necessary dashboard-related elements. You can easily point to any data source by selecting from a variety of DataLayer types, including SQL, StoredProcedures, Web Services, Files, and more. A simple to use drag and drop SQL Query builder is also integrated, to offer a guided approach to constructing queries when connecting to your database.

logi3.jpg

Using the Dashboard Element

The Dashboard element is used to create the top level structure for all of your interactive panels within the final output. Under your dashboards, you can optionally add any number of Dashboard Panels, Panel Parameters for dynamic filtering, and even automatic refresh features with AJAX-based refresh timers.

logi4.jpg

Changing Appearance Using Themes and Style Sheets

The appearance of a dashboard can be changed easily by assigning a theme to your report. In addition, or as an alternative, you can change dashboard appearance using style. The Dashboard element has its own Cascading Style Sheet (CSS) file containing predefined classes that affect the display colors, font sizes, button labels, and spacing seen when the dashboard is displayed. You can override these classes by adding classes with the same name to your own style sheet file.


Ad Hoc Reporting Creation with LogiXML: Analysis Grid

The Analysis Grid is a managed reporting feature giving end users virtual ad hoc capability. It is an easy to use tool that allows business users to analyze and manipulate data and outputs in multiple and powerful ways.

logi5.jpg


Create an Analysis Grid by using the “Create Analysis Grid” wizard, or by simply adding the AnalysisGrid element into your definition file. Like the dashboard, data for the Analysis Grid can be accessed from any of the data options, including SQL databases, web sources, or files. You also have the option to launch the interactive query builder wizard for easy, drag-drop, SQL query creation.

The Analysis Grid is composed of three main parts: the data grid itself, i.e. a table of data to be analyzed; various action buttons at the top, allowing the user to perform actions such as create new columns with custom calculations, sort columns, add charts, and perform aggregations; and the ability to export the grid to Excel, CSV, or PDF format.

The Analysis Grid makes it easy to perform what-if analyses through features like filtering. The Grid also makes data-presentation impactful through visualization features including data driven color formatting, inline gauges, and custom formula creation.

Ad Hoc Reporting Creation with Microsoft

While simple ad hoc capabilities, such as enabling the selection of parameters like date ranges, can be accomplished quickly and easily with Microsoft, more sophisticated ad hoc analysis is challenging due to the following shortcomings.

Platform Integration Problems

Microsoft’s BI strategy is not unified and is strongly tied to SQL Server. To obtain analysis capabilities, you must build cubes in Analysis Services, a separate product with its own security architecture. Next, you will need to build reports that talk to SQL Server, again using separate products.

Dashboards require a SharePoint portal which is, again, a separate product with separate requirements and licensing. If you don’t use this, you must completely code your dashboards from scratch. Unfortunately, Microsoft Reporting Services doesn’t play well with Analysis Services or SharePoint since these were built on different technologies.

SharePoint itself offers an out of the box portal and dashboard solution but unfortunately with a number of significant shortcomings. SharePoint was designed as a document management and collaboration tool as opposed to an interactive BI dashboard solution. Therefore, in order to have a dashboard solution optimized for BI, reporting, and interactivity you are faced with two options:

  • Build it yourself using .NET and a combination of third party components
  • Buy a separate third party product

Many IT professionals find these to be rather unappealing options, since they require evaluating a new product or components, and/or a lot of work to build and make sure it integrates with the rest of the Microsoft stack.

Additionally, while SQL Server and other products support different types of security architectures, Analysis Services only has support for using integrated Windows NT security models to access cubes and therefore creates integration challenges.

Moreover, for client/ad hoc tools, you need Report Writer, a desktop product, or Excel – another desktop application. In addition to requiring separate licenses, these products don’t even talk to one another in the same ways, as they were built by different companies and subsequently acquired by Microsoft.

Each product requires a separate and often disconnected development environment with different design and administration features. Therefore to manage Microsoft BI, you must have all of these development environments available and know how to use them all.

Integration of Various Data Sources: LogiXML vs. Microsoft

LogiXML is data neutral, allowing you to easily connect to all of your organization’s data spread across multiple applications and databases. You can connect with any data source or data model and even combine data sources such as current data accessed through a web service with past data in spreadsheets.

Integration of Various Data Sources with Microsoft

Working with Microsoft components for BI means you will be faced with the challenge of limited support for non-Microsoft based databases and outside data sources. The Microsoft BI stack is centered on SQL Server databases and therefore the data source is optimized to work with SQL Server. Unfortunately, if you need outside content it can be very difficult to integrate.

Finally, Microsoft BI tools are designed with the total Microsoft experience in mind and are therefore optimized for Internet Explorer. While other browsers and devices might be usable, the experience isn’t optimized and may lack features or render differently.

 

Free & Licensed Windows 8, Azure, Office 365, SharePoint On-Premise and Online Tools, Web Parts, Apps available.
For more detail visit https://sharepointsamurai.wordpress.com or contact me at tomas.floyd@outlook.com

SharePoint 2013 Basic Search Center Branding Problem

So, I had thought we were in the clear from the old 2010 Search Center branding disaster.

For the most part custom branding applies pretty easily to search sites in SharePoint 2013 thanks to the fact that it just uses the default Seattle.master for search branding.


 

However there is a gotcha, specifically related to the Basic Search Center template. I think the problem is only this one template, but maybe there are other areas affected. I tested the Enterprise Search Center and the default search and neither had issues.

Basically, what happens is this: when you create your custom branding, chances are you will apply a customized master page (one edited over a mapped drive or in SharePoint Designer). The Basic Search Center uses a small code block in its pages to hide the ribbon when the Web Part management panel is up (I have no idea why this was so important, but I digress).

Okay, “so what” you might think… well code blocks are not permitted to run by default in customized master pages. They will work just fine in a custom master page deployed with a farm solution (according to comments below a sandbox solution will not fix the problem) but they will fail miserably in a customized master page like this:

4-27-2013 4-05-07 PM

So, how do you fix this problem? Well, the easiest solution is to package your custom master page into a farm solution and apply it to the site. The error should go away immediately. That doesn’t really help if you are still iterating in development or if you are using SharePoint Online (farm solutions are not allowed there).

Another option is to edit the aspx files on the Basic Search Site. From a mapped drive or from SPD you can edit default.aspx and results.aspx removing this StyleBlock section:



  <SharePoint:StyleBlock runat="server">
    <%
    // Hide the ribbon row while the Web Part management panel is displayed.
    WebPartManager webPartManager = SPWebPartManager.GetCurrentWebPartManager(this.Page);
    if (webPartManager != null && webPartManager.DisplayMode == SPWebPartManager.BrowseDisplayMode)
    {
    %>
    #s4-ribbonrow
    {
      display: none;
    }
    <%
    }
    %>
  </SharePoint:StyleBlock>

Note: one gotcha you may run into with this method is sometimes the search web parts will error on the page when you refresh it. You can fix this by removing the old web parts and re-adding them. I’m not sure why you have to do this sometimes, but it’s a relatively painless fix.

For some of you, editing these search files won’t be an acceptable solution. I’m hopeful someone will create a nice sandbox solution to fix the problem like we had in 2010…

SAP Weekend : Part 2 – Using the Microsoft BizTalk Server for B2B Integration with SharePoint

This is Part 2 of my past weekend’s activities with SharePoint and SAP Integration methods.

 

In this post I am looking at how to use the BizTalk Adapter with SharePoint

 

Topics

  • Abstract
  • Goal
  • Business Scenario
  • Environment
  • Document Flow
  • Integration Steps
  • .NET Support
  • Summary

 

Abstract

In the past few years, the whole perspective of doing business has shifted towards implementing Enterprise Resource Planning (ERP) systems for key areas like marketing, sales and manufacturing operations. Today, most of the large organizations that deal with all the major world markets rely heavily on such systems.

An organization’s operational picture is drawn from its worldwide network of marketing teams as well as from its manufacturing and distribution operations. In order to provide customers with realistic information, each of these systems needs to be integrated as part of the larger enterprise.

This ultimately results in a more efficient enterprise overall, providing more reliable information and better customer service. This article addresses the integration of BizTalk Server and an Enterprise Resource Planning system, the need for that integration, and its role in the current e-business scenario.

 

Goal

Several key business drivers, such as customers and partners, need to communicate on different fronts for a successful business relationship. To achieve this communication, various systems need to be integrated, which leads to evaluating and developing a B2B integration capability and an e-business strategy. This improves the quality of the business information at the organization’s disposal, helping to improve delivery times, reduce costs, and offer customers a higher level of overall service.

To provide B2B capabilities, there is a need to give access to the business application data, providing partners with the ability to execute global business transactions. Facing internal integration and business-to-business (B2B) challenges on a global scale, an organization needs to look for a suitable solution.

To integrate worldwide marketing, manufacturing and distribution facilities based on a core ERP system with a variety of information systems, an organization needs a strategic deployment of integration technology products and integration service capabilities.

 

Business Scenario

Take the example of ABC Manufacturing Company, whose success rests on the strength of its Europe-wide trading relationships. The company recognizes the need to strengthen these relationships by processing orders faster and more efficiently than ever before.

The company needed a new platform that could integrate orders from several countries, accepting payments in multiple currencies and translating measurements according to each country’s standards. The bottom line for ABC’s e-strategy was to accelerate order processing. To achieve this, the basic necessity was to eliminate multiple collections of the same data and the use of invalid data.

By using less paper, ABC would cut processing costs and speed up the information flow. Keeping this long-term goal in mind, ABC Manufacturing Company can now think of integrating its four key countries into a new business-to-business (B2B) platform.

 

Here is another example, this time of XYZ Marketing Company. Users visit this company’s website to explore a variety of products; the company serves thousands of customers all over the world. XYZ always understood that it could offer greater benefits to customers if it could more efficiently integrate its customers’ back-end systems. With such integration, customers could enjoy the advantages of highly efficient e-commerce sites, where a visitor on the Web could place an order that would flow smoothly from the website to the customer’s order entry system.

 

Some of those back-end order entry systems are built on the latest, most sophisticated enterprise resource planning (ERP) systems on the market, while others are built on legacy systems that have never been upgraded. Different customers require information formatted in different ways, but XYZ has no elegant way to transform the information coming out of the website to meet customer needs. With the traditional approach:

For each new e-commerce customer on the site, XYZ’s staff needs to spend a significant amount of time creating a transformation application to facilitate the exchange of information. With a better approach: XYZ needs a robust messaging solution that provides the flexibility and agility to meet a range of customer needs quickly and effectively. Once again, XYZ can think of integrating its customers’ back-end systems with the help of a business-to-business (B2B) platform.

 

Environment

Many large-scale organizations maintain a centralized SAP environment as their core enterprise resource planning (ERP) system. The SAP system is used for the management and processing of all global business processes and practices. B2B integration mainly relies on asynchronous messaging, Electronic Data Interchange (EDI) and XML document transformation mechanisms to facilitate the transformation and exchange of information between the ERP system and other applications, including legacy systems.

For business document routing, transformation, and tracking, the existing SAP-XML/EDI technology road map needs an XML service engine. This allows the development of a complex set of mappings from and to SAP to meet internal and external XML/EDI technology and business strategy. Microsoft BizTalk Server is the best choice to handle the data interchange and mapping requirements. BizTalk Server has the most comprehensive development and management support among business-to-business platforms. Microsoft BizTalk Server and BizTalk XML Framework version 2.0 with Simple Object Access Protocol (SOAP) version 1.1 provide precisely the kind of messaging solution needed to facilitate this integration in a cost-effective manner.

 

Document Flow

Friends, now let’s look at the actual flow of a document from the source system to the customer’s target system using BizTalk Server. When a document is created, it is sent to a TCP/IP-based Application Link Enabling (ALE) port, a BizTalk-based receive function that is used for XML conversion. The document then passes the XML to a processing script (VBScript) that runs as a BizTalk Application Integration Component (AIC). The following figure shows how BizTalk Server acts as a hub between applications that reside in two different organizations:

The data is serialized to the customer/vendor XML format using the Extensible Stylesheet Language Transformations (XSLT) generated from the BizTalk Mapper using a BizTalk channel. The XML document is sent using synchronous Hypertext Transfer Protocol Secure (HTTPS) or another requested transport protocol such as the Simple Mail Transfer Protocol (SMTP), as specified by the customer.

The following figure shows steps for XML document transformation:

The total serialized XML result is passed back to the processing script that is running as a BizTalk AIC. An XML “receipt” document then is created and submitted to another BizTalk channel that serializes the XML status document into a SAP IDOC status message. Finally, a Remote Function Call (RFC) is triggered to the SAP instance/client using a compiled C++/VB program to update the SAP IDOC status record. A complete loop of document reconciliation is achieved. If the status is not successful, an e-mail message is created and sent to one of the Support Teams that own the customer/vendor business XML/EDI transactions so that the conflict can be resolved. All of this happens instantaneously in a completely event-driven infrastructure between SAP and BizTalk.

Integration Steps

Let’s talk about a very popular Order Entry and tracking scenario while discussing integration hereafter. The following sections describe the high-level steps required to transmit order information from Order Processing pipeline Component into the SAP/R3 application, and to receive order status update information from the SAP/R3 application.

The integration of AFS purchase order reception with SAP is achieved using the BizTalk Adapter for SAP (BTS-SAP). The IDOC handler is used by the BizTalk Adapter to provide the transactional support for bridging tRFC (Transactional Remote Function Calls) to MSMQ DTC (Distributed Transaction Coordinator). The IDOC handler is a COM object that processes IDOC documents sent from SAP through the Com4ABAP service, and ensures their successful arrival at the appropriate MSMQ destination. The handler supports the methods defined by the SAP tRFC protocol. When integrating purchase order reception with the SAP/R3 application, BizTalk Server (BTS) provides the transformation and messaging functionality, and the BizTalk Adapter for SAP provides the transport and routing functionality.

The following two sequential steps indicate how the whole integration takes place:

  • Purchase order reception integration
  • Order Status Update Integration

Purchase Order Reception Integration

  1. Suppose a new pipeline component is added to the Order Processing pipeline. This component creates an XML document that is equivalent to the OrderForm object that is passed through the pipeline. This XML purchase order is in Commerce Server Order XML v1.0 format, and once created, is sent to a special Microsoft Message Queue (MSMQ) queue created specifically for this purpose. Writing the order from the pipeline to MSMQ:

    The first step in sending order data to the SAP/R3 application involves building a new pipeline component to run within the Order Processing pipeline. This component must perform the following two tasks:

    A] Make an XML-formatted copy of the OrderForm object that is passing through the order processing pipeline. The GenerateXMLForDictionaryUsingSchema method of the DictionaryXMLTransforms object is used to create the copy.

    Private Function IPipelineComponent_Execute(ByVal objOrderForm As Object, _
        ByVal objContext As Object, ByVal lFlags As Long) As Long
    
    On Error GoTo ERROR_Execute
    
    Dim oXMLTransforms As Object
    Dim oXMLSchema As Object
    Dim oOrderFormXML As Object
    
    ' Return 1 for Success.
    IPipelineComponent_Execute = 1
    
    ' Create a DictionaryXMLTransforms object.
    Set oXMLTransforms = CreateObject("Commerce.DictionaryXMLTransforms")
    
    ' Create a PO schema object.
    Set oXMLSchema = oXMLTransforms.GetXMLFromFile(sSchemaLocation)
    
    ' Create an XML version of the order form.
    Set oOrderFormXML = oXMLTransforms.GenerateXMLForDictionaryUsingSchema _
        (objOrderForm, oXMLSchema)
    
    WritePO2MSMQ sQueueName, oOrderFormXML.xml, PO_TO_ERP_QUEUE_LABEL, _
        sBTSServerName, AFS_PO_MAXTIMETOREACHQUEUE
    
    Exit Function
    
    ERROR_Execute:
    App.LogEvent "QueuePO.CQueuePO -> Execute Error: " & _
    vbCrLf & Err.Description, vbLogEventTypeError
    
    ' Set warning level.
    IPipelineComponent_Execute = 2
    Resume Next
    
    End Function

    B] Send the newly created XML order document to the MSMQ queue defined for this purpose.

    Option Explicit
    
    ' MSMQ constants.
    
    ' Access modes.
    Const MQ_RECEIVE_ACCESS = 1
    Const MQ_SEND_ACCESS = 2
    Const MQ_PEEK_ACCESS = 32
    
    ' Sharing modes.
    Const MQ_DENY_NONE = 0
    Const MQ_DENY_RECEIVE_SHARE = 1
    
    ' Transaction options.
    Const MQ_NO_TRANSACTION = 0
    Const MQ_MTS_TRANSACTION = 1
    Const MQ_XA_TRANSACTION = 2
    Const MQ_SINGLE_MESSAGE = 3
    
    ' Error messages.
    Const MQ_ERROR_QUEUE_NOT_EXIST = -1072824317
    
    ' MQ message acknowledgement options.
    Const MQMSG_ACKNOWLEDGMENT_FULL_REACH_QUEUE = 5
    Const MQMSG_ACKNOWLEDGMENT_FULL_RECEIVE = 14
    
    ' Default MaxTimeToReachQueue value (in seconds).
    Const DEFAULT_MAX_TIME_TO_REACH_QUEUE = 20
    
    Function WritePO2MSMQ(sQueueName As String, sMsgBody As String, _
        sMsgLabel As String, sServerName As String, _
        Optional MaxTimeToReachQueue As Variant) As Long
    
    Dim lMaxTime As Long
    
    If IsMissing(MaxTimeToReachQueue) Then
    lMaxTime = DEFAULT_MAX_TIME_TO_REACH_QUEUE
    Else
    lMaxTime = MaxTimeToReachQueue
    End If
    
    Dim objQueueInfo As MSMQ.MSMQQueueInfo
    Dim objQueue As MSMQ.MSMQQueue, objAdminQueue As MSMQ.MSMQQueue
    Dim objQueueMsg As MSMQ.MSMQMessage
    
    On Error GoTo MSMQ_Error
    
    Set objQueueInfo = New MSMQ.MSMQQueueInfo
    objQueueInfo.FormatName = "DIRECT=OS:" & sServerName & "\PRIVATE$\" & sQueueName
    
    Set objQueue = objQueueInfo.Open(MQ_SEND_ACCESS, MQ_DENY_NONE)
    
    Set objQueueMsg = New MSMQ.MSMQMessage
    
    objQueueMsg.Label = sMsgLabel ' Set the message label property
    objQueueMsg.Body = sMsgBody ' Set the message body property
    objQueueMsg.Ack = MQMSG_ACKNOWLEDGMENT_FULL_REACH_QUEUE
    objQueueMsg.MaxTimeToReachQueue = lMaxTime
    
    objQueueMsg.send objQueue, MQ_SINGLE_MESSAGE
    
    objQueue.Close
    
    On Error Resume Next
    Set objQueueMsg = Nothing
    Set objQueue = Nothing
    Set objQueueInfo = Nothing
    
    Exit Function
    
    MSMQ_Error:
    App.LogEvent "Error in WritePO2MSMQ: " & Error
    Resume Next
    
    End Function
    
  2. A BTS MSMQ receive function picks up the document from the MSMQ queue and sends it to a BTS channel that has been configured for this purpose. Receiving the XML order from MSMQ: The second step in sending order data to the SAP/R3 application involves BTS receiving the order data from the MSMQ queue into which it was placed at the end of the first step. You must configure a BTS MSMQ receive function to monitor the MSMQ queue to which the XML order was sent in the previous step. This receive function forwards the XML message to the configured BTS channel for transformation.
  3. The third step in sending order data to the SAP/R3 application involves BTS transforming the order data from Commerce Server Order XML v1.0 format into ORDERS01 IDOC format. A BTS channel must be configured to perform this transformation. After the transformation is complete, the BTS channel sends the resulting ORDERS01 IDOC message to the corresponding BTS messaging port. The BTS messaging port is configured to send the transformed message to an MSMQ queue called the 840 Queue. Once the message is placed in this queue, the BizTalk Adapter for SAP is responsible for further processing. 
  4. BizTalk Adapter for SAP sends the ORDERS01 document to the DCOM Connector (you can get more information on the DCOM Connector from www.sap.com/bapi), which writes the order to the SAP/R3 application. The DCOM Connector is an SAP software product that provides a mechanism to send data to, and receive data from, an SAP system. When an IDOC message is placed in the 840 Queue, the DCOM Connector retrieves the message and sends it to SAP for processing. Although this processing is in the domain of the BizTalk Adapter for SAP, the steps involved are reviewed here as background information:
    • Determine the version of the IDOC schema in use and generate a BizTalk Server document specification.
    • Create a routing key from the contents of the Control Record of the IDOC schema.
    • Request a SAP Destination from the Manager Data Store given the constructed routing key.
    • Submit the IDOC message to the SAP System using the DCOM Connector 4.6D Submit functionality.

Order Status Update Integration

Order status update integration can be achieved by providing a mechanism for sending information about updates made within the SAP/R3 application back to the Commerce Server order system.

The following sequence of steps describes such a mechanism:

  1. BizTalk Adapter for SAP processing:
    After a user has updated a purchase order using the SAP client, and the IDOC has been submitted to the appropriate tRFC port, the BizTalk Adapter for SAP uses the DCOM connector to send the resulting information to the 840 Queue, packaged as an ORDERS01 IDOC message. The 840 Queue is an MSMQ queue into which the BizTalk Adapter for SAP places IDOC messages so that they can be retrieved and processed by interested parties. This process is within the domain of the BizTalk Adapter for SAP, and is used by this solution to achieve the order update integration.
  2. Receiving the ORDERS01 IDOC message from MSMQ:
    The second step in updating order status from the SAP/R3 application involves BTS receiving ORDERS01 IDOC message from the MSMQ queue (840 Queue) into which it was placed at the end of the first step. You must configure a BTS MSMQ receive function to monitor the 840 Queue into which the XML order status message was placed. This receive function must be configured to forward the XML message to the configured BTS channel for transformation.
  3. Transforming the order update from IDOC format:
    Using a BTS MSMQ receive function, the document is retrieved and passed to a BTS transformation channel. The BTS channel transforms the ORDERS01 IDOC message into Commerce Server Order XML v1.0 format, and then forwards it to the corresponding BTS messaging port. You must configure a BTS channel to perform this transformation. The following BizTalk Server (BTS) map, used in the prototyping of this solution, transforms an SAP ORDERS01 IDOC message into an XML document in Commerce Server Order XML v1.0 format. It allows a change to an order in the SAP/R3 application to be reflected in the Commerce Server orders database.

    This map used in the prototype only maps the order ID, demonstrating how the order in the SAP/R3 application can be synchronized with the order in the Commerce Server orders database. The mapping of other fields is specific to a particular implementation, and was not done for the prototype.

<xsl:stylesheet xmlns:xsl='http://www.w3.org/1999/XSL/Transform'
  xmlns:msxsl='urn:schemas-microsoft-com:xslt' xmlns:var='urn:var'
  xmlns:user='urn:user' exclude-result-prefixes='msxsl var user'
  version='1.0'>
<xsl:output method='xml' omit-xml-declaration='yes' />
<xsl:template match='/'>
  <xsl:apply-templates select='ORDERS01'/>
</xsl:template>
<xsl:template match='ORDERS01'>
  <orderform>

    <!-- Connection from source node "BELNR" to destination node "OrderID" -->

    <xsl:if test='E2EDK02/@BELNR'>
      <xsl:attribute name='OrderID'>
        <xsl:value-of select='E2EDK02/@BELNR'/>
      </xsl:attribute>
    </xsl:if>
  </orderform>
</xsl:template>
</xsl:stylesheet>

The BTS message port posts the transformed order update document to the configured ASP page for further processing. The configured ASP page retrieves the message posted to it and uses the Commerce Server OrderGroupManager and OrderGroup objects to update the order status information in the Commerce Server orders database.

  • Updating the Commerce Server order system:
    The fourth step in updating order status from the SAP/R3 application involves updating the Commerce Server order system to reflect the change in status. This is accomplished by adding the page _OrderStatusUpdate.asp to the AFS Solution Site and configuring the BTS messaging port to post the transformed XML document to that page. The update is performed using the Commerce Server OrderGroupManager and OrderGroup objects.

    The routine ProcessOrderStatus is the primary routine in the page. It uses the DOM and XPath to extract enough information to find the appropriate order using the OrderGroupManager object. Once the correct order is located, it is loaded into an OrderGroup object so that any of the entries in the OrderGroup object can be updated as needed.

    The following code implements page _OrderStatusUpdate.asp:

    < %@ Language="VBScript" %>
    
    < % 
    const TEMPORARY_FOLDER = 2
    
    call Main()
    
    Sub Main()
    call ProcessOrderStatus( ParseRequestForm() )
    End Sub
    
    Sub ProcessOrderStatus(sDocument)
    
    Dim oOrderGroupMgr 
    Dim oOrderGroup 
    Dim rs
    Dim sPONum
    Dim oAttr 
    Dim vResult
    Dim vTracking 
    Dim oXML
    Dim dictConfig
    Dim oElement
    
    Set oOrderGroupMgr = Server.CreateObject("CS_Req.OrderGroupManager")
    Set oOrderGroup = Server.CreateObject("CS_Req.OrderGroup")
    
    Set oXML = Server.CreateObject("MSXML.DOMDocument")
    oXML.async = False
    
    If oXML.loadXML (sDocument) Then
    
    ' Get the orderform element.
    Set oElement = oXML.selectSingleNode("/orderform")
    
    ' Get the poNum.
    sPONum = oElement.getAttribute("OrderID")
    
    Set dictConfig = Application("MSCSAppConfig").GetOptionsDictionary("")
    
    ' Use ordergroupmgr to find the order by OrderID.
    oOrderGroupMgr.Initialize (dictConfig.s_CatalogConnectionString)
    Set rs = oOrderGroupMgr.Find(Array("order_requisition_number='" & sPONum & "'"), _
        Array(""), Array(""))
    
    If rs.EOF And rs.BOF Then
    'Create a new one. - Not implemented in this version.
    Else
    ' Edit the current one.
    oOrderGroup.Initialize dictConfig.s_CatalogConnectionString, rs("User_ID")
    
    ' Load the found order.
    oOrderGroup.LoadOrder rs("ordergroup_id")
    
    ' For the purposes of prototype, we only update the status
    oOrderGroup.Value.order_status_code = 2 ' 2 = Saved order
    
    ' Save it
    vResult = oOrderGroup.SaveAsOrder(vTracking)
    
    End If
    Else
    WriteError "Unable to load received XML into DOM."
    End If
    
    End Sub
    
    Function ParseRequestForm()
    
    Dim PostedDocument
    Dim ContentType
    Dim CharSet
    Dim EntityBody
    Dim Stream
    Dim StartPos
    Dim EndPos
    
    ContentType = Request.ServerVariables( "CONTENT_TYPE" )
    
    ' Determine request entity body character set (default to us-ascii).
    CharSet = "us-ascii"
    StartPos = InStr( 1, ContentType, "CharSet=""", 1)
    If (StartPos > 0 ) then
    StartPos = StartPos + Len("CharSet=""")
    EndPos = InStr( StartPos, ContentType, """",1 )
    CharSet = Mid (ContentType, StartPos, EndPos - StartPos )
    End If
    
    ' Check for multipart MIME message.
    PostedDocument = ""
    
    if ( ContentType = "" or Request.TotalBytes = 0) then
    
    ' Content-Type is required as well as an entity body.
    Response.Status = "406 Not Acceptable"
    Response.Write "Content-type or Entity body is missing" & VbCrlf
    Response.Write "Message headers follow below:" & VbCrlf
    Response.Write Request.ServerVariables("ALL_RAW") & VbCrlf
    Response.End
    Else
    If ( InStr( 1,ContentType,"multipart/" ) >

    .NET Support

    This multi-tier application environment can be implemented successfully with the help of a Web portal that utilizes the Microsoft .NET Enterprise Server model. The Microsoft BizTalk Server Toolkit for Microsoft .NET provides the ability to leverage the power of XML Web services and Visual Studio .NET to build dynamic, transaction-based, fault-tolerant systems with full access to existing applications.

    Summary

    Microsoft BizTalk Server can help organizations quickly establish and manage Internet relationships with other organizations. It makes it possible for them to automate document interchange with any other organization, regardless of the conversion requirements and data formats used. This provides a cost-effective approach for integrating business processes across large Enterprise Resource Planning systems. The integration process is designed to facilitate collaborative e-commerce business processes. It includes a document interchange engine, a business process execution engine, and a set of business document and server management tools. In addition, business document editor and mapper tools are provided for managing trading partner relationships, administering server clusters, and tracking transactions.

    SAP Weekend : Part 1 – ERPConnect Services for SharePoint 2010 (ECS)

    This weekend was spent completing my new “List Search Web Part” and also 2 free Web Parts that are included in the “List Web Part Pack” – more about this in a future blog post.

    erp256-bc0e84ce

    In between, the “SAP Bug” bit me again and I decided to write a blog post series on the various adapters I have used in SharePoint and SAP integration projects, and to give you a basic “run down” of how, and with which technologies, each adapter connects the two systems.

    ERPConnect was 1st on the list. ….

     

    Yes, I can hear the grumblings of those of us who have worked with SAP and SharePoint  Integration and the ERPConnect adapter before 🙂

    For starters, you need to have a SAP Developer Key to be allowed to use the SAP web service wizard, and you also need the required SAP authorizations. In other cases IT operations may not allow any modification to the SAP environment, even if it is limited to the fully automatic generation and activation of the BAPI web service(s).

    Another reason, from a system architecture viewpoint, is that single BAPI and/or RFC calls may be of too low granularity. You actually want to perform a ‘business transaction’, consisting of multiple method invocations which must be treated as a Logical Unit of Work (LUW). SAP has introduced the concept of SAP Enterprise Services for this, and has delivered a first set of them. This set is by far not complete yet, and SAP will augment it in the coming years.

    SharePoint 2010 provides developers with the capability to integrate external data sources like SAP business data via the Business Connectivity Services (BCS) into the SharePoint system. The concept of BCS is based on entities and associated stereotyped operations. This is a perfect fit for flat and simply structured data sets like SAP tables.

    Another, and far more flexible, option to use SAP data in SharePoint is the ERPConnect Services for SharePoint 2010 (ECS). The product suite consists of three components: the ERPConnect Services runtime, the BCS Connector application and Xtract PPS for PerformancePoint Services.

    The runtime provides a Service Application that integrates itself with the new service architecture of SharePoint 2010. The runtime offers a secure middle-tier layer to integrate different kinds of SAP objects in your SharePoint applications, like tables and function modules.

    The BCS Connector application allows developers to create BDC models for the BCS Services, without programming knowledge. You may export the BDC models created by the BCS Connector to Visual Studio 2010 for further customizing.

     

    The Xtract PPS component offers a SAP data source provider for the PerformancePoint Services of SharePoint 2010.

    This article gives you an overview of the ERPConnect Services runtime and shows how you can create and incorporate business data from SAP in different SharePoint application types, like Web Parts, Application Pages or Silverlight modules.

    This article does not introduce the other components.

    Background

    This section gives you a short explanation and background of the SAP objects that can be used in ERPConnect Services. The most important objects are SAP tables and function modules. A function module is basically similar to a procedure in conventional programming languages. Function modules are written in ABAP, the SAP programming language, and are accessible from any other program within a SAP system. They accept import and export parameters as well as other kinds of special parameters.

     

    In addition, BAPIs (Business APIs) are special function modules that are organized within the SAP Business Object Repository. In order to use function modules with the runtime, they must be marked as remote-enabled (RFC). SAP table data can also be retrieved. Tables in SAP are basically relational database tables. Other SAP objects like BW Cubes or SAP Queries can be accessed via the XtractQL query language (see below).

     

    Installation & Configuration

    Installing ERPConnect Services on a SharePoint 2010 server is done by an installer and is straightforward. The SharePoint Administration Service must be running on the local server (see Windows Services).

    For more information see the product documentation. After the installation has completed successfully, navigate to the Service Applications screen within the Central Administration (CA) of SharePoint:

     

    Before creating your first Service Application a Secure Store must be created, where ERPConnect Services will save SAP user credentials. In the settings page for the “Secure Store Service” create a new Target Application and name the application “ERPConnect Services”. Click on the button “Next” to define the store fields as follows:

     

    Finish the creation process by clicking on “Next” and define application administrators. Then, mark the application, click “Set Credentials” and enter the SAP user credentials:

     

    Let’s go on and create a new ERPConnect Service Application!

    Click the “ERPConnect Service Application” link in the “New” menu of the Service Applications page (see also first screenshot above). This opens a dialog to define the name of the service application, the SAP connection data and the IIS application pool:

     

    Click “Create” after entering all data and you will see the following entries in the Service Applications screen:

     

    That’s it! You are now done setting up your first ERPConnect Service Application.

    Development

    The runtime functionality covers different programming demands such as generically retrievable interface functions. The service applications are managed by the Central Administration of SharePoint. The following service and function areas are provided:

    1. Querying and retrieving data directly from SAP tables
    2. Executing SAP function modules / BAPIs
    3. Executing XtractQL query statements

    The next sections show how to use these service and function areas and how to access different SAP objects from within your custom SharePoint applications using the ERPConnect Services. The runtime can be used in applications within the SharePoint context, like Web Parts or Application Pages.

    In order to do so, you need to reference the assembly ERPConnectServices.Server.Common.dll in the project. Before you can access data from the SAP system you must create an instance of the ERPConnectServiceClient class. This is the gateway to all SAP objects and to the generic API of the runtime overall. In the SharePoint context there are two options to create a client object instance:

    // Option #1
    ERPConnectServiceClient client = new ERPConnectServiceClient();
    
    // Option #2
    ERPConnectServiceApplicationProxy proxy = SPServiceContext.Current.GetDefaultProxy(
       typeof(ERPConnectServiceApplicationProxy)) as ERPConnectServiceApplicationProxy;
    ERPConnectServiceClient client = proxy.GetClient();

    For more details on using ECS in Silverlight or desktop applications see the specific sections below.

    Querying Tables

    Querying and retrieving table data is a common task for developers. The runtime allows retrieving data directly from SAP tables. The ERPConnectServiceClient class provides a method called ExecuteTableQuery with two overrides which query SAP tables in a simple way.

    The method also supports a way to pass miscellaneous parameters like row count and skip, custom function, where clause definition and a returning field list. These parameters can be defined by using the ExecuteTableQuerySettings class instance.

    DataTable dt = client.ExecuteTableQuery("T001");
    
    …
        
    ExecuteTableQuerySettings settings = new ExecuteTableQuerySettings {
      RowCount = 100,
      WhereClause = "ORT01 = 'Paris' AND LAND1 = 'FR'",
      Fields = new ERPCollection<string> { "BUKRS", "BUTXT", "ORT01", "LAND1" }
    };
    
    DataTable dt = client.ExecuteTableQuery("T001", settings);
    
    …
    
    // Sample 2
    DataTable dt = client.ExecuteTableQuery("MAKT",
                new ExecuteTableQuerySettings {
                    RowCount = 10,
                    WhereClause = "MATNR = '60-100C'",
                    OrderClause = "SPRAS DESC"
                });

    The first query reads all records from the SAP table T001 where the field ORT01 equals Paris and the field LAND1 equals FR (France). The query returns the top 100 records, and the result set contains only the fields BUKRS, BUTXT, ORT01 and LAND1.

    The second query returns the top ten records of the SAP table MAKT, where the field MATNR equals the material number 60-100C. The result set is ordered by the field SPRAS in descending order.

    Executing Function Modules

    In addition to querying SAP tables, the runtime API can execute SAP function modules (BAPIs). Function modules must be marked as remote-enabled modules (RFC) within SAP.

    The ERPConnectServiceClient class provides a method called CreateFunction to create a structure of metadata for the function module. The method returns an instance of the data structure ERPFunction. This object instance contains all parameters types (import, export, changing and tables) that can be used with function modules.

    In the sample below we call the function SD_RFC_CUSTOMER_GET and pass a name pattern (T*) for the export parameter NAME1. Then we call the Execute method on the ERPFunction instance. Once the method has been executed, the data structure is updated. The function returns all matching customers in the table CUSTOMER_T.

    ERPFunction function = client.CreateFunction("SD_RFC_CUSTOMER_GET");
    function.Exports["NAME1"].ParamValue = "T*";
    function.Execute();
    
    foreach(ERPStructure row in function.Tables["CUSTOMER_T"])
      Console.WriteLine(row["NAME1"] + ", " + row["ORT01"]);

    The following code shows an additional sample. Before we can execute this function module we need to define a table with HR data as an input parameter.

    The parameters you need, and the values the function module returns, depend on the implementation of the function module.

    ERPFunction function = client.CreateFunction("BAPI_CATIMESHEETMGR_INSERT");
    function.Exports["PROFILE"].ParamValue = "TEST";
    function.Exports["TESTRUN"].ParamValue = "X";
    
    ERPTable records = function.Tables["CATSRECORDS_IN"];
    ERPStructure r1 = records.AddRow();
    r1["EMPLOYEENUMBER"] = "100096";
    r1["WORKDATE"] = "20110704";
    r1["ABS_ATT_TYPE"] = "0001";
    r1["CATSHOURS"] = (decimal)8.0;
    r1["UNIT"] = "H";
    
    function.Execute();
    
    ERPTable ret = function.Tables["RETURN"]; 
    
    foreach(var i in ret)
      Console.WriteLine("{0} - {1}", i["TYPE"], i["MESSAGE"]);

    Executing XtractQL Query Statements

    The ECS runtime offers a SAP query language called XtractQL. The XtractQL query language, also known as XQL, consists of ABAP and SQL syntax elements. XtractQL allows querying SAP tables, BW-Cubes and SAP Queries, and executing function modules.

    It’s possible to return metadata for the objects, and MDX statements can also be executed with XQL. All XQL queries return a data table object as the result set. In the case of executing function modules, the caller must define the returning table (see the sample below – INTO @RETVAL). XQL is very useful in situations where you need to handle dynamic statements. The following samples show typical XQL statements:

    SELECT TOP 5 * FROM T001W WHERE FABKL = 'US'

    This query selects the top 5 records of the SAP table T001W where the field FABKL equals the value US.

    SELECT * FROM MARA WITH-OPTIONS(CUSTOMFUNCTIONNAME = 'Z_XTRACT_IS_TABLE')

     

    SELECT MAKTX AS [ShortDesc], MANDT, SPRAS AS Language FROM MAKT

    This query selects all records of the SAP table MAKT. The result set will contain three fields named ShortDesc, MANDT and Language.

     

    EXECUTE FUNCTION 'SD_RFC_CUSTOMER_GET'
       EXPORTS KUNNR='0000003340'
       TABLES CUSTOMER_T INTO @RETVAL;

    This query executes the SAP function module SD_RFC_CUSTOMER_GET and returns as result the table CUSTOMER_T (defined as @RETVAL).

    DESCRIBE FUNCTION 'SD_RFC_CUSTOMER_GET' GET EXPORTS

    This query returns metadata about the export parameters of the function module SD_RFC_CUSTOMER_GET.

    SELECT TOP 30 LIPS-LFIMG, LIPS-MATNR, TEXT_LIKP_KUNNR AS CustomerID
       FROM QUERY 'S|ZTHEO02|ZLIKP'
       WHERE SP$00002 BT '0080011000' AND '0080011999'

    This statement executes the SAP Query “S|ZTHEO02|ZLIKP” (the name includes the workspace, user group and the query name). As you can see, XtractQL extends the SQL syntax with ABAP or SAP-specific syntax elements. This way you can define fields using the LIPS-MATNR format and SAP-like where clauses such as “SP$00002 BT ‘0080011000’ AND ‘0080011999’”.

    ERPConnect Services provides a little helper tool, the XtractQL Explorer (see screenshot below), to learn more about the query language and to test XQL queries. You can use this tool independent of SharePoint, but you need access to a SAP system.

    To find out more about all XtractQL language syntax see the product manual.

    Silverlight And Desktop Applications

    So far all samples are using the assembly ERPConnectServices.Server.Common.dll as project reference and all code snippets shown run within the SharePoint context, e.g. Web Part.

    ERPConnect Services also provides client libraries for Silverlight and desktop applications:

    ERPConnectServices.Client.dll for Desktop applications
    ERPConnectServices.Client.Silverlight.dll for Silverlight applications

    You need to add the appropriate reference depending on what kind of project you are implementing.

    In Silverlight the implementation and design pattern is a little bit more complicated, since all web services are called asynchronously. It’s also not possible to use the DataTable class, as it is simply not implemented for Silverlight.

    The runtime provides a similar class called ERPDataTable, which is used in these cases by the API. The ERPConnectServiceClient class for Silverlight provides the method ExecuteTableQueryAsync and an event called ExecuteTableQueryCompleted as a callback delegate.

    public event EventHandler<ExecuteTableQueryCompletedEventArgs> ExecuteTableQueryCompleted;
    
    public void ExecuteTableQueryAsync(string tableName)
    public void ExecuteTableQueryAsync(string tableName, ExecuteTableQuerySettings settings)

    The following code sample shows a simple query of the SAP table T001 within a Silverlight client.

    First of all, an instance of the ERPConnectServiceClient is created using the URI of the ERPConnectService.svc, then a delegate is defined to handle the completed callback. Next, the query is executed, with a RowCount of 150 to limit the number of records returned in the result set.

    Once the result is returned the data set will be attached to a DataGrid control (see screenshot below) within the callback method.

     

    void OnGetTableDataButtonClick(object sender, RoutedEventArgs e)
    {
      ERPConnectServiceClient client = new ERPConnectServiceClient(
        new Uri("http://<SERVERNAME>/_vti_bin/ERPConnectService.svc"));
    
      client.ExecuteTableQueryCompleted += OnExecuteTableQueryCompleted;
      client.ExecuteTableQueryAsync("T001", 
        new ExecuteTableQuerySettings { RowCount = 150 });
    }
    
    void OnExecuteTableQueryCompleted(object sender, ExecuteTableQueryCompletedEventArgs e)
    {
      if(e.Error != null)
        MessageBox.Show(e.Error.Message);
      else
      {
        e.Table.View.GroupDescriptions.Add(new PropertyGroupDescription("ORT01"));
        TableGrid.ItemsSource = e.Table.View;
      }
    }

    The screenshot below shows the XAML of the Silverlight page:

    The final result can be seen below:

    ECS Designer

    ERPConnect Services includes a Visual Studio 2010 plug-in, the ECS Designer, that allows developers to visually design SAP interfaces. It works similarly to the LINQ to SAP Designer I wrote about a while ago; see the article at CodeProject: LINQ to SAP.

    The ECS Designer is not automatically installed when you install the product. You need to run its installation program manually. The setup adds a new project item type to Visual Studio 2010 with the file extension .ecs and links it with the designer. The needed references are added automatically after adding an ECS project item.

    The designer generates source code to integrate with the ERPConnect Services runtime after the project item is saved. The generated context class contains methods and sub-classes that represent the defined SAP objects (see screenshots below).

     

    Before you access the SAP system for the first time you will be asked to enter the connection data. You may also load the connection data from SharePoint system. The designer GUI is shown in the screenshots below:

     

    The screenshot above for instance shows the tables dialog. After clicking the Add (+) button in the main designer screen and searching a SAP table in the search dialog, the designer opens the tables dialog.

     

    In this dialog you can change the name of the generated class, the class modifier and all needed properties (fields) the final class should contain.

     

    To preview your selection press the Preview button. The next screenshot shows the automatically generated classes in the file named EC1.Designer.cs:

     

    Using the generated code is simple. The project type we are using for this sample is a standard console application, therefore the designer is referencing the ERPConnectServices.Client.dll for desktop applications.

    Since we are not within the SharePoint context, we have to define the URI of the SharePoint system by passing this value into the constructor of the ERPConnectServicesContext class.

    The designer has generated a class MAKT and an access property MAKTList on the context class for the table MAKT. The type of this property, MAKTList, is ERPTableQuery<MAKT>, which is a LINQ-queryable data type.

     

    This means you can use LINQ statements to define the underlying query. Internally, the ERPTableQuery<T> type will translate your LINQ query into a call to ExecuteTableQuery.
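
    As a rough illustration, a query against the generated context might look like the following minimal sketch. The constructor argument and the property names used on the MAKT class (MATNR, MAKTX) are assumptions based on the table fields shown earlier; check the generated EC1.Designer.cs for the exact members.

    // Minimal sketch, assuming the generated MAKT class exposes the table fields as properties.
    ERPConnectServicesContext context =
        new ERPConnectServicesContext("http://<SERVERNAME>");
    
    var shortTexts = from m in context.MAKTList
                     where m.MATNR == "60-100C"   // material number (assumed property name)
                     select m;
    
    foreach (MAKT makt in shortTexts)
        Console.WriteLine(makt.MAKTX);            // material short description (assumed property name)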

     

    That’s it!

     

    Advanced Techniques

    There are situations when you have to use the exact same SAP connection while calling a series of function modules in order to receive the correct result. Let’s take the following code:

    ERPConnectServiceClient client = new ERPConnectServiceClient();
    
    using(client.BeginConnectionScope())
    {
      ERPFunction f = client.CreateFunction("BAPI_GOODSMVT_CREATE");
    
      ERPStructure s = f.Exports["GOODSMVT_HEADER"].ToStructure();
      s["PSTNG_DATE"] = "20110609"; // Posting Date in the Document
      s["PR_UNAME"] = "BAEURLE";    // UserName
      s["HEADER_TXT"] = "XXX";      // HeaderText
      s["DOC_DATE"] = "20110609";   // Document Date in Document
    
      f.Exports["GOODSMVT_CODE"].ToStructure()["GM_CODE"] = "01";
    
      ERPStructure r = f.Tables["GOODSMVT_ITEM"].AddRow();
      r["PLANT"] = "1000";          // Plant
      r["PO_NUMBER"] = "4500017210"; // Purchase Order Number
      r["PO_ITEM"] = "010";      // Item Number of Purchasing Document 
      r["ENTRY_QNT"] = 1;          // Quantity in Unit of Entry
      r["MOVE_TYPE"] = "101";        // Movement Type
      r["MVT_IND"] = "B";            // Movement Indicator
      r["STGE_LOC"] = "0001";        // Storage Location
    
      f.Execute();
    
      string matDocument = f.Imports["MATERIALDOCUMENT"].ParamValue as string;
      string matDocumentYear = f.Imports["MATDOCUMENTYEAR"].ParamValue as string;
    
      ERPTable ret = f.Tables["RETURN"]; //.ToADOTable();
    
      foreach(var i in ret)
        Console.WriteLine("{0} - {1}", i["TYPE"], i["MESSAGE"]);
    
      ERPFunction fCommit = client.CreateFunction("BAPI_TRANSACTION_COMMIT");
      fCommit.Exports["WAIT"].ParamValue = "X";
      fCommit.Execute();
    }

    In this sample we create a goods receipt for a goods movement with BAPI_GOODSMVT_CREATE. The final call to BAPI_TRANSACTION_COMMIT will only work, if the system under the hood is using the same connection object.

     

    The runtime does not provide direct access to the underlying SAP connection, but the library offers a mechanism called connection scoping. You can create a new connection scope with the client library, telling ECS to use the same SAP connection until you close the connection scope. Within the connection scope every library call will use the same SAP connection.

    In order to create a new connection scope you need to call the BeginConnectionScope method of the class ERPConnectServiceClient.

    The method returns an IDisposable object, which can be used in conjunction with the using statement of C# to end the connection scope.

    Alternatively, you may call the EndConnectionScope method explicitly.
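
    As a minimal sketch of that explicit variant (assuming EndConnectionScope takes no arguments), the scope can be closed from a finally block instead of a using statement:

    ERPConnectServiceClient client = new ERPConnectServiceClient();
    
    client.BeginConnectionScope();
    try
    {
      // Execute the related function modules on the same SAP connection,
      // e.g. BAPI_GOODSMVT_CREATE followed by BAPI_TRANSACTION_COMMIT as shown above.
      ERPFunction commit = client.CreateFunction("BAPI_TRANSACTION_COMMIT");
      commit.Exports["WAIT"].ParamValue = "X";
      commit.Execute();
    }
    finally
    {
      // Close the scope so subsequent calls no longer share the connection.
      client.EndConnectionScope();
    }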

    It’s also possible to use function modules with nested structures as parameters. This is a special construct in SAP. The goods receipt sample above uses a nested structure for the export parameter GOODSMVT_CODE. For more detailed information about nested structures and tables see the product documentation.

    SharePoint 2013 – Creating a Word document with OOXML

    This solution is based on the SharePoint-hosted app template provided by Visual Studio 2012. The solution enumerates through each document library in the host website, and adds the library to a drop-down list.

    2008040211105590dad[1]

     

    When the user selects a library and clicks a tile, the app creates a sample Word 2013 document in the selected library by using OOXML.
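
    For context, the core of the document-creation step with the OpenXML SDK (the DocumentFormat.OpenXml assembly referenced by this sample) looks roughly like the following sketch. The helper name CreateSampleDocument and the document text are illustrative only; uploading the resulting bytes to the selected library is a separate step handled by the app.

    using DocumentFormat.OpenXml;
    using DocumentFormat.OpenXml.Packaging;
    using DocumentFormat.OpenXml.Wordprocessing;
    
    // Builds a minimal .docx in memory and returns its bytes.
    static byte[] CreateSampleDocument(string text)
    {
        using (var stream = new System.IO.MemoryStream())
        {
            using (WordprocessingDocument doc =
                WordprocessingDocument.Create(stream, WordprocessingDocumentType.Document))
            {
                MainDocumentPart mainPart = doc.AddMainDocumentPart();
                mainPart.Document = new Document(
                    new Body(
                        new Paragraph(
                            new Run(
                                new Text(text)))));
                mainPart.Document.Save();
            }
            return stream.ToArray();
        }
    }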

    Prerequisites

    This sample requires the following:

    • Visual Studio 2012
    • Office Developer Tools for Visual Studio 2012
    • Either of the following:
      • SharePoint Server 2013 configured to host apps, and with a Developer Site collection already created; or,
      • Access to an Office 365 Developer Site configured to host apps.

    Key components of the sample

    The sample app contains the following:

    • The Default.aspx webpage, which is used to enumerate through each document library in the host website, and to render the tiles shown in the app.
    • The Point8020Metro.css style sheet (in the CSS folder) which contains some simple styles for rendering tiles.
    • The AppManifest.xml file, which has been edited to specify that the app requests Full Control permissions for the hosting web.
    • References to the DocumentFormat.OpenXml assembly provided by the OpenXML SDK 2.5.

    All other files are automatically provided by the Visual Studio project template for apps for SharePoint, and they have not been modified in the development of this sample.

    Configure the sample

    Follow these steps to configure the sample.

    1. Open the SP_Autohosted_OOXML_cs.sln file using Visual Studio 2012.
    2. In the Properties window, add the full URL to your SharePoint Server 2013 Developer Site collection or Office 365 Developer Site to the Site URL property.

    No other configuration is required.

    Build the sample

    To build the sample, press CTRL+SHIFT+B.

    Run and test the sample

    To run and test the sample, do the following:

    1. Press F5 to run the app.
    2. Sign in to your SharePoint Server 2013 Developer Site collection or Office 365 Developer Site if you are prompted to do so by the browser.
    3. Trust the app when you are prompted to do so.

    The following images illustrate the app. In Figure 1 the app has been trusted and libraries added to the drop-down list.

    Figure 1. View of the app with drop-down list

    Figure 1

    In Figure 2, the user has clicked the orange tile. The document is created, and the red tile provides a link to the library in which the document was created (Figure 3), which the user reaches by clicking it.

    Figure 2. Open XML document creator

    Figure 2

    Figure 3. Document library

    Figure 3

    Troubleshooting

    Ensure that you have SharePoint Server 2013 configured to host apps (with a Developer Site Collection already created), or that you have signed up for an Office 365 Developer Site configured to host apps.

    Change log

    First release: January 30, 2013.

    Related content

    Developing a Real Outlook Social Connector

    This section contains a set of four Visual How Tos that shows how to develop a real provider for the Microsoft Outlook Social Connector (OSC) by using the OSC Provider Proxy Library.


    An OSC provider allows Outlook users to view, in the People Pane, an aggregation of social information updates that are applied on a professional or social network site. An OSC provider is a Component Object Model (COM) DLL. The OSC provider extensibility interfaces form the medium through which the OSC and an OSC provider communicate. OSC provider extensibility consists of a set of interfaces that is available as an open platform. These interfaces allow the OSC to access social network data in a way that is independent of the APIs of each social network. An OSC provider obtains social network data from the corresponding social network and, through implementing the extensibility interfaces, feeds that social network data to the OSC.

    The OSC Provider Proxy Library simplifies the implementation of the OSC provider extensibility interfaces. Instead of a provider implementing each extensibility interface explicitly, the proxy library implements them and forwards the calls to a set of abstract and virtual methods in the proxy library.

    A provider, in turn, overrides this set of abstract and virtual methods with the business logic specific to the social network, to return social network data that the OSC requires.

    To show how a provider can use the OSC Provider Proxy Library, this set of Visual How Tos describes a real provider for OfficeTalk. OfficeTalk is a social network in a private corporate environment and is not publicly available.

    Nonetheless, it is a good example of the kind of social network that you might want to develop a custom OSC provider for. You can use the procedures for creating the OSC provider for OfficeTalk to create a custom OSC provider for any social network.

    Developing a Real Outlook Social Connector Provider by Using a Proxy Library

     

    Overview

    The Microsoft Outlook Social Connector (OSC) provides a communication hub for personal and professional communications. Just by selecting an Outlook item such as an email or meeting request and clicking the sender or a recipient of that item, users can see, in the People Pane, activities, photos, and status updates for the person on their favorite social networks.

    The OSC obtains social network data by calling an OSC provider, which behaves like a translation layer between Outlook and the social network. The OSC provider model is open, and you can develop a custom OSC provider by implementing the required OSC provider extensibility interfaces. To retrieve social network data, the OSC makes calls to the OSC provider through these interface members. The OSC provider communicates with the social network and returns the social network data to the OSC as a string or as XML that conforms to the Outlook Social Connector XML schema. Figure 1 shows the various components of the sample OfficeTalk OSC provider reviewed in this Visual How To.

    Figure 1. Relationships of the sample OfficeTalk OSC provider with related components

    Relationship of sample provider with components

    This Visual How To shows the procedures to create a custom OSC provider for OfficeTalk. OfficeTalk is not publicly available and is being used as an example of the kind of social network you might want to develop a custom OSC provider for. You can use the procedures for creating the OSC provider for OfficeTalk to create a custom OSC provider for any social network.

    The OfficeTalk provider uses the Outlook Social Connector Provider Proxy Library to simplify the implementation of the OSC provider extensibility interfaces. The OSC Provider Proxy Library implements all of the OSC provider extensibility interface members. These interface members, in turn, call a consolidated set of abstract and virtual methods that provide the social network data that the OSC requires. To create a custom OSC provider that uses the OSC Provider Proxy Library, a developer overrides these abstract and virtual methods with the business logic to communicate with the social network.

    Code It

    The sample solution for this article includes all of the code for a custom OSC provider for OfficeTalk. However, this Visual How To does not show all of the code in the sample solution. Instead, it focuses on creating a custom OSC provider by using the OSC Provider Proxy Library.

    The sample solution contains two projects:

    • OSCProvider—This project is an unmodified version of the OSC Provider Proxy Library that is used to simplify the creation of the OfficeTalk OSC provider.
    • OfficeTalkOSCProvider—This project includes the source code files that are specific to the OfficeTalk OSC provider.

    The OfficeTalkOSCProvider project includes the following source code files:

    • OfficeTalkHelper—This class contains helper methods that are used throughout the sample solution.
    • OTProvider—This is a partial class that contains the OSC Provider Proxy Library override methods that return information about the OSC provider, information about the social network, and information for the current user.
    • OTProvider_Activities—This is a partial class that contains the OSC Provider Proxy Library override methods that return activity information.
    • OTProvider_Friends—This is a partial class that contains the OSC Provider Proxy Library override methods that return friends information.

    Creating the OfficeTalk OSC Provider Solution

    The following sections show the procedures to create the OfficeTalk OSC provider sample solution, and add OSC Provider Proxy Library override methods to return information about the OSC provider, the social network, and the current user.

    You must create the OSC provider as a class library. For this Visual How To, the solution was created with a name of OfficeTalkOSCProvider.

    Adding the OSC Provider Proxy Library Project

    You must download the Outlook Social Connector Provider Proxy Library from MSDN Code Gallery, and then extract it to the local computer.

    To add the OSC Provider Proxy Library to the OfficeTalkOSCProvider solution

    1. Copy the OSCProvider project to the OfficeTalkOSCProvider directory.
    2. On the File menu in Visual Studio 2010, point to Add, and then click Existing Project.
    3. Select the OSCProvider.csproj project that you copied in Step 1.

    Adding References

    Add the following references to the OfficeTalkOSCProvider:

    • Outlook Social Provider COM component. The name in the COM tab is Microsoft Outlook Social Provider Extensibility. If there are multiple versions, select TypeLib Version 1.1.
    • System.Drawing

    Adding Social Network Specific References and Files

    Add other appropriate references and files for the social network. The sample solution does not include the OfficeTalk API assembly. To support the social network for which you are developing an OSC provider, replace the OfficeTalk API references and files with the references and files that are specific to your social network.

    The sample solution for OfficeTalk contains the following references and files:

    • The OfficeTalk API assembly.
    • The OfficeTalk icon file.

    Creating a Subclass of the OSC Provider Proxy Library OSCProvider

    Use the OSC Provider Proxy Library to create a subclass of the OSCProvider class, OTProvider, which represents the sample OSC provider. Add a class named OTProvider to the OfficeTalkOSCProvider project. OTProvider is defined as a partial class so that logic for OSC provider core methods, friends, and activities can be defined in separate source code files.

    Replace the class definition with the code in the following section. The code example starts with the using statements for the OSC Provider Proxy Library and OfficeTalk API. The OTProvider partial class then inherits from the OSCProvider class. Note that the OTProvider class has the ComVisible attribute so that the Outlook Social Connector can call it.

    using System;
    using System.Globalization;
    using System.Collections.Generic;
    using System.IO;
    using System.Reflection;
    using System.Drawing;
    using System.Drawing.Imaging;
    
    // Using statements for the OSC Provider Proxy Library.
    using OSCProvider;
    using OSCProvider.Schema;
    
    // Using statements for the social network.
    using OfficeTalkAPI;
    
    namespace OfficeTalkOSCProvider
    {
        // SubClass of the OSC Provider Proxy Library OSCProvider
        // used to create a custom OSC provider.
        [System.Runtime.InteropServices.ComVisible(true)]
        public partial class OTProvider : OSCProvider.OSCProvider
        {
        ...
    
    

    After the OTProvider class is defined, add the following code for constants used throughout the OfficeTalkOSCProvider solution.

    // Constants for the OfficeTalk OSC provider.
    internal static string NETWORK_NAME = @"OfficeTalk";
    internal static string NETWORK_GUID = @"YourNetworkGuid";
    internal static string API_VERSION = @"YourApiVersion";
    internal static string API_URL = @"YourApiUrl";
    internal static OSCProvider.ProviderSchemaVersion SCHEMA_VERSION =
        ProviderSchemaVersion.v1_1;
    
    

    Allowing for Debugging

    To debug the OfficeTalkOSCProvider, you must modify the OfficeTalkOSCProvider project to start using Outlook and register the OfficeTalkOSCProvider as an Outlook Social Connector.

    To set up the OfficeTalkOSCProvider project for debugging

    1. Right-click the OfficeTalkOSCProvider project, and then click Properties.
    2. Select the Debug tab.
    3. Under Start Action, select Start External Program.
    4. Specify the full path to the version of Outlook that is installed on your computer. The default path for 32-bit Outlook on 32-bit Windows is C:\Program Files\Microsoft Office\Office14\OUTLOOK.EXE.

    The Outlook Social Connector will not call the OfficeTalkOSCProvider until it is registered as an OSC provider. The sample solution includes a file named RegisterProvider.reg that updates the registry with the entries that are required to register the OfficeTalkOSCProvider as an OSC provider. You can update the registry by running the RegisterProvider.reg file from Windows Explorer.

    The RegisterProvider.reg file assumes that the sample solution is located in the C:\temp directory. If the sample solution is located in a different directory, update the CodeBase entry in the RegisterProvider.reg file to point to the correct location.

    Adding Helper Methods

    The OfficeTalkHelper class contains helper methods, including the GetOfficeTalkClient and ConvertUserToPerson methods, that are used throughout the sample solution.

    The following GetOfficeTalkClient method returns an OfficeTalkClient object that is used to communicate with OfficeTalk. If the OfficeTalkClient has not been initialized, GetOfficeTalkClient creates and configures a new OfficeTalkClient by using the API_URL and API_VERSION constants that are defined in OTProvider.

    // Returns a reference to the OfficeTalk client.
    private static OfficeTalkClient officeTalkClient = null;
    internal static OfficeTalkClient GetOfficeTalkClient()
    {
        if (officeTalkClient == null)
        {
            officeTalkClient =
              new OfficeTalkClient(OTProvider.API_URL);
            OfficeTalkClient.UserAgent =
              @"OfficeTalkOSC/" + OTProvider.API_VERSION;
        }
        return officeTalkClient;
    }
    
    

    The ConvertUserToPerson method converts an OfficeTalk User object to an OSC Provider Proxy Library Person object that is usable within the OSC Provider Proxy Library. The ConvertUserToPerson method creates a new OSC Provider Proxy Library Person and then maps the User properties to the related Person properties.

    // Converts an Office Talk User to an OSC Provider Proxy Library Person.
    internal static Person ConvertUserToPerson(OfficeTalkAPI.OTUser user)
    {
        // Create the OSC Provider Proxy Library Person.
        Person person = new Person();
    
        // Map the User properties to the Person properties.
        person.FullName = user.name;
        person.Email = user.email;
        person.Company = user.department;
        person.UserID = user.id.ToString(CultureInfo.InvariantCulture);
        person.Title = user.title;
        person.CreationTime = user.created_atAsDateTime;
    
        // FriendStatus is based on whether the user is being followed 
        // by the currently logged-on user.
        person.FriendStatus = 
            user.following ? FriendStatus.friend : FriendStatus.notfriend;
    
        // Set the PictureUrl if a profile picture is loaded in OfficeTalk.
        if (user.image_url != null)
        {
            person.PictureUrl = new Uri(OTProvider.API_URL + user.image_url);
        }
    
        // WebProfilePage is set to the user's home page in OfficeTalk.
        person.WebProfilePage = 
            OTProvider.API_URL + @"/Home/index/" + user.alias + "#User";
    
        return person;
    }
    
    

    Overriding the GetProviderData Method

    The OSC ISocialProvider interface contains members that return information about the OSC provider. This includes the capabilities of the social network, how to communicate with the social network, and general information about the social network. The OSC Provider Proxy Library provides the GetProviderData abstract method, which you can override to return OSC provider information. The GetProviderData abstract method returns the OSC Provider Proxy Library ProviderData object, which encapsulates the provider information.

    The following section of the GetProviderData override method initializes a ProviderData object and sets the properties for the OfficeTalk provider.

    // The ProviderData contains information about the social network and is 
    // used by the OSC ISocialProvider members to return information.
    ProviderData providerData = new ProviderData();
    
    // Friendly name of the social network to display in Outlook.
    providerData.NetworkName = NETWORK_NAME;
    
    // GUID that represents the social network.
    // This GUID should not change between versions.
    providerData.NetworkGuid = new Guid(NETWORK_GUID);
    
    // Version of the social network provider.
    providerData.Version = API_VERSION;
    
    // Array of URLs that the social network provider uses.
    // The default URL should be the first item in the array.
    providerData.Urls = new string[] { API_URL };
    
    // The icon of the social network to display in Outlook.
    Byte[] icon = null;
    Assembly assembly = Assembly.GetExecutingAssembly();
    using (Stream imageStream =
        assembly.GetManifestResourceStream("OfficeTalkOSCProvider.OTIcon16.bmp"))
    {
        using (MemoryStream memoryStream = new MemoryStream())
        {
            using (Image socialNetworkIcon = Image.FromStream(imageStream))
            {
                socialNetworkIcon.Save(memoryStream, ImageFormat.Bmp);
                icon = memoryStream.ToArray();
            }
        }
    }
    providerData.Icon = icon;
    
    

    The following section of the GetProviderData override method uses the Proxy Library Capabilities class to identify the capabilities and requirements for the OfficeTalk OSC provider. The Capabilities class defines capabilities by setting the CapabilityFlags property. The CapabilityFlags property uses a bitmask and is set by using the bitwise OR operator to combine the constants that the OSC Provider Proxy Library defines for each capability.

    // Define the capabilities for the provider.
    // The Capabilities object will generate the appropriate XML string.
    Capabilities capabilities = new Capabilities(SCHEMA_VERSION);
    capabilities.CapabilityFlags =
        // OSC should call the GetAutoConfiguredSession method to get a 
        // configured session for the user.
        Capabilities.CAP_SUPPORTSAUTOCONFIGURE |
    
        // OSC should hide all links in the Account configuration dialog box.
        Capabilities.CAP_HIDEHYPERLINKS |
        Capabilities.CAP_HIDEREMEMBERMYPASSWORD |
    
        // The following activity settings identify that Activities uses
        // hybrid synchronization.
        // OSC will store activities for friends in a hidden folder and 
        // activities for non-friends in memory.
        Capabilities.CAP_GETACTIVITIES |
        Capabilities.CAP_DYNAMICACTIVITIESLOOKUP |
        Capabilities.CAP_DYNAMICACTIVITIESLOOKUPEX |
        Capabilities.CAP_CACHEACTIVITIES |
    
        // The following Friends settings identify that friend information
        // uses hybrid synchronization.
        // OSC will call the GetPeopleDetails method every time the People Pane 
        // is refreshed to ensure the latest user information is displayed.
        Capabilities.CAP_GETFRIENDS |
        Capabilities.CAP_DYNAMICCONTACTSLOOKUP |
        Capabilities.CAP_CACHEFRIENDS |
    
        // The following Friends settings identify that OfficeTalk supports
        // the FollowPerson and UnFollowPerson calls.
        Capabilities.CAP_DONOTFOLLOWPERSON |
        Capabilities.CAP_FOLLOWPERSON;
    
    // Set the email HashFunction.
    // Setting the EmailHashFunction is required if CAP_DYNAMICCONTACTSLOOKUP
    // or CAP_DYNAMICACTIVITIESLOOKUPEX are set.
    capabilities.EmailHashFunction = HashFunction.SHA1;
    
    // Set the capabilities property on the providerData object.
    providerData.ProviderCapabilities = capabilities;
    
    

    The capabilities and requirements defined in the preceding code example are specific to OfficeTalk. A custom OSC provider that is developed for a different social network must define a set of capabilities and requirements that are specific to that social network.

    The following list shows the CapabilityFlag constants that are available in the OSC Provider Proxy Library Capabilities class.

    CAP_SUPPORTSAUTOCONFIGURE
    The provider supports calling the ISocialProvider.GetAutoConfiguredSession method to attempt automatic configuration of the network for the user.
    CAP_GETFRIENDS
    The provider supports the ISocialPerson.GetFriendsAndColleagues or ISocialSession2.GetPeopleDetails method. The OSC uses the CAP_CACHEFRIENDS and CAP_DYNAMICCONTACTSLOOKUP settings to determine whether friends are stored as Outlook contact items or are stored in memory.
    CAP_CACHEFRIENDS
    The provider supports storing friends as Outlook contact items in a social-network-specific contacts folder.
    CAP_DYNAMICCONTACTSLOOKUP
    The provider supports the ISocialSession2.GetPeopleDetails method for on-demand synchronization of friends and non-friends. If CAP_DYNAMICCONTACTSLOOKUP is set, the OSC calls the ISocialSession2.GetPeopleDetails method every time the People Pane is refreshed.
    CAP_SHOWONDEMANDCONTACTSWHENMINIMIZED
    Indicates that the OSC should carry out on-demand synchronization for friends and non-friends when the People Pane is minimized.
    CAP_FOLLOWPERSON
    The provider supports the ISocialSession.FollowPerson method for adding the person as a friend on the social network.
    CAP_DONOTFOLLOWPERSON
    The provider supports the ISocialSession.UnFollowPerson method for removing the person as a friend on the social network.
    CAP_GETACTIVITIES
    The provider supports the ISocialPerson.GetActivities or ISocialSession2.GetActivitiesEx method. The OSC uses the CAP_CACHEACTIVITIES and CAP_DYNAMICACTIVITIESLOOKUPEX settings to determine whether activities are stored as Outlook RSS items or are stored in memory.
    CAP_CACHEACTIVITIES
    The provider supports storing activities as Outlook RSS items in a hidden News Feed folder. To support cached synchronization of activities CAP_CACHEACTIVITIES should be set and CAP_DYNAMICACTIVITIESLOOKUPEX should not be set. With cached synchronization of activities, the OSC stores all activities as Outlook RSS items in a hidden News Feed folder. To support hybrid synchronization of activities, both CAP_CACHEACTIVITIES and CAP_DYNAMICACTIVITIESLOOKUPEX should be set. With hybrid synchronization of activities, the OSC stores activities for friends as Outlook RSS items in a hidden News Feed folder and caches activities for non-friends in memory. To support on-demand synchronization of activities, CAP_CACHEACTIVITIES should not be set and CAP_DYNAMICACTIVITIESLOOKUPEX should be set. With on-demand synchronization of activities, the OSC caches all activities in memory.
    CAP_DYNAMICACTIVITIESLOOKUP
    Deprecated in OSC 1.1. Use the CAP_DYNAMICACTIVITIESLOOKUPEX setting instead.
    CAP_DYNAMICACTIVITIESLOOKUPEX
    The provider supports the ISocialSession2.GetActivitiesEx method for on-demand or hybrid synchronization of activities. To support on-demand synchronization of activities, CAP_DYNAMICACTIVITIESLOOKUPEX should be set and CAP_CACHEACTIVITIES should not be set. With on-demand synchronization of activities, the OSC calls ISocialSession2.GetActivitiesEx every time the People Pane is refreshed. To support hybrid synchronization of activities, both CAP_DYNAMICACTIVITIESLOOKUPEX and CAP_CACHEACTIVITIES should be set. With hybrid synchronization of activities, the OSC calls ISocialSession2.GetActivitiesEx every 30 minutes to refresh activities information. When CAP_DYNAMICACTIVITIESLOOKUPEX is not set, the OSC does not call ISocialSession2.GetActivitiesEx.
    CAP_SHOWONDEMANDACTIVITIESWHENMINIMIZED
    Indicates that the OSC should carry out on-demand synchronization for activities when the People Pane is minimized.
    CAP_DISPLAYURL
    Indicates that the OSC should display the network URL in the account configuration dialog box.
    CAP_HIDEHYPERLINKS
    Indicates that the OSC should hide the “Click here to create an account” and the “Forgot your password?” hyperlinks in the account configuration dialog box.
    CAP_HIDEREMEMBERMYPASSWORD
    Indicates that the OSC should hide the Remember my password check box in the account configuration dialog box.
    CAP_USELOGONWEBAUTH
    Indicates that the OSC should use forms-based authentication. When CAP_USELOGONWEBAUTH is set, the OSC uses forms-based authentication and calls the ISocialSession.LogonWeb method. When CAP_USELOGONWEBAUTH is not set, the OSC uses basic authentication and calls the ISocialSession.Logon method.
    CAP_USELOGONCACHED
    The provider supports the ISocialSession2.LogonCached method to log on with cached credentials. When CAP_USELOGONCACHED is set, the OSC ignores the CAP_USELOGONWEBAUTH setting and calls ISocialSession2.LogonCached for authentication.

    Overriding the GetMe Method

    Many of the OSC interface members and OSC Provider Proxy Library override methods require information about the current user. The OSC Provider Proxy Library provides the GetMe abstract method, which you can override to return information about the current user from the social network. The GetMe abstract method returns a Person object, which contains all social network data for the current user.

    The GetMe override method shown in the following example gets an OfficeTalkClient object to communicate with OfficeTalk. The GetMe override method then calls the OfficeTalk GetUser method by using the user name that is used to log on to Windows. After obtaining the OfficeTalk User, the GetMe override method calls the OfficeTalkHelper ConvertUserToPerson method to convert the OfficeTalk User to a Person that can be used within the OSC Provider Proxy Library.

    After the conversion is complete, the GetMe override method sets the Person.UserName property, which supports the OSC ISocialSession.LoggedOnUserName interface member. The Person.UserName property is set only by the GetMe override method, when it returns information about the current user.

    // OSC Proxy Library override method used to return information 
    // for the current user.
    public override Person GetMe()
    {
        // Get a reference to the OfficeTalk client.
        OfficeTalkClient officeTalkClient =
            OfficeTalkHelper.GetOfficeTalkClient();
    
        // Look up the user based on credentials used to log on to Windows.
        OTUser user =
            officeTalkClient.GetUser(System.Environment.UserName, Format.JSON);
    
        // Convert the OfficeTalk User to an OSC Provider Proxy Person.
        Person p = OfficeTalkHelper.ConvertUserToPerson(user);
    
        // Set the UserName property.
        // This is used only by the Person that the GetMe method returns to
        // support the OSC ISocialSession.LoggedOnUserName property.
        p.UserName = System.Environment.UserName;
    
        return p;
    }
    
    

    Overriding OSC Provider Proxy Library Friends Methods

    A custom OSC provider that uses the OSC Provider Proxy Library must override the abstract and virtual methods for returning friends social network data. In the sample solution, the overrides for these OTProvider methods are located in the OTProvider_Friends source file.

    The abstract and virtual methods for friends are as follows:

    • GetPeopleDetails—Returns detailed user information for the email addresses that are passed into the method.
    • GetFriends—Returns a list of friends for the current user.
    • FollowPersonEx—Adds the person who is identified by the email address as a friend on the social network.
    • UnFollowPerson—Removes the person who is identified by the user ID as a friend on the social network.

    Reviewing these methods is outside of the scope of this Visual How To. For more information about returning friends social network data, see Part 2: Getting Friends Information by Using the Proxy Library for Outlook Social Connector Provider Extensibility.
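    For orientation only, a hypothetical GetFriends override could follow the same pattern as the GetMe override shown earlier. The signature, the List<Person> return type, and the GetFollowing call on the OfficeTalk client below are assumptions made for this sketch; the sample's actual implementation lives in the OTProvider_Friends source file.

    // Hypothetical sketch of a friends override; not the sample's actual code.
    public override List<Person> GetFriends()
    {
        // Get a reference to the OfficeTalk client.
        OfficeTalkClient officeTalkClient = OfficeTalkHelper.GetOfficeTalkClient();
    
        List<Person> friends = new List<Person>();
    
        // Assumed OfficeTalk API call returning the users the current user follows.
        foreach (OTUser user in
            officeTalkClient.GetFollowing(System.Environment.UserName, Format.JSON))
        {
            friends.Add(OfficeTalkHelper.ConvertUserToPerson(user));
        }
    
        return friends;
    }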

    Overriding OSC Provider Proxy Library Activity Methods

    A custom OSC provider that uses the OSC Provider Proxy Library must override the abstract and virtual methods for returning activity social network data. In the sample solution, the overrides for these OTProvider methods are located in the OTProvider_Activities source file.

    There is only one method to override for activities:

    • GetActivities—Returns activities for all users who are identified by the email addresses that are passed into the method.

    Covering these methods in detail is outside of the scope of this Visual How To. For more information about returning activities social network data, see Part 3: Getting Activities Information by Using the Proxy Library for Outlook Social Connector Provider Extensibility Visual How To.

    Read It

    Creating a custom Outlook Social Connector (OSC) provider for a social network is a straightforward process of implementing the OSC Provider extensibility interfaces to return social network data.

    The OSC Provider Proxy Library simplifies this process by removing the requirement to implement each individual interface member. Instead the OSC Provider Proxy Library defines a consolidated set of abstract and virtual methods to provide social network data. The developer of the OSC provider can focus on overriding these methods with the business logic required to interface with the social network API.

    The sample solution for this article includes all of the code required for a custom OSC provider for OfficeTalk. This Visual How To does not cover all of the code in the sample solution. This Visual How To focuses on creating a custom OSC provider solution, and returning information about the OSC provider, the social network capabilities, and the current user. The social network data that the OfficeTalk provider returns is shown in Figure 2.

    Figure 2. OSC showing OfficeTalk social network data in the People Pane

    OfficeTalk social network data in the People Pane

    For more information about returning friends social network data, see Part 2: Getting Friends Information by Using the Proxy Library for Outlook Social Connector Provider Extensibility.

    For more information about returning activities social network data, see Part 3: Getting Activities Information by Using the Proxy Library for Outlook Social Connector Provider Extensibility.

    Thoughts on : Customizing the Public Website of Office 365


     

    Recently, I attempted a migration from my ASP.NET-based Azure website to Office 365. The reason was that I wanted to use SharePoint 2013 for in-page editing and simply let the platform take care of all my business needs.

    After a few days, I reverted to the Azure web host, as I am not satisfied that the service will fulfill my requirements. Here is a recollection of my experiences with the shortcomings of the platform and the points that should be addressed.

    Master page editing in the public Office 365 site is not much different from the rest of Office 365 and SharePoint 2013. You have access to the Design Manager and you can open the site with SharePoint Designer.


    On the up-side, you can create master pages, create page layouts and add Rich Text areas using the “Multi-Area Page” that allows up to four separate rich text areas. I managed to get the site to look virtually the same when published.

    On the down-side, the page contained all the scripts and CSS styles from standard SharePoint, which caused the responsive design to break for tablets and phones. I could probably have fixed some of the issues, but the difference in page weight and load time is as follows:

                                                 Azure .NET    Office 365
    Total page weight                            305.2K        727K
    Total non-cached file size                   7.2K          54K
    Total number of script files                 7             12
    Average page load time during load test      1.67 s        3.46 s

    I then amended the blog layout. The comments feature from blogs in standard SharePoint is not available; the public site uses Facebook comments instead, which I replaced with a Disqus control. Later on, I started running into several issues when trying to add features.

    Issue #1: You cannot define your own content types

    The site administration does not contain a link to allow modification of content types or site fields. Trying to navigate to the URL manually presents you with a 403 error. Adding custom content types for your page layouts seems like a simple request. I then tried to inject these using sandboxed solutions.

    Issue # 2: Sandboxed solutions are not supported

    Yes, this link is also gone. You cannot navigate to “Solutions” but you can manually enter the URL. I found a helpful and informative post by Jason Cribbet on the topic and was able to activate my feature. This is, however, not supported by Microsoft and I am now in “not supported” land with my website.

    Issue # 3: You cannot create subsites

    I was fairly happy until I started to create more content and restricted areas. There is no way to create subsites using the interface. You need to use SharePoint Designer. Again, this is not supported by Microsoft.

    Issue # 4: You cannot control feature activation

    Yes, features cannot be changed either. This means that you cannot add or remove any functionality outside of apps on the site.

    Issue # 5: What is going on with the blog framework and managed navigation?

    I could live with the “hacks” and continued to style the blog area. This, in itself, has a number of very strange issues:

    • If you remove the “Blog Tools” web part from the page then the links to blog posts will not work.
    • The pages do not seem to handle changing page layouts. I first had to change the page layout, then disconnect the page from the layout in SharePoint Designer.
    • Managed navigation allows you to use the blog as "/Blog/Post/1/My-Blog-Title", "/Blog/Date/2013/", and so on. The page configuration, however, cannot be changed. If you rename a page, the entire navigation framework will stop working. Just don’t.
    • The blog and blog category lists can still be accessed using the forms URL at "/Lists/Posts/AllItems.aspx", and you cannot change the anonymous access behavior. Because you cannot change features, the lockdown feature is out of bounds. I guess you could inject redirects on the pages or try to use PowerShell to reactivate the forms lockdown page feature, but I did not attempt this.

    Issue # 6: You cannot recreate the site

    So finally, you have hacked this puppy to pieces. You want to recreate the site, so you go into SharePoint administration for Office 365 and delete the site collection. But wait… there is no option to recreate the site. This rectified itself on my test tenant after 24 hours and allowed me to create the public site. It did not, however, fully recreate. Now the site has no web template applied, and I get the error message “Sorry, something went wrong: There is no site in the current site subscription matching the HiddenSiteSelection control’s value.”

    Summary

    Office 365 has a long way to go before it can offer any kind of enterprise solution for the public web. In a sense, it seems that they are just about there but have intentionally limited the platform to support basic usage only. But if that is the case, why allow SharePoint Designer and Design Manager access at all?

    I hope that the public website will be improved in upcoming releases and would really like to run my site and blog using SharePoint technology.

    When should I choose to create a mail app versus an add-in for Outlook?


    Some of you may or may not be aware that, alongside the legacy COM-based Office client object models, Office 2013 supports a new apps for Office developer platform. This blog post is intended to help new and existing Office developers understand the main differences between the COM-based object models and the apps for Office platform. In particular, this post focuses on Outlook, suggests why you should consider developing solutions as mail apps, and identifies those exceptional scenarios where add-ins may still be the more appropriate choice.

    Contents:

    An introduction to the apps for Office platform

    Architectural differences between add-in model and apps for Office platform

    Main features available to mail apps

    Major objects for mail apps

    Reasons to create mail apps instead of add-ins for Outlook

    Reasons to choose add-ins

    Conclusion

    Further references

    An introduction to the apps for Office platform

    The apps for Office platform includes a JavaScript API for Office and a schema for apps for Office manifests. You can use this platform to extend web services and content into the context of rich and web clients of Office. An app for Office is a webpage that is developed using common web technologies, hosted inside an Office client application (such as Outlook) on-premises or in the cloud. Of the three types of apps for Office, the type that Outlook supports is called mail apps. While you use the legacy APIs—the object model, PIA, and MAPI—to automate Outlook at an application level, you can use the JavaScript API for Office in a mail app to interact at an item level with the content and properties of an email message, meeting request, or appointment. You can publish mail apps in the Office Store or in an internal Exchange catalog. End users and administrators can install mail apps for an Exchange 2013 mailbox, and use mail apps in the Outlook rich client as well as Outlook Web App. As a developer, you can choose to make your mail app available for end users on only the desktop, or also on the tablet or smart phone. You can find more information about the apps for Office platform by starting here: Overview of apps for Office.

    Architectural differences between add-in model and apps for Office platform

    Add-in model

    The Office add-in model offers individual object models for most of the Office rich clients. Each object model is intended to automate the corresponding Office client, and allows an add-in to integrate closely with the behavior of that client. The same add-in can integrate with one or multiple Office applications, such as Outlook, Word, and Excel, by calling into each of the Outlook, Word, and Excel object models. Figure 1 describes a few examples of 1:1 relationships between an Office rich client and its object model.

    Figure 1. The legacy Office development architecture is composed of individual client object models.

     

    Apps for Office platform

    The apps for Office platform includes an apps for Office schema. Using this schema, each app specifies a manifest that describes the permissions it requests, its requirements for its host applications (for example, a mail app requires the host to support the mailbox capability), its support for the default and any extra locales, display details for one or more form factors, and activation rules for a mail app to be available in the app bar.

    In addition to the schema, the apps for Office platform includes the JavaScript API for Office. This API spans across all supporting Office clients and allows apps to move toward a single code base. Rather than automating or extending a particular Office client at the application level, the apps for Office platform allows apps to connect to services and extend them into the context of a document, message, or appointment item in a rich or web client. Figure 2 shows Office applications with their rich and web clients sharing a common app platform.

    Figure 2. The apps for Office development architecture is composed of a common platform and individual object models.

     

    One main difference of note is that the object models were designed to integrate tightly with the corresponding Office client applications. However, this tight integration has a side effect of requiring an add-in to run in the same process as the rich client. The reliability and performance of an add-in often affects the perceived performance of the rich client. Unlike client add-ins, an app for Office doesn’t integrate as tightly with the host application, does not share the same process as the rich client, and instead runs in its own isolated runtime environment. This environment offers a privacy and permission model that allows users and IT administrators to monitor their ecosystem of apps and enjoy enhanced security.

    Main features available to mail apps

    Contextual activation: Mail app activation is contextual, based on the app’s activation rules and current circumstances, including the item that is currently displayed in the Reading Pane or inspector. A mail app is activated and becomes available to end users when such circumstances satisfy the activation rules in the app manifest.

    Matching known entities or regular expression: A mail app can specify certain entities (such as a phone number or address) or regular expressions in its activation rules. If a match for entities or regular expressions occurs in the item’s subject or body, the mail app can access the match for further processing.

    Roaming settings: A mail app can save data that is specific to Outlook and the user’s Exchange mailbox for access in a subsequent Outlook session.

    Accessing item properties: A mail app can access built-in properties of the current item, such as the sender, recipients, and subject of a message, or the location, start, end, organizer, and attendees of a meeting request.

    Creating item-level custom properties: A mail app can save item-specific data in the user’s Exchange mailbox for access in a subsequent Outlook session.

    Accessing user profile: A mail app can access the display name, email address, and time zone in the user’s profile.

    Authentication by identity tokens: A mail app can authenticate a user by using a token that identifies the user’s email account on an Exchange Server.

    Using Exchange Web Services: A mail app can perform more complex operations or get further data about an item through Exchange Web Services.

    Permissions model and governance: Mail apps support a three-tier permissions model. This model provides the basis for privacy and security for end users of mail apps.

    Major objects for mail apps

    For mail apps, you can look at the JavaScript API for Office object model in three layers:

    1. In the first layer, there are a few objects shared by all three types of apps for Office: Office, Context, and AsyncResult.
    2. The second layer in the API that is applicable and specific to mail apps includes the Mailbox, Item, and UserProfile objects, which support accessing information about the user and the item currently selected in the user’s mailbox.
    3. The third layer describes the data-level support for mail apps:
      1. There are CustomProperties and RoamingSettings that support persisting properties set up by the mail app for the selected item and for the user’s mailbox, respectively.
      2. There are the supported item objects, Appointment and Message, that inherit from Item, and the MeetingRequest object that inherits from Message. These objects represent the types of Outlook items that support mail apps: calendar items of appointments and meetings, and message items such as email messages, meeting requests, responses, and cancellations.
      3. Then there are the item-level properties (such as Appointment.subject) as well as objects and properties that support certain known Entities objects (for example Contact, MeetingSuggestion, PhoneNumber, and TaskSuggestion).

    Figure 3 shows the major objects: Mailbox, Item, UserProfile, Appointment, Message, Entities, and their members.

    Figure 3. Major objects and their members used by mail apps in the JavaScript API for Office.

    Figure 4 shows all of the objects and enumerations in the JavaScript API for Office that pertain to mail apps.

    Figure 4. All objects for mail apps in the JavaScript API for Office.

    Figure 5 is a thumbnail of a diagram with all the objects and members that mail apps use. Zoom into the diagram at http://go.microsoft.com/fwlink/?LinkId=317271.

    Figure 5. All objects and members used by mail apps in the JavaScript API for Office.

    The following are common reasons why mail apps are a better choice for developers than add-ins:

    • You can use your existing knowledge of, and benefit from, web technologies such as HTML, JavaScript, and CSS. For power users and new developers, XML, HTML, and JavaScript require a less significant ramp-up time than COM-based APIs such as the Outlook object model.
    • You can use a simple web deployment model to update your mail app (including the web services that the app uses) on your web server without any complex installation on the Outlook client. In fact, any updates to the mail app, with the exception of the app manifest, do not require any updating on the Office client. You can update the code or user interface of the mail app conveniently just on the web server. This presents a significant advantage over the administrative overhead involved in updating add-ins.
    • You can use a common web development platform for mail apps that can roam across the Outlook rich client and Outlook Web App on the desktop, tablet, and smartphone. On the other hand, add-ins use the object model for the Outlook rich client and, hence, can run on only that rich client on a desktop form factor.
    • You can enjoy rapid turnaround of building and releasing apps via the Office Store.
    • Because of the three-tier permissions model, users and administrators can appreciate better security and privacy in mail apps than add-ins, which have full access to the content of each account in the user’s profile. This, in turn, encourages user consumption of apps.
    • Depending on your scenarios, there are features unique to mail apps that you can take advantage of and that are not supported by add-ins:
      • You can specify a mail app to activate only for certain contexts (for example, Outlook displays the app in the app bar only if the message class of the user-selected appointment is IPM.Appointment.Contoso, or if the body of an email contains a package tracking number or a customer identifier).
      • You can activate a mail app if the selected message contains some known entities, such as an address, contact, email address, meeting suggestion, or task suggestion.
      • You can take advantage of authentication by identity tokens and of Exchange Web Services.

    Reasons to choose add-ins

    The following features are unique to add-ins and may make them a more appropriate choice than mail apps in some circumstances:

    • You can use add-ins to extend or automate Outlook at an application-level, because the object model and PIA have extensive integration with Outlook features (such as all Outlook item types, user interface, sessions, and rules). At the item-level, add-ins can interact with an item in read or compose mode. With mail apps, you cannot automate Outlook at the application level, and you can extend Outlook’s functionality in the context of only the read-mode of the supported items (messages and appointments) in the user’s mailbox.
    • You can specify custom business logic for a new item type.
    • You can modify and add custom commands in the ribbon and Backstage view.
    • You can display a custom form page or form region.
    • You can detect events such as sending an item or modifying properties of an item.
    • You can use add-ins on Outlook 2013 and Exchange Server 2013, as well as earlier versions of Outlook and Exchange. On the other hand, mail apps work with Outlook and Exchange starting in Outlook 2013 and Exchange Server 2013, but not earlier versions.

    Conclusion

    When you are considering creating a solution for Outlook, first verify whether the supported major features and objects of the apps for Office platform meet your needs. Develop your solution as a mail app, if possible, to take advantage of the platform’s support across Outlook clients over the desktop, tablet, and smartphone form factors. Note that there are still some circumstances where add-ins are more appropriate, and you should prioritize the goals of your solution before making a decision.

    Further references

    Apps for Office and mail apps

    Using SharePoint FAST to unlock SAP data and make it accessible to your entire business

    An important new mantra is search-driven applications. In fact, “search” is the new way of navigating through your information. In many organizations an important part of the business data is stored in SAP business suites.
    A frequently expressed need is to navigate through the business data stored in SAP via a user-friendly and intuitive application context.
    For many organizations (78% according to Microsoft numbers), SharePoint is the basis for the integrated employee environment. Starting with SharePoint 2010, FAST Enterprise Search Platform (FAST ESP) is part of the SharePoint platform.
    All analyst firms assess FAST ESP as a leader in their scorecards for Enterprise Search technology. For organizations that have SAP and Microsoft SharePoint administrations in their infrastructure, the FAST search engine provides opportunities that one should not miss.

    SharePoint Search

    Search is one of the supporting pillars of SharePoint, and an extremely important one for realizing the SharePoint proposition of an information hub plus collaboration workplace. It is essential that information you put into SharePoint is easy to find again.

    By yourself, of course, but especially by your colleagues. However, from the context of a ‘central information hub’, more is needed. You must also be able to find and review, via the SharePoint workplace, the data that is administered outside SharePoint. Examples are the business data stored in line-of-business systems [SAP, Oracle, Microsoft Dynamics], but also data stored on network shares.

    With the purchase of FAST ESP, the search power of Microsoft’s SharePoint platform increased sharply. All analyst firms consider FAST, along with competitors Autonomy and Google Search Appliance, as ‘best in class’ for enterprise search technology. For example, Gartner positioned FAST as a leader in the Magic Quadrant for Enterprise Search, just above Autonomy.

    In the SharePoint 2010 context, FAST was introduced as a standalone extension to the Enterprise Edition, parallel to SharePoint Enterprise Search. In SharePoint 2013, Microsoft has simplified the architecture: FAST and Enterprise Search are merged, and FAST is integrated into the standard Enterprise edition and license.

    SharePoint FAST Search architecture

    The logical SharePoint FAST search architecture provides two main responsibilities:

    1. Build the search index administration: in bulk, automatically index all data and information that you want to search later. Depending on the environmental context, the data sources include SharePoint itself, administrative systems (SAP, Oracle, custom), file shares, and so on.
    2. Execute search queries against the accumulated index administration, and expose the search results to the user.

    In the indexation step, SharePoint FAST must retrieve the data from each of the linked systems. FAST Search supports this via the connector framework. There are standard connectors for (web) service invocation and for database queries. It is also supported to custom-build a .NET connector for other ways of unlocking an external system, and then plug this connector into the search indexation pipeline. Examples are connecting to SAP via RFC, or ‘quick-and-dirty’ integration access into an internally built system.

    In this context of searching (or better: finding) SAP data, SharePoint FAST supports the indexation process via Business Connectivity Services, which connects to the SAP business system from the SharePoint environment and retrieves the business data. What still needs to be arranged is the runtime interoperability with the SAP landscape: authentication, authorization, and monitoring.

    An option is to build these typical plumbing aspects into a custom .NET connector. But this is not an easy matter. More significantly, it is something that end-user organizations nowadays no longer aim to do themselves, due to the development and maintenance costs involved.

    An alternative is to apply Duet Enterprise for the plumbing aspects listed. Combined with SharePoint FAST, Duet Enterprise plays a role in two ways: (1) first, upon content indexing, it provides the connectivity to the SAP system to retrieve the data. The SAP data is then available within the SharePoint environment (stored in the FAST index files), and search query execution happens outside of (without a link into) SAP. (2) Optionally, you go from the SharePoint application back to SAP if the use case requires that more detail be exposed per SAP entity selected from the search result. An example is a situation where it is absolutely necessary to show the actual status; for instance, for a product in the warehouse, how many orders have been placed?

    Security trimmed: Applying the SAP permissions on the data

    Duet Enterprise retrieves data under the SAP account of the individual SharePoint user. This ensures that, also from the SharePoint application, you can only view those SAP data entities to which you have rights according to the SAP authorization model. The retrieval of detail data is thus only allowed if you are permitted to see that data in the SAP system itself.

    Due to the FAST architecture, matters are different for search query execution. As mentioned, the SAP data has by then already been brought into the SharePoint context, so no runtime link into the SAP system is necessary to execute the query. The consequence is that Duet Enterprise is, in this context, not applied by default.

    In many cases this is fine (for instance, in the customer example described below); in other cases it is absolutely mandatory to respect the specific SAP permissions at the moment of query execution as well.

    The FAST search architecture provides support for this by enabling you to augment the indexed SAP data with the SAP authorizations as metadata. To do this, you extend the scope of the FAST indexing process with retrieval of the SAP permissions per data entity. This meta-information is used to compile an ACL per data entity. FAST query execution processes this ACL meta-information and checks, for each item in the search result, whether it is allowed to be exposed to this SharePoint [SAP] user.

    This approach of assembling the ACL information is a static snapshot of the SAP authorizations at the time the FAST indexing process is executed. In case the SAP authorizations are dynamic, this is not sufficient. For such a situation, it is required that the applicable SAP authorizations can be retrieved dynamically at the time of FAST query execution. The FAST framework offers an option to achieve this. It does require custom code, but that code is then plugged into the standard FAST processing pipeline.

    SharePoint FAST combined with Duet Enterprise thus provides standard support, and multiple options, for implementing SAP security trimming. And in the typical cases, the standard support is sufficient.


    Applied in customer situation

    The above is not only theory; we actually applied it in practice. The context was that of opening up SAP Enterprise Learning functionality for operation by employees from their familiar SharePoint-based intranet. One of the use cases is that an employee searches the course catalog for a suitable training.

    This is a striking example of a search-driven application. You want a classified list of available courses, the ability to zoom in to relevant trainings through refinement, a count per applied classification and refinement of how many trainings are available, and of course the ability to freely search the complete texts of the courses.

    In the solution, we make the SAP data available for FAST indexation via Duet Enterprise. Duet Enterprise takes care of the connectivity, single sign-on, and the feed into SharePoint BCS. From there, FAST takes over. Indexation of the exposed SAP data is done via the standard FAST index pipeline; searching and displaying the search results is done via the standard FAST query execution and display functionalities.

    In this application context, specific user authorization per SAP course element does not apply: every employee is allowed to find and review all training data. As a result, we could make do with the standard application of FAST and Duet Enterprise, without the need for additional customization.

    Conclusion

    Microsoft SharePoint Enterprise Search and FAST are both very powerful tools for making SAP business data (and other line-of-business administrations) accessible. The rich feature set of FAST ESP makes it possible to offer your employees an intuitive, search-driven user experience on top of the SAP data.

    Now available – A SharePoint XML Indexing Connector

    Most organizations have several systems holding their data. Data from these systems must be indexable and made available for search on the common Internal Search portal.

    While most of the different data silos are able to dump or export their full dataset as XML, SharePoint does not include an OOTB general purpose XML indexing connector.

    The SharePoint Server Search Connector Framework is known to be overly complex, and documentation out there about this subject is very limited.

There are basically two types of custom search connectors for SharePoint 2010 that can be implemented: the .NET Assembly Connector and the Custom Connector. More details about the differences between them can be found here. Mainly, a Custom Connector is agnostic of external content types, whereas each .NET Assembly Connector is specific to one external content type; whenever the external content type changes, the .NET Assembly Connector must be re-compiled and re-deployed. If the entity model of the external system is dynamic and large scale, a Custom Connector should be considered over the .NET Assembly Connector.

    Also, a Custom Connector provides administration user interface integration, but a .NET Assembly Connector does not.

    The XML File Indexing Connector

    The XML File Indexing Connector that is presented here is a custom search indexing connector that can be used to crawl and index XML files. In this series of posts I am going to first show you how to install, setup and configure the connector. In future posts I will go into more implementation details where we’ll look into code to see how the connector is implemented and how you can customize it to suit specific needs.

    This post is divided into the following sections:

    • Installing and deploying the connector
    • Creating a new Content Source using the connector
    • Using the Start Address of the Content Source to configure the connector
    • Automatic and dynamic generation of Crawled Properties from XML elements
    • Full Crawl vs. Incremental Crawl
    • Optimizations and considerations when crawling large XML files
    • Future plans

    Installing and deploying the connector

The package, which can be downloaded at the bottom of this post, includes the following components:

    1. model.xml: This is the BCS model file for the connector
    2. XmlFileConnector.dll: This is the DLL file of the connector
    3. The Folder XmlFileConnector: This includes the Visual Studio Solution of the connector

    Follow these steps to install the connector:

    1. Install the XmlFileConnector.dll in the Global Assembly Cache on the SharePoint application server(s)

gacutil -i "XmlFileConnector.dll"

2. Open the SharePoint 2010 Management Shell on the application server.
3. At the command prompt, type the following command to get a reference to your FAST Content SSA:

$fastContentSSA = Get-SPEnterpriseSearchServiceApplication -Identity "FAST Content SSA"

4. Add the following registry key on the application server:

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office Server\14.0\Search\Setup\ProtocolHandlers\xmldoc]

Set the value of the registry key to "OSearch14.ConnectorProtocolHandler.1"

5. Add the new Search Crawl Custom Connector (the ModelFilePath should point to the model.xml file from the download package):

New-SPEnterpriseSearchCrawlCustomConnector -SearchApplication $fastContentSSA -Protocol xmldoc -Name xmldoc -ModelFilePath "XmlFileConnector\Model.xml"

6. Restart the SharePoint Server Search 14 service. At the command prompt, run:

    net stop osearch14

    net start osearch14

7. Create a new Crawled Property Category for the XML File Connector. Open the FAST Search Server 2010 for SharePoint Management Shell and run the following command:

    New-FASTSearchMetadataCategory -Name “Custom XML Connector” -Propset “BCC9619B-BFBD-4BD6-8E51-466F9241A27A”

     Note that the Propset GUID must be the one specified above, since this GUID is hardcoded in the Connector code as the Crawled Properties Category which will receive discovered Crawled Properties.
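If you prefer to script the whole installation, the steps above roughly translate into the following PowerShell. This is only a sketch: the SSA name, file paths, and the way the registry entry is created are assumptions that you should adapt to your own farm.

# Run in an elevated SharePoint 2010 Management Shell on the application server
gacutil -i "XmlFileConnector.dll"    # gacutil ships with the Windows SDK

# Register the xmldoc protocol handler (creates the xmldoc key and sets its default value)
New-Item -Path "HKLM:\SOFTWARE\Microsoft\Office Server\14.0\Search\Setup\ProtocolHandlers" -Name "xmldoc" -Force | Out-Null
Set-Item -Path "HKLM:\SOFTWARE\Microsoft\Office Server\14.0\Search\Setup\ProtocolHandlers\xmldoc" -Value "OSearch14.ConnectorProtocolHandler.1"

# Register the connector with the FAST Content SSA (adjust the SSA name and model path)
$fastContentSSA = Get-SPEnterpriseSearchServiceApplication -Identity "FAST Content SSA"
New-SPEnterpriseSearchCrawlCustomConnector -SearchApplication $fastContentSSA -Protocol xmldoc -Name xmldoc -ModelFilePath "XmlFileConnector\Model.xml"

# Restart the search service so the new protocol handler is picked up
net stop osearch14
net start osearch14

# Then, from the FAST Search Server 2010 for SharePoint Management Shell:
New-FASTSearchMetadataCategory -Name "Custom XML Connector" -Propset "BCC9619B-BFBD-4BD6-8E51-466F9241A27A"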

    Creating a new Content Source using the XML File Connector

1. Using the Central Administration UI, on the Search Administration Page of the FAST Content SSA, click Content Sources, then New Content Source.
2. Type a name for the content source, and in Content Source Type, select Custom Repository.
3. In Type of Repository, select xmldoc.
4. In Start Addresses, type the URLs for the folders that contain the XML files you want to index. The URL should be inserted in the following format:

xmldoc://hostname/folder_path/#x=:doc:id;;urielm=url;;titleelm=title#

    The following section describes the different parts of the Start Address.

    Using the Start Address of the Content Source to configure the connector

The Start Address specified for the Content Source must be of the following format. The XML File Connector will read this Start Address and use it when crawling the XML content.

    xmldoc://hostname/folder_path/#x=:doc:id;;urielm=url;;titleelm=title#

    xmldoc

    xmldoc is the protocol corresponding to the registry key we added when installing the connector.

    //hostname/folder_path/

//hostname/folder_path/ is the full path to the folder containing the XML files to crawl.

Example: //demo2010a/c$/enwiki

#x=:doc:id;;urielm=url;;titleelm=title#

#x=:doc:id;;urielm=url;;titleelm=title# is the special part of the Start Address that is used as configuration values by the connector:

    x=:doc:id

    Defines which elements in the XML file to use as document and identifier elements. This configuration parameter is mandatory.

For example, say we have an XML file as follows:

<feed>
  <document>
    <id>Some id</id>
    <title></title>
    <url>some url</url>
    <field1>Content for field1</field1>
    <field2>Content for field2</field2>
  </document>
  <document>
    ...
  </document>
</feed>

    Here the value for the x configuration parameter would be x=:document:id

    urielm=url

urielm=url defines which element in the XML file to use as the URL. This will end up as the URL of the document used by the FS4SP processing pipeline and will go into the "url" managed property. This configuration parameter can be left out, in which case the default URL of the document will be as follows: xmldoc://id/[id value]

    titleelm=title

titleelm=title defines which element in the XML file to use as the Title. This will end up as the Title of the document, and the value of this element will go into the title managed property. This configuration parameter can be left out. If the parameter is left out, the title of the document will be set to "notitle".
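Putting the pieces together for the example XML file shown above, a complete Start Address might look like the following (the host name and folder are the same illustrative values used earlier):

xmldoc://demo2010a/c$/enwiki/#x=:document:id;;urielm=url;;titleelm=title#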

    Automatic and dynamic generation of Crawled Properties from XML elements

The XML File Connector uses advanced BCS techniques to automatically discover crawled properties from the content of the XML files.

All elements in the XML document will be created as crawled properties. This provides the ability to dynamically crawl any XML file, without the need to pre-define the properties of the entities in the BCS model file and re-deploy the model file for each change.

This is defined in the BCS model file on the XML Document entity. The TypeDescriptor element named DocumentProperties defines a list of dynamic property names and values. The property names in this list are automatically discovered by the BCS framework, and a corresponding crawled property is automatically created for each of them.

The following snippet from the BCS model file shows how this is configured (the nested TypeDescriptor elements for the property name and value are omitted here):

<TypeDescriptor Name="DocumentProperties" TypeName="XmlFileConnector.DocumentProperty[], XmlFileConnector, Version=1.0.0.0, Culture=neutral, PublicKeyToken=109e5afacbc0fbe2" IsCollection="true">
  ...
</TypeDescriptor>

In addition to the ability to discover crawled properties automatically from the XML content, the XML File Connector also creates a default property with the name "XMLContent". This property contains the raw XML of the document being processed. This enables the use of the XML content in a custom pipeline extensibility stage for further processing.
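For example, such a stage could be declared in the pipelineextensibility.xml configuration of the FAST Search farm. The snippet below is only an illustrative sketch: the executable path and the output property name are hypothetical, and you would still need to implement the stage executable yourself. The input property references the XMLContent crawled property in the Custom XML Connector category created during installation.

<PipelineExtensibility>
  <Run command="C:\CustomStages\ProcessXmlContent.exe %(input)s %(output)s">
    <Input>
      <CrawledProperty propertySet="BCC9619B-BFBD-4BD6-8E51-466F9241A27A" varType="31" propertyName="XMLContent"/>
    </Input>
    <Output>
      <CrawledProperty propertySet="BCC9619B-BFBD-4BD6-8E51-466F9241A27A" varType="31" propertyName="MyDerivedProperty"/>
    </Output>
  </Run>
</PipelineExtensibility>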

    Example

    Say that we have the following XML file to index.

<feed>
  <document>
    <id>Nobel_Charitable_Trust</id>
    <title>Wikipedia: Nobel Charitable Trust</title>
    <url>http://en.wikipedia.org/wiki/Nobel_Charitable_Trust</url>
    <body>The Nobel Charitable Trust (NCT) is a charity set up by members of the Swedish Nobel family, i.e. ...</body>
    <anchor>Michael Nobel Energy Award http://en.wikipedia.org/wiki/Nobel_Charitable_Trust#Michael_Nobel_Energy_Award</anchor>
    <anchor>References http://en.wikipedia.org/wiki/Nobel_Charitable_Trust#References</anchor>
  </document>
  <document>
    ...
  </document>
</feed>

When running the connector for the first time, we see the discovered crawled properties appear in the Custom XML Connector crawled properties category.

    Full Crawl vs. Incremental Crawl

The BCS Search Connector Framework is implemented in such a way that it keeps track of all crawled content in the crawl log database. For each search Content Source, a log of all document IDs that have been crawled is stored. This log is used when running subsequent crawls of the content source, be it either a full or an incremental crawl.

When running an incremental crawl, the BCS framework compares the list of document IDs it received from the connector against the list of IDs stored in the crawl log database. If there are any document IDs in the crawl log database that have not been received from the connector, the BCS framework assumes that these documents have been deleted and will attempt to issue deletion operations to the search system. This will cause many inconsistencies, and will make it very difficult to keep the actual dataset and the BCS crawl log in sync.

So, when running either a Full Crawl or an Incremental Crawl of the Content Source, the full dataset of the XML files must be available for traversal. If any items are missing in subsequent crawls, the SharePoint crawler will consider them subject for deletion and go ahead and delete them from the search index.

One possible workaround to tackle this limitation, and to avoid (re)generating the full dataset each time something minor changes, is to split the XML content into files with different known update frequencies, where content that is known to have a higher update rate is placed in separate input folders with separately configured Content Sources within the FAST Content SSA.

    Optimizations and considerations when crawling large XML files

When the XML File Connector starts crawling content, it loads and parses the XML files it finds one at a time. For each XML file found in the input directory, the whole XML file is read into memory and cached for all subsequent operations by the crawler, until all items found in the XML file have been submitted to the indexing subsystem. At that point the memory cache is cleared, and the next file is loaded and parsed, until all files have been processed.

For the reason just described, it is recommended not to have single large XML files, but to split the content across multiple XML files, each consisting of a number of items that is reasonable and can easily be parsed and cached in memory.
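As a rough sketch of one way to do that split with a streaming reader (the paths, chunk size, and the <feed>/<document> structure are assumptions based on the earlier example; the output folder must already exist):

# Stream a large feed.xml and rewrite it as smaller chunk files of 1000 <document> items each
$chunkSize = 1000
$reader = [System.Xml.XmlReader]::Create("C:\xmlinput\feed.xml")
$writer = $null; $count = 0; $fileNo = 0

$reader.MoveToContent() | Out-Null      # position the reader on the root <feed> element
$reader.Read() | Out-Null               # step inside the root
while (-not $reader.EOF) {
    if ($reader.NodeType -eq [System.Xml.XmlNodeType]::Element -and $reader.Name -eq "document") {
        if ($count % $chunkSize -eq 0) {
            if ($writer) { $writer.WriteEndElement(); $writer.Close() }
            $fileNo++
            $writer = [System.Xml.XmlWriter]::Create("C:\xmlinput\chunks\feed_$fileNo.xml")
            $writer.WriteStartElement("feed")
        }
        $writer.WriteNode($reader, $true)   # copies the whole <document> subtree and advances the reader
        $count++
    }
    else {
        $reader.Read() | Out-Null
    }
}
if ($writer) { $writer.WriteEndElement(); $writer.Close() }
$reader.Close()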

    Contact me at tomas.floyd to find out more about this Connector and other custom developed SharePoint and Office 365 Web Parts and Apps!!

    Using OpenXML to build Office 365 Apps (OOXML)

    If you’re building apps for Office to run in Word, you might already know that the JavaScript API for Office (Office.js) offers several formats for reading and writing document content. These are called coercion types, and they include plain text, tables, HTML, and Office Open XML (OOXML).

    So what are your options when you need to add rich content to a document, such as images, formatted tables, charts, or even just formatted text?

    You can use HTML for inserting some types of rich content, such as pictures. Depending on your scenario, there can be drawbacks to HTML coercion, such as limitations in the formatting and positioning options available to your content.

    Because Office Open XML is the language in which Word documents (such as .docx and .dotx) are written, you can insert virtually any type of content that a user can add to a Word document, with virtually any type of formatting the user can apply. Determining the Office Open XML markup you need to get it done is easier than you might think.

Note

    Office Open XML is also the language behind PowerPoint and Excel (and, as of Office 2013, Visio) documents. However, currently, you can coerce content as Office Open XML only in apps for Office created for Word. For more information about Office Open XML, including the complete language reference documentation, see Additional resources.

    To begin, take a look at some of the content types you can insert using OOXML coercion.

    Download the code sample Loading and Writing Office Open XML, which contains the Office Open XML markup and Office.js code required for inserting any of the following examples into Word.

Note

    Throughout this article, the terms content types and rich content refer to the types of rich content you can insert into a Word document.

    Figure 1. Text with direct formatting.

    Text with direct formatting applied.

    You can use direct formatting to specify exactly what the text will look like regardless of existing formatting in the user’s document.

    Figure 2. Text formatted using a style.

    Text formatted with paragraph style.

    You can use a style to automatically coordinate the look of text you insert with the user’s document.

    Figure 3. A simple image.

    Image of a logo.

    You can use the same method for inserting any Office-supported image format.

    Figure 4. An image formatted using picture styles and effects.

    Formatted image in Word 2013.

    Adding high quality formatting and effects to your images requires much less markup than you might expect.

    Figure 5. A content control.

    Text within a bound content control.

    You can use content controls with your app to add content at a specified (bound) location rather than at the selection.

    Figure 6. A text box with WordArt formatting.

    Text formatted with WordArt text effects.

    Text effects are available in Word for text inside a text box (as shown here) or for regular body text.

    Figure 7. A shape.

    An Office 2013 drawing shape in Word 2013.

    You can insert built-in or custom drawing shapes, with or without text and formatting effects.

    Figure 8. A table with direct formatting.

    A formatted table in Word 2013.

    You can include text formatting, borders, shading, cell sizing, or any table formatting you need.

    Figure 9. A table formatted using a table style.

    A formatted table in Word 2013.

    You can use built-in or custom table styles just as easily as using a paragraph style for text.

    Figure 10. A SmartArt diagram.

    A dynamic SmartArt diagram in Word 2013.

    Office 2013 offers a wide array of SmartArt diagram layouts (and you can use Office Open XML to create your own).

    Figure 11. A chart.

    A chart in Word 2013.

    You can insert Excel charts as live charts in Word documents, which also means you can use them in your app for Word.

    As you can see by the preceding examples, you can use OOXML coercion to insert essentially any type of content that a user can insert into their own document.

    There are two simple ways to get the Office Open XML markup you need. Either add your rich content to an otherwise blank Word 2013 document and then save the file in Word XML Document format or use a test app with the getSelectedDataAsync method to grab the markup. Both approaches provide essentially the same result.

Note

    An Office Open XML document is actually a compressed package of files that represent the document contents. Saving the file in the Word XML Document format gives you the entire Office Open XML package flattened into one XML file, which is also what you get when using getSelectedDataAsync to retrieve the Office Open XML markup.

    If you save the file to an XML format from Word, note that there are two options under the Save as Type list in the Save As dialog box for .xml format files. Be sure to choose Word XML Document and not the Word 2003 option.

    Download the code sample named Get, Set, and Edit Office Open XML, which you can use as a tool to retrieve and test your markup.
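If you go the test-app route, a minimal sketch of grabbing the markup for the current selection might look like the following (the function name and the console logging are illustrative; the code sample above gives you a more complete tool):

function getSelectionOoxml() {
    // Ask Word for the current selection, coerced to Office Open XML.
    Office.context.document.getSelectedDataAsync(Office.CoercionType.Ooxml, function (result) {
        if (result.status === Office.AsyncResultStatus.Succeeded) {
            // result.value holds the flattened Office Open XML package for the selection.
            console.log(result.value);
        } else {
            console.log(result.error.message);
        }
    });
}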

    So is that all there is to it? Well, not quite. Yes, for many scenarios, you could use the full, flattened Office Open XML result you see with either of the preceding methods and it would work. The good news is that you probably don’t need most of that markup.

    If you’re one of the many app developers seeing Office Open XML markup for the first time, trying to make sense of the massive amount of markup you get for the simplest piece of content might seem overwhelming, but it doesn’t have to be.

    In this topic, we’ll use some common scenarios we’ve been hearing from the apps for Office developer community to show you techniques for simplifying Office Open XML for use in your app. We’ll explore the markup for some types of content shown earlier along with the information you need for minimizing the Office Open XML payload. We’ll also look at the code you need for inserting rich content into a document at the active selection and how to use Office Open XML with the bindings object to add or replace content at specified locations.

    When you use getSelectedDataAsync to retrieve the Office Open XML for a selection of content (or when you save the document in Word XML Document format), what you’re getting is not just the markup that describes your selected content; it’s an entire document with many options and settings that you almost certainly don’t need. In fact, if you use that method from a document that contains a task pane app, the markup you get even includes your task pane.

    Even a simple Word document package includes parts for document properties, styles, theme (formatting settings), web settings, fonts, and then some—in addition to parts for the actual content.

For example, say that you want to insert just a paragraph of text with direct formatting, as shown earlier in Figure 1. When you grab the Office Open XML for the formatted text using getSelectedDataAsync, you see a large amount of markup. That markup includes a package element that represents an entire document, which contains several parts (commonly referred to as document parts or, in the Office Open XML, as package parts), as you see listed in Figure 13. Each part represents a separate file within the package.

Tip

You can edit Office Open XML markup in a text editor like Notepad. If you open it in Visual Studio 2012, you can use Edit > Advanced > Format Document (Ctrl+K, Ctrl+D) to format the package for easier editing. Then you can collapse or expand document parts or sections of them, as shown in Figure 12, to more easily review and edit the content of the Office Open XML package. Each document part begins with a pkg:part tag.

    Figure 12. Collapse and expand package parts for easier editing in Visual Studio 2012.

    Office Open XML code snippet for a package part.

    Figure 13. The parts included in a basic Word Office Open XML document package.

    Office Open XML code snippet for a package part.

    With all that markup, you might be surprised to discover that the only elements you actually need to insert the formatted text example are pieces of the .rels part and the document.xml part.

Note

    The two lines of markup above the package tag (the XML declarations for version and Office program ID) are assumed when you use the OOXML coercion type, so you don’t need to include them. Keep them if you want to open your edited markup as a Word document to test it.

    Several of the other types of content shown at the start of this topic require additional parts as well (beyond those shown in Figure 13), and we’ll address those later in this topic. Meanwhile, since you’ll see most of the parts shown in Figure 13 in the markup for any Word document package, here’s a quick summary of what each of these parts is for and when you need it:

    • Inside the package tag, the first part is the .rels file, which defines relationships between the top-level parts of the package (these are typically the document properties, thumbnail (if any), and main document body). Some of the content in this part is always required in your markup because you need to define the relationship of the main document part (where your content resides) to the document package.

    • The document.xml.rels part defines relationships for additional parts required by the document.xml (main body) part, if any.

Important

    The .rels files in your package (such as the top-level .rels, document.xml.rels, and others you may see for specific types of content) are an extremely important tool that you can use as a guide for helping you quickly edit down your Office Open XML package. To learn more about how to do this, see Creating your own markup: best practices later in this topic.

    • The document.xml part is the content in the main body of the document. You need elements of this part, of course, since that’s where your content appears. But, you don’t need everything you see in this part. We’ll look at that in more detail later.

    • Many parts are automatically ignored by the Set methods when inserting content into a document using OOXML coercion, so you might as well remove them. These include the theme1.xml file (the document’s formatting theme), the document properties parts (core, app, and thumbnail), and setting files (including settings, webSettings, and fontTable).

    • In the Figure 1 example, text formatting is directly applied (that is, each font and paragraph formatting setting applied individually). But, if you use a style (such as if you want your text to automatically take on the formatting of the Heading 1 style in the destination document) as shown earlier in Figure 2, then you would need part of the styles.xml part as well as a relationship definition for it. For more information, see the topic section Adding objects that use additional Office Open XML parts.

    Let’s take a look at the minimal Office Open XML markup required for the formatted text example shown in Figure 1 and the JavaScript required for inserting it at the active selection in the document.

    Simplified Office Open XML markup

    We’ve edited the Office Open XML example shown here, as described in the preceding section, to leave just required document parts and only required elements within each of those parts. We’ll walk through how to edit the markup yourself (and explain a bit more about the pieces that remain here) in the next section of the topic.

    <pkg:package xmlns:pkg="http://schemas.microsoft.com/office/2006/xmlPackage">
      <pkg:part pkg:name="/_rels/.rels" pkg:contentType="application/vnd.openxmlformats-package.relationships+xml" pkg:padding="512">
        <pkg:xmlData>
          <Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
            <Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="word/document.xml"/>
          </Relationships>
        </pkg:xmlData>
      </pkg:part>
      <pkg:part pkg:name="/word/document.xml" pkg:contentType="application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml">
        <pkg:xmlData>
          <w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" >
            <w:body>
              <w:p>
                <w:pPr>
                  <w:spacing w:before="360" w:after="0" w:line="480" w:lineRule="auto"/>
                  <w:rPr>
                    <w:color w:val="70AD47" w:themeColor="accent6"/>
                    <w:sz w:val="28"/>
                  </w:rPr>
                </w:pPr>
                <w:r>
                  <w:rPr>
                    <w:color w:val="70AD47" w:themeColor="accent6"/>
                    <w:sz w:val="28"/>
                  </w:rPr>
                  <w:t>This text has formatting directly applied to achieve its font size, color, line spacing, and paragraph spacing.</w:t>
                </w:r>
              </w:p>
            </w:body>
          </w:document>
        </pkg:xmlData>
      </pkg:part>
    </pkg:package>
    
Note

If you add the markup shown here to an XML file along with the XML declaration tags for version and mso-application at the top of the file (shown in Figure 13), you can open it in Word as a Word document. Or, without those tags, you can still open it using File > Open in Word. You’ll see Compatibility Mode on the title bar in Word 2013, because you removed the settings that tell Word this is a 2013 document. Since you’re adding this markup to an existing Word 2013 document, that won’t affect your content at all.

    JavaScript for using setSelectedDataAsync

    Once you save the preceding Office Open XML as an XML file that’s accessible from your solution, you can use the following function to set the formatted text content in the document using OOXML coercion.

In this function, notice that all but the last line are used to get your saved markup for use in the setSelectedDataAsync method call at the end of the function. setSelectedDataAsync requires only that you specify the content to be inserted and the coercion type.

Note

    Replace yourXMLfilename with the name and path of the XML file as you’ve saved it in your solution. If you’re not sure where to include XML files in your solution or how to reference them in your code, see the Loading and Writing Office Open XML code sample for examples of that and a working example of the markup and JavaScript shown here.

    function writeContent() {
        var myOOXMLRequest = new XMLHttpRequest();
        var myXML;
    myOOXMLRequest.open('GET', 'yourXMLfilename', false);
        myOOXMLRequest.send();
        if (myOOXMLRequest.status === 200) {
            myXML = myOOXMLRequest.responseText;
        }
        Office.context.document.setSelectedDataAsync(myXML, { coercionType: 'ooxml' });
    }
    

    Let’s take a closer look at the markup you need to insert the preceding formatted text example.

    For this example, start by simply deleting all document parts from the package other than .rels and document.xml. Then, we’ll edit those two required parts to simplify things further.

Important

    Use the .rels parts as a map to quickly gauge what’s included in the package and determine what parts you can delete completely (that is, any parts not related to or referenced by your content). Remember that every document part must have a relationship defined in the package and those relationships appear in the .rels files. So you should see all of them listed in either .rels, document.xml.rels, or a content-specific .rels file.

The following markup shows the required .rels part before editing. Since we’re deleting the app and core document property parts, and the thumbnail part, we need to delete those relationships from .rels as well. Notice that this will leave only the relationship (with the relationship ID "rId1" in the following example) for document.xml.

      <pkg:part pkg:name="/_rels/.rels" pkg:contentType="application/vnd.openxmlformats-package.relationships+xml" pkg:padding="512">
        <pkg:xmlData>
          <Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
            <Relationship Id="rId3" Type="http://schemas.openxmlformats.org/package/2006/relationships/metadata/core-properties" Target="docProps/core.xml"/>
            <Relationship Id="rId2" Type="http://schemas.openxmlformats.org/package/2006/relationships/metadata/thumbnail" Target="docProps/thumbnail.emf"/>
            <Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="word/document.xml"/>
            <Relationship Id="rId4" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/extended-properties" Target="docProps/app.xml"/>
          </Relationships>
        </pkg:xmlData>
      </pkg:part>
    
Important

    Remove the relationships (that is, the <Relationship…> tag) for any parts that you completely remove from the package. Including a part without a corresponding relationship, or excluding a part and leaving its relationship in the package, will result in an error.

    The following markup shows the document.xml part—which includes our sample formatted text content—before editing.

    <pkg:part pkg:name="/word/document.xml" pkg:contentType="application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml">
        <pkg:xmlData>
          <w:document mc:Ignorable="w14 w15 wp14" xmlns:wpc="http://schemas.microsoft.com/office/word/2010/wordprocessingCanvas" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:wp14="http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:w14="http://schemas.microsoft.com/office/word/2010/wordml" xmlns:w15="http://schemas.microsoft.com/office/word/2012/wordml" xmlns:wpg="http://schemas.microsoft.com/office/word/2010/wordprocessingGroup" xmlns:wpi="http://schemas.microsoft.com/office/word/2010/wordprocessingInk" xmlns:wne="http://schemas.microsoft.com/office/word/2006/wordml" xmlns:wps="http://schemas.microsoft.com/office/word/2010/wordprocessingShape">
            <w:body>
              <w:p>
                <w:pPr>
                  <w:spacing w:before="360" w:after="0" w:line="480" w:lineRule="auto"/>
                  <w:rPr>
                    <w:color w:val="70AD47" w:themeColor="accent6"/>
                    <w:sz w:val="28"/>
                  </w:rPr>
                </w:pPr>
                <w:r>
                  <w:rPr>
                    <w:color w:val="70AD47" w:themeColor="accent6"/>
                    <w:sz w:val="28"/>
                  </w:rPr>
                  <w:t>This text has formatting directly applied to achieve its font size, color, line spacing, and paragraph spacing.</w:t>
                </w:r>
                <w:bookmarkStart w:id="0" w:name="_GoBack"/>
                <w:bookmarkEnd w:id="0"/>
              </w:p>
              <w:p/>
              <w:sectPr>
                <w:pgSz w:w="12240" w:h="15840"/>
                <w:pgMar w:top="1440" w:right="1440" w:bottom="1440" w:left="1440" w:header="720" w:footer="720" w:gutter="0"/>
                <w:cols w:space="720"/>
              </w:sectPr>
            </w:body>
          </w:document>
        </pkg:xmlData>
      </pkg:part>
    

    Since document.xml is the primary document part where you place your content, let’s take a quick walk through that part. (Figure 14, which follows this list, provides a visual reference to show how some of the core content and formatting tags explained here relate to what you see in a Word document.)

    • The opening w:document tag includes several namespace (xmlns) listings. Many of those namespaces refer to specific types of content and you only need them if they’re relevant to your content.

      Notice that the prefix for the tags throughout a document part refers back to the namespaces. In this example, the only prefix used in the tags throughout the document.xml part is w:, so the only namespace that we need to leave in the opening w:document tag is xmlns:w.

Tip

    If you’re editing your markup in Visual Studio 2012, after you delete namespaces in any part, look through all tags of that part. If you’ve removed a namespace that’s required for your markup, you’ll see a red squiggly underline on the relevant prefix for affected tags. Also note that, if you remove the xmlns:mc namespace, you must also remove the mc:Ignorable attribute that precedes the namespace listings.

    • Inside the opening body tag, you see a paragraph tag (w:p), which includes our sample content for this example.

    • The w:pPr tag includes properties for directly-applied paragraph formatting, such as space before or after the paragraph, paragraph alignment, or indents. (Direct formatting refers to attributes that you apply individually to content rather than as part of a style.) This tag also includes direct font formatting that’s applied to the entire paragraph, in a nested w:rPr (run properties) tag, which contains the font color and size set in our sample.

Note

You might notice that font sizes and some other formatting settings in Word Office Open XML markup look like they’re double the actual size. That’s because font sizes are expressed in half-points, while paragraph and line spacing, as well as some section formatting properties shown in the preceding markup, are specified in twips (one-twentieth of a point).

Depending on the types of content you work with in Office Open XML, you may see several additional units of measure, including English Metric Units (914,400 EMUs to an inch), which are used for some Office Art (drawingML) values, and values expressed as 100,000 times the actual value, which are used in both drawingML and PowerPoint markup. PowerPoint also expresses some values as 100 times the actual value, and Excel commonly uses actual values.
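For example, in the sample markup shown earlier, <w:sz w:val="28"/> is expressed in half-points and therefore renders as 14-point text, and <w:spacing w:before="360"/> is expressed in twentieths of a point, so it adds 360 / 20 = 18 points of space before the paragraph.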

    • Within a paragraph, any content with like properties is included in a run (w:r), such as is the case with the sample text. Each time there’s a change in formatting or content type, a new run starts. (That is, if just one word in the sample text was bold, it would be separated into its own run.) In this example, the content includes just the one text run.

      Notice that, because the formatting included in this sample is font formatting (that is, formatting that can be applied to as little as one character), it also appears in the properties for the individual run.

    • Also notice the tags for the hidden “_GoBack” bookmark (w:bookmarkStart and w:bookmarkEnd), which appear in Word 2013 documents by default. You can always delete the start and end tags for the GoBack bookmark from your markup.

    • The last piece of the document body is the w:sectPr tag, or section properties. This tag includes settings such as margins and page orientation. The content you insert using setSelectedDataAsync will take on the active section properties in the destination document by default. So, unless your content includes a section break (in which case you’ll see more than one w:sectPr tag), you can delete this tag.

    Figure 14. How common tags in document.xml relate to the content and layout of a Word document.

    Office Open XML elements in a Word document.

Tip

    In markup you create, you might see another attribute in several tags that includes the characters w:rsid, which you don’t see in the examples used in this topic. These are revision identifiers. They’re used in Word for the Combine Documents feature and they’re on by default. You’ll never need them in markup you’re inserting with your app and turning them off makes for much cleaner markup. You can easily remove existing RSID tags or disable the feature (as described in the following procedure) so that they’re not added to your markup for new content.

    Be aware that if you use the co-authoring capabilities in Word (such as the ability to simultaneously edit documents with others), you should enable the feature again when finished generating the markup for your app.

    To turn off RSID attributes in Word for documents you create going forward, do the following:

    1. In Word 2013, choose File and then choose Options.

    2. In the Word Options dialog box, choose Trust Center and then choose Trust Center Settings.

    3. In the Trust Center dialog box, choose Privacy Options and then disable the setting Store Random Number to Improve Combine Accuracy.

    To remove RSID tags from an existing document, try the following shortcut with the document open in Word:

    1. With your insertion point in the main body of the document, press Ctrl+Home to go to the top of the document.

    2. On the keyboard, press Spacebar, Delete, Spacebar. Then, save the document.

    After removing the majority of the markup from this package, we’re left with the minimal markup that needs to be inserted for the sample, as shown in the preceding section.

    Several types of rich content require only the .rels and document.xml components shown in the preceding example, including content controls, Office drawing shapes and text boxes, and tables (unless a style is applied to the table). In fact, you can reuse the same edited package parts and swap out just the <body> content in document.xml for the markup of your content.

    To check out the Office Open XML markup for the examples of each of these content types shown earlier in Figures 5 through 8, explore the Loading and Writing Office Open XML code sample referenced in the Overview section.

    Before we move on, let’s take a look at differences to note for a couple of these content types and how to swap out the pieces you need.

    Understanding drawingML markup (Office graphics) in Word: What are fallbacks?

    If the markup for your shape or text box looks far more complex than you would expect, there is a reason for it. With the release of Office 2007, we saw the introduction of the Office Open XML Formats as well as the introduction of a new Office graphics engine that PowerPoint and Excel fully adopted. In the 2007 release, Word only incorporated part of that graphics engine—adopting the updated Excel charting engine, SmartArt graphics, and advanced picture tools. For shapes and text boxes, Word 2007 continued to use legacy drawing objects (VML). It was in the 2010 release that Word took the additional steps with the graphics engine to incorporate updated shapes and drawing tools.

    So, to support shapes and text boxes in Office Open XML Format Word documents when opened in Word 2007, shapes (including text boxes) require fallback VML markup.

    Typically, as you see for the shape and text box examples included in the Loading and Writing Office Open XML code sample, the fallback markup can be removed. Word 2013 automatically adds missing fallback markup to shapes when a document is saved. However, if you prefer to keep the fallback markup to ensure that you’re supporting all user scenarios, there’s no harm in retaining it.

    Note also that, if you have grouped drawing objects included in your content, you’ll see additional (and apparently repetitive) markup, but this must be retained. Portions of the markup for drawing shapes are duplicated when the object is included in a group.

Important

    When working with text boxes and drawing shapes, be sure to check namespaces carefully before removing them from document.xml. (Or, if you’re reusing markup from another object type, be sure to add back any required namespaces you might have previously removed from document.xml.) A substantial portion of the namespaces included by default in document.xml are there for drawing object requirements.

    Note about graphic positioning

In the code samples Loading and Writing Office Open XML and Get, Set, and Edit Office Open XML, the text box and shape are set up using different types of text wrapping and positioning settings. (Also be aware that the image examples in those code samples are set up using in-line-with-text formatting, which positions a graphic object on the text baseline.)

    The shape in those code samples is positioned relative to the right and bottom page margins. Relative positioning lets you more easily coordinate with a user’s unknown document setup because it will adjust to the user’s margins and run less risk of looking awkward because of paper size, orientation, or margin settings. To retain relative positioning settings when you insert a graphic object, you must retain the paragraph mark (w:p) in which the positioning (known in Word as an anchor) is stored. If you insert the content into an existing paragraph mark rather than including your own, you may be able to retain the same initial visual, but many types of relative references that enable the positioning to automatically adjust to the user’s layout may be lost.

    Working with content controls

    Content controls are an important feature in Word 2013 that can greatly enhance the power of your app for Word in multiple ways, including giving you the ability to insert content at designated places in the document rather than only at the selection.

    In Word, find content controls on the Developer tab of the ribbon, as shown here in Figure 15.

    Figure 15. The Controls group on the Developer tab in Word.

    Content Controls group on the Word 2013 ribbon.

    Types of content controls in Word include rich text, plain text, picture, building block gallery, check box, dropdown list, combo box, date picker, and repeating section.

    • Use the Properties command, shown in Figure 15, to edit the title of the control and to set preferences such as hiding the control container.

    • Enable Design Mode to edit placeholder content in the control.

    If your app works with a Word template, you can include controls in that template to enhance the behavior of the content. You can also use XML data binding in a Word document to bind content controls to data, such as document properties, for easy form completion or similar tasks. (Find controls that are already bound to built-in document properties in Word on the Insert tab, under Quick Parts.)

    When you use content controls with your app, you can also greatly expand the options for what your app can do using a different type of binding. You can bind to a content control from within the app and then write content to the binding rather than to the active selection.

Note

    Don’t confuse XML data binding in Word with the ability to bind to a control via your app. These are completely separate features. However, you can include named content controls in the content you insert via your app using OOXML coercion and then use code in the app to bind to those controls.

    Also be aware that both XML data binding and Office.js can interact with custom XML parts in your app—so it is possible to integrate these powerful tools. To learn about working with custom XML parts in the Office JavaScript API, see the Additional resources section of this topic.

    Working with bindings in your Word app is covered in the next section of the topic. First, let’s take a look at an example of the Office Open XML required for inserting a rich text content control that you can bind to using your app.

Important

    Rich text controls are the only type of content control you can use to bind to a content control from within your app.

    <pkg:package xmlns:pkg="http://schemas.microsoft.com/office/2006/xmlPackage">
      <pkg:part pkg:name="/_rels/.rels" pkg:contentType="application/vnd.openxmlformats-package.relationships+xml" pkg:padding="512">
        <pkg:xmlData>
          <Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
            <Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="word/document.xml"/>
          </Relationships>
        </pkg:xmlData>
      </pkg:part>
      <pkg:part pkg:name="/word/document.xml" pkg:contentType="application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml">
        <pkg:xmlData>
          <w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:w15="http://schemas.microsoft.com/office/word/2012/wordml" >
            <w:body>
              <w:p/>
              <w:sdt>
                  <w:sdtPr>
                    <w:alias w:val="MyContentControlTitle"/>
                    <w:id w:val="1382295294"/>
                    <w15:appearance w15:val="hidden"/>
                    <w:showingPlcHdr/>
                  </w:sdtPr>
                  <w:sdtContent>
                    <w:p>
                      <w:r>
                      <w:t>[This text is inside a content control that has its container hidden. You can bind to a content control to add or interact with content at a specified location in the document.]</w:t>
                    </w:r>
                    </w:p>
                  </w:sdtContent>
                </w:sdt>
              </w:body>
          </w:document>
        </pkg:xmlData>
      </pkg:part>
     </pkg:package>
    

    As already mentioned, content controls—like formatted text—don’t require additional document parts, so only edited versions of the .rels and document.xml parts are included here.

    The w:sdt tag that you see within the document.xml body represents the content control. If you generate the Office Open XML markup for a content control, you’ll see that several attributes have been removed from this example, including the tag and document part properties. Only essential (and a couple of best practice) elements have been retained, including the following:

    • The alias is the title property from the Content Control Properties dialog box in Word. This is a required property (representing the name of the item) if you plan to bind to the control from within your app.

    • The unique id is a required property. If you bind to the control from within your app, the ID is the property the binding uses in the document to identify the applicable named content control.

    • The appearance attribute is used to hide the control container, for a cleaner look. This is a new feature in Word 2013, as you see by the use of the w15 namespace. Because this property is used, the w15 namespace is retained at the start of the document.xml part.

    • The showingPlcHdr attribute is an optional setting that sets the default content you include inside the control (text in this example) as placeholder content. So, if the user clicks or taps in the control area, the entire content is selected rather than behaving like editable content in which the user can make changes.

    • Although the empty paragraph mark (<w:p/>) that precedes the sdt tag is not required for adding a content control (and will add vertical space above the control in the Word document), it ensures that the control is placed in its own paragraph. This may be important, depending upon the type and formatting of content that will be added in the control.

    • If you intend to bind to the control, the default content for the control (what’s inside the sdtContent tag) must include at least one complete paragraph (as in this example), in order for your binding to accept multi-paragraph rich content.

Note

    The document part attribute that was removed from this sample w:sdt tag may appear in a content control to reference a separate part in the package where placeholder content information can be stored (parts located in a glossary directory in the Office Open XML package). Although document part is the term used for XML parts (that is, files) within an Office Open XML package, the term document parts as used in the sdt property refers to the same term in Word that is used to describe some content types including building blocks and document property quick parts (for example, built-in XML data-bound controls). If you see parts under a glossary directory in your Office Open XML package, you may need to retain them if the content you’re inserting includes these features. For a typical content control that you intend to use to bind to from your app, they’re not required. Just remember that, if you do delete the glossary parts from the package, you must also remove the document part attribute from the w:sdt tag.

    The next section will discuss how to create and use bindings in your Word app.

    We’ve already looked at how to insert content at the active selection in a Word document. If you bind to a named content control that’s in the document, you can insert any of the same content types into that control.

    So when might you want to use this approach?

    • When you need to add or replace content at specified locations in a template, such as to populate portions of the document from a database

    • When you want the option to replace content that you’re inserting at the active selection, such as to provide design element options to the user

    • When you want the user to add data in the document that you can access for use with your app, such as to populate fields in the task pane based upon information the user adds in the document

Download the code sample Add and Populate a Binding in Word, which provides a working example of how to insert and bind to a content control, and how to populate the binding.

    Add and bind to a named content control

    As you examine the JavaScript that follows, consider these requirements:

    • As previously mentioned, you must use a rich text content control in order to bind to the control from your Word app.

    • The content control must have a name (this is the Title field in the Content Control Properties dialog box, which corresponds to the Alias tag in the Office Open XML markup). This is how the code identifies where to place the binding.

    • You can have several named controls and bind to them as needed. Use a unique content control name, unique content control ID, and a unique binding ID.

function addAndBindControl() {
    Office.context.document.bindings.addFromNamedItemAsync("MyContentControlTitle", "text", { id: 'myBinding' }, function (result) {
        if (result.status == "failed") {
            if (result.error.message == "The named item does not exist.") {
                var myOOXMLRequest = new XMLHttpRequest();
                var myXML;
                myOOXMLRequest.open('GET', '../../Snippets_BindAndPopulate/ContentControl.xml', false);
                myOOXMLRequest.send();
                if (myOOXMLRequest.status === 200) {
                    myXML = myOOXMLRequest.responseText;
                }
                Office.context.document.setSelectedDataAsync(myXML, { coercionType: 'ooxml' }, function (result) {
                    Office.context.document.bindings.addFromNamedItemAsync("MyContentControlTitle", "text", { id: 'myBinding' });
                });
            }
        }
    });
}
    

    The code shown here takes the following steps:

    • Attempts to bind to the named content control, using addFromNamedItemAsync.

      Take this step first if there is a possible scenario for your app where the named control could already exist in the document when the code executes. For example, you’ll want to do this if the app was inserted into and saved with a template that’s been designed to work with the app, where the control was placed in advance. You also need to do this if you need to bind to a control that was placed earlier by the app.

    • The callback in the first call to the addFromNamedItemAsync method checks the status of the result to see if the binding failed because the named item doesn’t exist in the document (that is, the content control named MyContentControlTitle in this example). If so, the code adds the control at the active selection point (using setSelectedDataAsync) and then binds to it.

Note

    As mentioned earlier and shown in the preceding code, the name of the content control is used to determine where to create the binding. However, in the Office Open XML markup, the code adds the binding to the document using both the name and the ID attribute of the content control.

    After code execution, if you examine the markup of the document in which your app created bindings, you’ll see two parts to each binding. In the markup for the content control where a binding was added (in document.xml), you’ll see the attribute <w15:webExtensionLinked/>.

In the document part named webExtensions1.xml, you’ll see a list of the bindings you’ve created. Each is identified using the binding ID and the ID attribute of the applicable control, such as the following, where the appref attribute is the content control ID: <we:binding id="myBinding" type="text" appref="1382295294"/>.

Important

    You must add the binding at the time you intend to act upon it. Don’t include the markup for the binding in the Office Open XML for inserting the content control because the process of inserting that markup will strip the binding.

    Populate a binding

    The code for writing content to a binding is similar to that for writing content to a selection.

    function populateBinding(filename) {
            var myOOXMLRequest = new XMLHttpRequest();
            var myXML;
            myOOXMLRequest.open('GET', filename, false);
                myOOXMLRequest.send();
                if (myOOXMLRequest.status === 200) {
                    myXML = myOOXMLRequest.responseText;
                }
                Office.select("bindings#myBinding").setDataAsync(myXML, { coercionType: 'ooxml' });
            }
    

    As with setSelectedDataAsync, you specify the content to be inserted and the coercion type. The only additional requirement for writing to a binding is to identify the binding by ID. Notice how the binding ID used in this code (bindings#myBinding) corresponds to the binding ID established (myBinding) when the binding was created in the previous function.

Note

    The preceding code is all you need whether you are initially populating or replacing the content in a binding. When you insert a new piece of content at a bound location, the existing content in that binding is automatically replaced. Check out an example of this in the previously-referenced code sample Add and Populate a Binding in Word, which provides two separate content samples that you can use interchangeably to populate the same binding.
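For instance, with the populateBinding function above in place, you could populate the binding and later swap its content like this (the file names here are hypothetical placeholders for the two snippet files provided in that code sample):

// Populate the binding with the first snippet.
populateBinding('../../Snippets_BindAndPopulate/ContentSample1.xml');
// Calling the same function later with different markup simply replaces the content in the binding.
populateBinding('../../Snippets_BindAndPopulate/ContentSample2.xml');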

    Many types of content require additional document parts in the Office Open XML package, meaning that they either reference information in another part or the content itself is stored in one or more additional parts and referenced in document.xml.

    For example, consider the following:

    • Content that uses styles for formatting (such as the styled text shown earlier in Figure 2 or the styled table shown in Figure 9) requires the styles.xml part.

    • Images (such as those shown in Figures 3 and 4) include the binary image data in one (and sometimes two) additional parts.

    • SmartArt diagrams (such as the one shown in Figure 10) require multiple additional parts to describe the layout and content.

    • Charts (such as the one shown in Figure 11) require multiple additional parts, including their own relationship (.rels) part.

    You can see edited examples of the markup for all of these content types in the previously-referenced code sample Loading and Writing Office Open XML. You can insert all of these content types using the same JavaScript code shown earlier (and provided in the referenced code samples) for inserting content at the active selection and writing content to a specified location using bindings.

    Before you explore the samples, let’s take a look at few tips for working with each of these content types.

Important

    Remember, if you are retaining any additional parts referenced in document.xml, you will need to retain document.xml.rels and the relationship definitions for the applicable parts you’re keeping, such as styles.xml or an image file.

    Working with styles

    The same approach to editing the markup that we looked at for the preceding example with directly-formatted text applies when using paragraph styles or table styles to format your content. However, the markup for working with paragraph styles is considerably simpler, so that is the example described here.

    Editing the markup for content using paragraph styles

    The following markup represents the body content for the styled text example shown in Figure 2.

    Copy
    <w:body>
      <w:p>
        <w:pPr>
          <w:pStyle w:val="Heading1"/>
        </w:pPr>
        <w:r>
          <w:t>This text is formatted using the Heading 1 paragraph style.</w:t>
        </w:r>
      </w:p>
    </w:body>
    
    Note

    As you see, the markup for formatted text in document.xml is considerably simpler when you use a style, because the style contains all of the paragraph and font formatting that you otherwise need to reference individually. However, as explained earlier, you might want to use styles or direct formatting for different purposes: use direct formatting to specify the appearance of your text regardless of the formatting in the user’s document; use a paragraph style (particularly a built-in paragraph style name, such as Heading 1 shown here) to have the text formatting automatically coordinate with the user’s document.

    Use of a style is a good example of how important it is to read and understand the markup for the content you’re inserting, because it’s not explicit that another document part is referenced here. If you include the style definition in this markup and don’t include the styles.xml part, the style information in document.xml will be ignored regardless of whether or not that style is in use in the user’s document.

    However, if you take a look at the styles.xml part, you’ll see that only a small portion of this long piece of markup is required when editing markup for use in your app:

    • The styles.xml part includes several namespaces by default. If you are only retaining the required style information for your content, in most cases you only need to keep the xmlns:w namespace.

    • The w:docDefaults tag content that falls at the top of the styles part will be ignored when your markup is inserted via the app and can be removed.

    • The largest piece of markup in a styles.xml part is for the w:latentStyles tag that appears after docDefaults, which provides information (such as appearance attributes for the Styles pane and Styles gallery) for every available style. This information is also ignored when inserting content via your app and so it can be removed.

    • Following the latent styles information, you see a definition for each style in use in the document from which your markup was generated. This includes some default styles that are in use when you create a new document and may not be relevant to your content. You can delete the definitions for any styles that aren’t used by your content.

    Note

    Each built-in heading style has an associated Char style that is a character style version of the same heading format. Unless you’ve applied the heading style as a character style, you can remove it. If the style is used as a character style, it appears in document.xml in a run properties tag (w:rPr) rather than a paragraph properties (w:pPr) tag. This should only be the case if you’ve applied the style to just part of a paragraph, but it can occur inadvertently if the style was incorrectly applied.

    • If you’re using a built-in style for your content, you don’t have to include a full definition. You only need to include the style name, style ID, and at least one formatting attribute in order for the coerced OOXML to apply the style to your content upon insertion.

      However, it’s a best practice to include a complete style definition (even if it’s the default for built-in styles). If a style is already in use in the destination document, your content will take on the resident definition for the style, regardless of what you include in styles.xml. If the style isn’t yet in use in the destination document, your content will use the style definition you provide in the markup.

    So, for example, the only content we needed to retain from the styles.xml part for the sample text shown in Figure 2, which is formatted using Heading 1 style, is the following.

    Note

    A complete Word 2013 definition for the Heading 1 style has been retained in this example.

    Copy
    <pkg:part pkg:name="/word/styles.xml" pkg:contentType="application/vnd.openxmlformats-officedocument.wordprocessingml.styles+xml">
      <pkg:xmlData>
        <w:styles xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" >
          <w:style w:type="paragraph" w:styleId="Heading1">
            <w:name w:val="heading 1"/>
            <w:basedOn w:val="Normal"/>
            <w:next w:val="Normal"/>
            <w:link w:val="Heading1Char"/>
            <w:uiPriority w:val="9"/>
            <w:qFormat/>
            <w:pPr>
              <w:keepNext/>
              <w:keepLines/>
              <w:spacing w:before="240" w:after="0" w:line="259" w:lineRule="auto"/>
              <w:outlineLvl w:val="0"/>
            </w:pPr>
            <w:rPr>
              <w:rFonts w:asciiTheme="majorHAnsi" w:eastAsiaTheme="majorEastAsia" w:hAnsiTheme="majorHAnsi" w:cstheme="majorBidi"/>
              <w:color w:val="2E74B5" w:themeColor="accent1" w:themeShade="BF"/>
              <w:sz w:val="32"/>
              <w:szCs w:val="32"/>
            </w:rPr>
          </w:style>
        </w:styles>
      </pkg:xmlData>
    </pkg:part>
    

    Editing the markup for content using table styles

    When your content uses a table style, you need the same relative part of styles.xml as described for working with paragraph styles. That is, you only need to retain the information for the style you’re using in your content—and you must include the name, ID, and at least one formatting attribute—but are better off including a complete style definition to address all potential user scenarios.

    However, when you look at the markup both for your table in document.xml and for your table style definition in styles.xml, you see considerably more markup than when working with paragraph styles.

    • In document.xml, formatting is applied by cell even if it’s included in a style. Using a table style won’t reduce the volume of markup. The benefit of using table styles for the content is for easy updating and easily coordinating the look of multiple tables.

    • In styles.xml, you’ll see a substantial amount of markup for a single table style as well, because table styles include several types of possible formatting attributes for each of several table areas, such as the entire table, heading rows, odd and even banded rows and columns (separately), the first column, etc.

    Working with images

    The markup for an image includes a reference to at least one part that includes the binary data to describe your image. For a complex image, this can be hundreds of pages of markup and you can’t edit it. Since you don’t ever have to touch the binary part(s), you can simply collapse it if you’re using a structured editor such as Visual Studio 2012, so that you can still easily review and edit the rest of the package.

    If you check out the example markup for the simple image shown earlier in Figure 3, available in the previously-referenced code sample Loading and Writing Office Open XML, you’ll see that the markup for the image in document.xml includes size and position information as well as a relationship reference to the part that contains the binary image data. That reference is included in the <a:blip> tag, as follows:

    Copy
    <a:blip r:embed="rId4" cstate="print">
    

    Be aware that, because a relationship reference is explicitly used (r:embed="rId4") and that related part is required in order to render the image, if you don’t include the binary data in your Office Open XML package, you will get an error. This is different from styles.xml, explained previously, which won’t throw an error if omitted, since the relationship is not explicitly referenced and the relationship is to a part that provides attributes to the content (formatting) rather than being part of the content itself.

    Note

    When you review the markup, notice the additional namespaces used in the a:blip tag. You’ll see in document.xml that the xmlns:a namespace (the main drawingML namespace) is dynamically placed at the beginning of the use of drawingML references rather than at the top of the document.xml part. However, the relationships namespace (r) must be retained where it appears at the start of document.xml. Check your picture markup for additional namespace requirements. Remember that you don’t have to memorize which types of content require what namespaces—you can easily tell by reviewing the prefixes of the tags throughout document.xml.

    Understanding additional image parts and formatting

    When you use some Office picture formatting effects on your image—such as for the image shown in Figure 4, which uses adjusted brightness and contrast settings (in addition to picture styling)—a second binary data part for an HD format copy of the image data may be required. This additional HD format is required for formatting considered a layering effect, and the reference to it appears in document.xml similar to the following:

    Copy
    <a14:imgLayer r:embed="rId5">
    

    See the required markup for the formatted image shown in Figure 4 (which uses layering effects among others) in the Loading and Writing Office Open XML code sample.

    Working with SmartArt diagrams

    A SmartArt diagram has four associated parts, but only two are always required. You can examine an example of SmartArt markup in the Loading and Writing Office Open XML code sample. First, take a look at a brief description of each of the parts and why they are or are not required:

    Note

    If your content includes more than one diagram, they will be numbered consecutively, replacing the 1 in the file names listed here.

    • layout1.xml: This part is required. It includes the markup definition for the layout appearance and functionality.

    • data1.xml: This part is required. It includes the data in use in your instance of the diagram.

    • drawing1.xml: This part is not always required but if you apply custom formatting to elements in your instance of a diagram—such as directly formatting individual shapes—you might need to retain it.

    • colors1.xml: This part is not required. It includes color style information, but the colors of your diagram will coordinate by default with the colors of the active formatting theme in the destination document, based on the SmartArt color style you apply from the SmartArt Tools design tab in Word before saving out your Office Open XML markup.

    • quickStyles1.xml: This part is not required. Similar to the colors part, you can remove this as your diagram will take on the definition of the applied SmartArt style that’s available in the destination document (that is, it will automatically coordinate with the formatting theme in the destination document).

    Tip

    The SmartArt layout1.xml file is a good example of markup that you may be able to trim further, but where doing so might not be worth the extra time (because it removes such a small amount of markup relative to the entire package). If you want to get rid of every last line of markup you can, delete the <dgm:sampData…> tag and its contents. This sample data defines how the thumbnail preview for the diagram will appear in the SmartArt styles galleries. However, if it’s omitted, default sample data is used.

    Be aware that the markup for a SmartArt diagram in document.xml contains relationship ID references to the layout, data, colors, and quick styles parts. You can delete the references in document.xml to the colors and styles parts when you delete those parts and their relationship definitions (and it’s certainly a best practice to do so, since you’re deleting those relationships), but you won’t get an error if you leave them, since they aren’t required for your diagram to be inserted into a document. Find these references in document.xml in the dgm:relIds tag. Regardless of whether or not you take this step, retain the relationship ID references for the required layout and data parts.

    Working with charts

    Similar to SmartArt diagrams, charts contain several additional parts. However, the setup for charts is a bit different from SmartArt, in that a chart has its own relationship file. Following is a description of required and removable document parts for a chart:

    Note

    As with SmartArt diagrams, if your content includes more than one chart, they will be numbered consecutively, replacing the 1 in the file names listed here.

    • In document.xml.rels, you’ll see a reference to the required part that contains the data that describes the chart (chart1.xml).

    • You also see a separate relationship file for each chart in your Office Open XML package, such as chart1.xml.rels.

      There are three files referenced in chart1.xml.rels, but only one is required. These include the binary Excel workbook data (required) and the color and style parts (colors1.xml and styles1.xml) that you can remove.

    Charts that you can create and edit natively in Word 2013 are Excel 2013 charts, and their data is maintained on an Excel worksheet that’s embedded as binary data in your Office Open XML package. Like the binary data parts for images, this Excel binary data is required, but there’s nothing to edit in this part. So you can just collapse the part in the editor to avoid having to manually scroll through it all to examine the rest of your Office Open XML package.

    However, similar to SmartArt, you can delete the colors and styles parts. If you’ve used the chart styles and color styles available in Word to format your chart, the chart will take on the applicable formatting automatically when it is inserted into the destination document.

    See the edited markup for the example chart shown in Figure 11 in the Loading and Writing Office Open XML code sample.

    You’ve already seen how to identify and edit the content in your markup. If the task still seems difficult when you take a look at the massive Open XML package generated for your document, following is a quick summary of recommended steps to help you edit that package down quickly:

    Note

    Remember that you can use all .rels parts in the package as a map to quickly check for document parts that you can remove.

    1. Open the flattened XML file in Visual Studio 2012 and press Ctrl+K, Ctrl+D to format the file. Then use the collapse/expand buttons on the left to collapse the parts you know you need to remove. You might also want to collapse long parts you need, but know you won’t need to edit (such as the base64 binary data for an image file), making the markup faster and easier to visually scan.

    2. There are several parts of the document package that you can almost always remove when you are preparing Open XML markup for use in your app. You might want to start by removing these (and their associated relationship definitions), which will greatly reduce the package right away. These include the theme1, fontTable, settings, webSettings, thumbnail, both the core and app properties files, and any taskpane or webExtension parts.

    3. Remove any parts that don’t relate to your content, such as footnotes, headers, or footers that you don’t require. Again, remember to also delete their associated relationships.

    4. Review the document.xml.rels part to see if any files referenced in that part are required for your content, such as an image file, the styles part, or SmartArt diagram parts. Delete the relationships for any parts your content doesn’t require and confirm that you have also deleted the associated part. If your content doesn’t require any of the document parts referenced in document.xml.rels, you can delete that file also.

    5. If your content has an additional .rels part (such as chart#.xml.rels), review it to see if there are other parts referenced there that you can remove (such as quick styles for charts) and delete both the relationship from that file as well as the associated part.

    6. Edit document.xml to remove namespaces not referenced in the part, section properties if your content doesn’t include a section break, and any markup that’s not related to the content that you want to insert. If inserting shapes or text boxes, you might also want to remove extensive fallback markup.

    7. Edit any additional required parts where you know that you can remove substantial markup without affecting your content, such as the styles part.

    After you’ve taken the preceding seven steps, you’ve likely cut between about 90 and 100 percent of the markup you can remove, depending on your content. In most cases, this is likely to be as far as you want to trim.

    Regardless of whether you leave it here or choose to delve further into your content to find every last line of markup you can cut, remember that you can use the previously-referenced code sample Get, Set, and Edit Office Open XML as a scratch pad to quickly and easily test your edited markup.

    Tip

    If you update an OOXML snippet in an existing solution while developing, clear temporary Internet files before you run the solution again to update the Open XML used by your code. Markup that’s included in your solution in XML files is cached on your computer.

    You can, of course, clear temporary Internet files from your default web browser. To access Internet options and delete these settings from inside Visual Studio 2012, on the Debug menu, choose Options and Settings. Then, under Environment, choose Web Browser and then choose Internet Explorer Options.
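
    Alternatively, during development you can sidestep the cache by appending a throwaway query string to the snippet URL when you request it, so the browser always fetches a fresh copy. The following is a minimal sketch of that approach; the file and parameter names are illustrative only.

    // Development-only cache busting: the changing query string value forces
    // the browser to request a fresh copy of the snippet rather than a cached one.
    var myOOXMLRequest = new XMLHttpRequest();
    myOOXMLRequest.open('GET', 'styledTableContent.xml?nocache=' + Date.now(), false);
    myOOXMLRequest.send();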

    In this topic, you’ve seen several examples of what you can do with Open XML in your apps for Office. We’ve looked at a wide range of rich content type examples that you can insert into documents by using the OOXML coercion type, together with the JavaScript methods for inserting that content at the selection or to a specified (bound) location.

    So, what else do you need to know if you’re creating your app both for stand-alone use (that is, inserted from the Store or a proprietary server location) and for use in a pre-created template that’s designed to work with your app? The answer might be that you already know all you need.

    The markup for a given content type and the methods for inserting it are the same whether your app is designed to stand alone or to work with a template. If you are using templates designed to work with your app, just be sure that your JavaScript includes callbacks that account for scenarios where referenced content might already exist in the document (as demonstrated in the binding example shown in the section Add and bind to a named content control).

    When using templates with your app—whether the app will be resident in the template at the time that the user created the document or the app will be inserting a template—you might also want to incorporate other elements of the API to help you create a more robust, interactive experience. For example, you may want to include identifying data in a customXML part that you can use to determine the template type in order to provide template-specific options to the user. To learn more about how to work with customXML in your apps, see the additional resources that follow.
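
    As a rough sketch of that idea (the namespace URI and the decision logic here are purely hypothetical), you might check for an identifying custom XML part like this:

    // Hypothetical: look for a custom XML part that identifies the template type.
    // Use whatever namespace your template actually stores; this one is made up.
    Office.context.document.customXmlParts.getByNamespaceAsync(
        'http://contoso.com/template-info',
        function (result) {
            if (result.status === Office.AsyncResultStatus.Succeeded && result.value.length > 0) {
                // The document was created from our template; offer template-specific options.
            } else {
                // Stand-alone scenario: fall back to the default behavior of the app.
            }
        });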

    For more information, see the following resources:

    Troubleshooting issues with the “Eligible MSDN Subscriber” license type

    As you adopt Visual Studio Online (VSO) and assign licenses to your users, you may want to assign the “Eligible MSDN Subscriber” license type to team members. MSDN subscriptions are purchased outside of VSO and assigned to individual users. Before an MSDN subscriber can log in to VSO as an eligible MSDN subscriber, the subscription process must first be completed. The general flow looks like this:

    Image

    • An MSDN subscription is purchased by or assigned to a team member. Assignment could be via the Volume Licensing Service Center, MPN, etc.
    • Team member receives an email asking them to activate their subscription by signing in with a Microsoft account and registering with the subscription details provided.
    • Team member activates the subscription and associates a Microsoft account (MSA) with the subscription. The team member logs in to msdn.microsoft.com/en-us/subscriptions/manage using this MSA to manage the subscription going forward.
    • A VSO admin adds the MSA (the one the team member associated with their subscription in the previous step) to the Users Hub (https://Contoso.visualstudio.com/_user) and assigns them an “Eligible MSDN Subscriber” license.
    • If 24-48 hours have passed since the subscription was activated, the team member should be able to log in to VSO as an “Eligible MSDN Subscriber” and enjoy full access to all VSO features (a benefit of this license level).

     

    Sometimes, though, the team member may run into problems after this process. Specifically, they may try to log in and receive an error:

     

    Figure 1: (403 Forbidden \ …does not have license rights to access this account \ VSS012019: No MSDN subscription found for this user)

     

    The team member could be seeing this error for a number of reasons: 

    1. The specific MSDN subscription may not be eligible for Visual Studio Online access
    Not all MSDN subscription types include VSO as a benefit. Please refer to the list of eligible MSDN subscriptions to verify.

     

    2. The MSDN subscription was assigned to the user within the last 48 hours
    There is a delay between when an MSDN subscription is assigned and when VSO recognizes it and allows the user to log in. If the subscription was assigned less than 24 hours ago, please wait 24-48 hours and have the user try again.

     

    3. The “Software Downloads” benefit has not been provided with the MSDN subscription
    VSO usage is tied to the “Software Downloads” benefit of an MSDN subscription. In order to access VSO when assigned the “Eligible MSDN Subscriber” license type, the user’s MSDN subscription must include this “Software Downloads” benefit.  Here’s an example of enabling that benefit via the Volume Licensing Service Center:

     

     

    4. The wrong email address has been added to the VSO Users Hub
    This one is important. When an MSDN subscription is assigned to a user, they receive an email (usually at their work email address) asking them to associate their new MSDN subscription with a Microsoft account/email address (MSA). This could be the same address where they received the mail, or the user could pick a completely different MSA. The MSA they pick to associate with their new MSDN subscription is the one that needs to be added to the VSO Users Hub (https://Contoso.visualstudio.com/_user) and assigned the “Eligible MSDN Subscriber” license. It’s the one they use to manage their MSDN subscription via https://msdn.microsoft.com/subscriptions/manage.

    Check with your user to see what MSA they’ve associated with & use to manage their MSDN subscription, and ensure that’s the one on the Users Hub.

     

    If all this checks out, have the team member try this:

    1. Go to msdn.microsoft.com/en-us/subscriptions/manage and log in with your Microsoft account
    2. From the “My Subscriptions” section in the top-left, copy the name, email and Subscriber ID for the MSDN subscription you want to use with VSO. Paste this into Notepad or other text editor.
    3. Click “Add an existing subscription to my account” in the “My Subscriptions” section in the top-left.
    4. Fill out the form with the values you copied in step 2 and then click NEXT.
    5. Acknowledge and accept the license notice and then click ACCEPT.

    This will not change your subscription in any way but will essentially reactivate it and with any luck this will allow you to log in to VSO.

     

    I hope you are not having any issues with the “Eligible MSDN Subscriber” license type in VSO, but if you are, please run through this checklist to try to fix them. If you are still blocked, please open a support case and we can assist:

     

    New “Filter My ListView” SharePoint Web Part and App now available for SP 2010 & 2013 On-premise and Office 365!!

    What is it?

    The “Filter My ListView” Web Part / App is a SharePoint Web Part that enables you to create a custom filter to find information in a SharePoint list or document library.

    my listview

    Why do you need it?

    When working with SharePoint lists or document libraries containing 100K+ items, users frequently find that there is no usable tool for filtering the data.

    SharePoint lets us create views, but their functionality often doesn’t meet users’ requirements. The most common complaint is that a list view is static and users can’t modify it on the fly.

    On the other hand, the “Filter My List” web part can only filter data represented in the current view’s columns, and a user can’t apply multiple filters to a list or use other criteria (date ranges, filter conditions, and so on).

    All this leads to the need for a custom solution that addresses these limitations.

    Usage

    The “Filter My ListView” Web Part / App is a simple-to-use SharePoint list view filter. It enables you to create a custom filter form composed from all list fields (not only the fields contained in the current list view).

    Supported field types

    • Simple text

    jQuery UI is used to provide autocomplete.

    • Text with options, which lets the user select the filtering type

    Text with filtering options

    • Date

    • DateRange

    • Boolean

    • DropDown list representing the unique values of a field

    • User or Group
    • Taxonomy Term Picker

    • Multi-select CheckBoxList

    The “Filter My ListView” Web Part / App builds a filter form using different types of controls:

    • TextBox. “Contains” criteria filter
    • TextBox with autocomplete
    • TextBox with options. Allows user to choose filter criteria that can be one of these:
      • Equals
      • Not equals
      • Contains
      • Begins with
    • Date
    • Date Range
    • DropDownList
    • DropDownList with multiple selection
    • People picker
    • MetaData picker

    The relationship between field types and supported filter types is represented in this matrix:

    Contact me now through my blog, https://sharepointsamurai.wordpress.com or at tomas.floyd@outlook.com for this and more SharePoint and Office 365 custom developed Web Parts and Apps

    Example of how to use the SAP NetWeaver Gateway in building a Cloud App

    Overview

    SAP provides a tool called SAP NetWeaver Gateway that enables you to expose SAP application data as an OData service. This OData service can then be used by a CBA to create custom line of business apps. SAP has several sample gateway services you can use for testing and app building. For our example, we will use the SAP Enterprise Procurement Model (EPM) service. Read the SAP documentation to learn how to access the EPM service and other sample services from SAP. Be aware that these sample services are read-only; however, NetWeaver Gateway does support read-write services.

    Our SAP CBA app will be based on a fictional company that sells computers and accessories. This company has several locations worldwide, including a distribution branch that we will be building a line of business app for, named Contoso Shipping Management. Specifically, our app will help the branch manager of Contoso Shipping Management with their daily tasks. The branch manager routinely views product information in the system and adds supplemental product information that is specific to their branch (such as the item location and whether items are out of stock).

    Define the data model

    Begin by creating a CBA app in Visual Studio: choose the Cloud Business App project template under the Office/SharePoint > Apps node.

    Attach to SAP Data Source

    When you have created the app, attach it to the SAP service.

    1. In the Server Explorer, under the Server project, choose Data Sources > Add Data Source.
    2. In the Attach Data Source Wizard, notice the option to select SAP as a data source; after selecting it, choose Next.
      clip_image001 Figure 1. Select SAP in the Attach Data Source Wizard
    3. On the Enter Connection Information page, enter the URL to the SAP EPM service along with the credentials that you received after signing up for access to their test feeds; choose Next. Although it is possible to select None for the authentication type, typically SAP feeds are configured to require authentication (CBA apps currently support connecting to SAP using basic authentication). For more information, see the Authentication section listed at the end of this post.
      clip_image003 Figure 2. Enter connection information in the Attach Data Source Wizard
    4. On the Choose your Entities page, select the BusinessPartner and Product entities and rename the data source to SAP_EPM_Service; choose Finish.
      clip_image005 Figure 3. Select the BusinessPartner and Product entities in the Attach Data Source Wizard

    As a result, you will now see the SAP_EPM_Service added as a data source to your Server project including the BusinessPartner and Product entities that you selected.

    clip_image007 Figure 4. Entity Designer showing the Product entity

    You can use the Ctrl + Up Arrow/Down Arrow keys to change the order of the properties. It is useful to define the desired order on the entity, so that later when screens are created, the fields on the screen will automatically be added in the same order. For example, you may change the order of the properties so that ProductId, Name, ProductURL, and Description appear first.

    One feature of an SAP data source within a CBA is the recognition of certain SAP-specific annotations that can adorn entity properties within the service. Specifically, the annotations that will be recognized by a CBA are those that have the sap:semantics value set to “email”, “tel”, or “url”. The BusinessPartner entity that was selected in the Attach Data Source Wizard happens to have properties that demonstrate all three of these annotations. You can view the semantic annotations by viewing the $metadata from the SAP feed.

    https://sapes1.sapdevcenter.com/sap/opu/odata/sap/ZGWSAMPLE_SRV/$metadata

    clip_image008

    Viewing the BusinessPartner entity in the Entity Designer, observe that the EmailAddress, PhoneNumber, and WebAddress fields have their respective types set to the Email Address, Phone Number, and Web Address business types.

    clip_image010 Figure 5. Properties on the BusinessPartner entity have been set to the appropriate business type

    Extend the Product Entity Properties

    For our line of business app, we need to track some additional product information that is specific to our branch, Contoso Shipping Management. With a CBA, we can easily extend any entity properties by relating data from the internal database of the app with data from an external data source. Furthermore, CBAs support relating data between external data sources, such as SharePoint\Office 365 and SAP.

    Add a Relationship

    In our example, we would like to track two additional pieces of product information: the item location and whether a product is out of stock. First, we need to add an entity to the internal database of our app that will be used to store this additional information. Second, we will relate this entity to the Product entity (that exists in the SAP data source) using a one-to-one or zero-to-one relationship.

    1. To add an entity in the internal database, choose the Data Sources node in the Solution Explorer and select Add Table.
    2. Rename the table to ProductDetail and add the following properties: OutOfStock and BackroomLocation. Clear the Required check box for these properties.
      clip_image011 Figure 6. The ProductDetail entity
    3. Add a relationship between Product and ProductDetail. To do this, open Product in the entity designer and choose the Add: Relationship button.
      clip_image013 Figure 7. Adding a relationship
    4. In the Add New Relationship dialog box, add a relationship so that each Product can have one ProductDetail (and a ProductDetail must have a Product). Choose OK to close the dialog box.
      clip_image014 Figure 8. Configuring the relationship

    Create the client screens

    Now that the data model is defined, add some screens to the app. While working with the screens, remember that the sample SAP service that we are using is read-only. As a result, the only data that we can edit is the ProductDetail entity because it is stored in the internal database of the app. If instead we were using a read-write SAP service, we would be able to edit all of the information on these screens and automatically save the data back to SAP.

    Create the Common Screen Set

    1. Choose the Screens node in the Solution Explorer and select Add Screen.
    2. In the Add New Screen dialog box, select the Common Screen Set template and set the Screen Data to SAP_EPM_Service.ProductCollection. Finally, choose OK to close the dialog box.
      clip_image016 Figure 9. Adding a new Common Screen Set

      As a result, you will now see a ProductCollection folder created in the Solution Explorer that contains three screens: AddEditProduct, BrowseProductCollection, and ViewProduct.

    3. Since we defined a one-to-one or zero-to-one relationship between Product and ProductDetail, we need to add code that automatically creates a new ProductDetail entity when the AddEditProduct screen is opened. Choose the AddEditProduct screen from the Solution Explorer, choose the Write Code drop-down in the Screen Designer toolbar and choose created.
      clip_image018 Figure 10. Writing “created” code on the AddEditProduct screen

      To create a ProductDetail instance for this Product instance, use the following code.

      myapp.AddEditProduct.created = function (screen) {
        if (!screen.Product.ProductDetail) {
          var productDetail = myapp.activeDataWorkspace.ApplicationData.ProductDetails.addNew();
          productDetail.Product = screen.Product;
        }
      };

      Now when the AddEditProduct screen is opened, a related ProductDetail is created, if one does not already exist.

    4. To make the fields that were added from the ProductDetail entity more prevalent on the screen, move the Out of Stock and Backroom Location controls to the top of the right Rows Layout group on this screen. Do the same on the ViewProduct screen as well. Notice that you can also change other appearance properties on controls, such as the font. There are many other properties that you can set on the screen designer to customize the screens.
    5. Also on the ViewProduct screen, drag out the Supplier field under our ProductDetail controls to show data from the BusinessPartner entity. This will create a group called Supplier (with properties from the related BusinessPartner entity). For this example, remove all the fields from this group except Email Address, Phone Number, and Web Address. Notice that these controls appear respectively as an Email Viewer, Phone Viewer, and Web Address Viewer (due to the annotations feature described earlier).
      clip_image020 Figure 11. Control layout
    6. Finally, we want to display the Product images on the ViewProduct screen. First change the Product Pic Url control to be an Image control. To do this, choose the ViewProduct screen in the Solution Explorer, then find the Product Pic Url control on the screen and change it from a Text control to an Image control.
      clip_image022 Figure 12. Changing Product Pic Url to an Image control

      Because this SAP service stores the image URLs in a relative format, we need to write more code to set the full URL to the image. With the Product Pic Url control still selected, choose the Edit PostRender Code link in the Properties window.

      clip_image024 Figure 13. Properties of the Product Pic Url control

      Add the following code to the PostRender method.

      myapp.ViewProduct.ProductPicUrl_postRender = function (element, contentItem) {
        // add the URL of our SAP server to the relative ProductPicUrl
        var totalUri = "https://sapes1.sapdevcenter.com" + contentItem.value;
        $(element).find("img").attr("src", totalUri);
      };
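
      As a defensive variation on the preceding handler (a sketch, not part of the original walkthrough), you could apply the host prefix only when the service returns a relative URL:

      myapp.ViewProduct.ProductPicUrl_postRender = function (element, contentItem) {
        // Prefix the SAP host only when the value is not already an absolute URL.
        var picUrl = contentItem.value;
        if (picUrl && picUrl.indexOf('http') !== 0) {
          picUrl = "https://sapes1.sapdevcenter.com" + picUrl;
        }
        $(element).find("img").attr("src", picUrl);
      };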

    Run the app

    Now, run the app (press F5).

    If prompted, enter your SharePoint credentials. When the app starts, also choose Trust It (if prompted).

    Notice the following:

    • The app home screen allows you to browse Product data from the attached SAP data source.
    • When you choose a Product, the detailed Product information is displayed—this includes fields from both the attached SAP data source and the fields defined on the intrinsic ProductDetails entity.
      clip_image026 Figure 14. The ViewProduct screen
    • On the ViewProduct screen, you can choose the Edit button to open the AddEditProduct screen. Since the particular SAP service that the app is accessing is a read-only service, the fields defined on the SAP Product entity cannot be updated, but those defined on ProductDetail can be. If your app is accessing a read-only service, it is a good idea to remove the Add button on the ViewProduct screen and make the controls on the AddEditProduct screen “view” controls instead of “edit” controls.
      clip_image028 Figure 15. The AddEditProduct screen

    Additional notes

    Authentication

    SAP can be configured with a variety of authentication providers. For this release, we support HTTP Basic authentication. Basic authentication is enabled on most NetWeaver Gateway installations, and is easy to configure in both test and production. If you find that CBAs aren’t working with your SAP environment, let us know how we can support you better in the future.

    For our debut of SAP support, we wanted to enable a complete read and write scenario with SAP data. Therefore, in addition to basic authentication support, we’ve also implemented the session and CSRF token handling that SAP requires to be able to modify SAP data via the NetWeaver Gateway. This means that your CBAs will be able to write changes back to SAP if your SAP feeds support it. Don’t worry—when you attach to SAP data, we negotiate the SAP sessions and tokens automatically. There is nothing for you to configure.

    Non-addressable entities

    If your data fails to load and you see a diagnostics error similar to the following, it’s because you’re attempting to navigate directly to an SAP entity that is non-addressable.

    Error: The SalesOrderLineItemCollection is not addressable. Please use the Navigation Property via the SalesOrder Collection or Entity.

    For example, in the EPM service, the SalesOrderLineItemCollection entity set is marked as non-addressable which is evident in the service root document (for example, https://sapes1.sapdevcenter.com/sap/opu/odata/sap/ZGWSAMPLE_SRV).

    clip_image030

    This means that SalesOrderLineItemCollection is a child of SalesOrderCollection and that you can only access it by navigating to it through the parent.

    To solve this, be sure that you do not have a Browse Data Screen that is bound directly to a non-addressable entity (for example, SalesOrderLineItem). Instead, bind the Browse Data Screen to the parent (for example, SalesOrder) and create a View Details Screen that displays the SalesOrderLineItems data for the selected SalesOrder. This is done for you if you use the Common Screen Set template and select the parent as the primary entity for the Browse Data Screen.

    Microsoft Patterns and Practices: An overview of the Security Development Lifecycle

    Pre-SDL Requirements: Security Training

    SDL Practice 1: Training Requirements

    All members of a software development team must receive appropriate training to stay informed about security basics and recent trends in security and privacy. Individuals in technical roles (developers, testers, and program managers) that are directly involved with the development of software programs must attend at least one unique security training class each year.

    Basic software security training should cover foundational concepts such as:

    • Secure design, including the following topics:
      • Attack surface reduction
      • Defense in depth
      • Principle of least privilege
      • Secure defaults
    • Threat modeling, including the following topics:
      • Overview of threat modeling
      • Design implications of a threat model
      • Coding constraints based on a threat model
    • Secure coding, including the following topics:
      • Buffer overruns (for applications using C and C++)
      • Integer arithmetic errors (for applications using C and C++)
      • Cross-site scripting (for managed code and Web applications)
      • SQL injection (for managed code and Web applications)
      • Weak cryptography
    • Security testing, including the following topics:
      • Differences between security testing and functional testing
      • Risk assessment
      • Security testing methods
    • Privacy, including the following topics:
      • Types of privacy-sensitive data
      • Privacy design best practices
      • Risk assessment
      • Privacy development best practices
      • Privacy testing best practices

    The preceding training establishes an adequate knowledge baseline for technical personnel. As time and resources permit, training in advanced concepts may be necessary. Examples include, but are not limited to, the following:

    • Advanced security design and architecture
    • Trusted user interface design
    • Security vulnerabilities in detail
    • Implementing custom threat mitigations

    Phase One: Requirements

    SDL Practice 2: Security Requirements

    The need to consider security and privacy “up front” is a fundamental aspect of secure system development. The optimal point to define trustworthiness requirements for a software project is during the initial planning stages. This early definition of requirements allows development teams to identify key milestones and deliverables, and permits the integration of security and privacy in a way that minimizes any disruption to plans and schedules. Security and privacy requirements analysis is performed at project inception and includes specification of minimum security requirements for the application as it is designed to run in its planned operational environment and specification and deployment of a security vulnerability/work item tracking system.

    SDL Practice 3: Quality Gates/Bug Bars

    Quality gates and bug bars are used to establish minimum acceptable levels of security and privacy quality. Defining these criteria at the start of a project improves the understanding of risks associated with security issues and enables teams to identify and fix security bugs during development. A project team must negotiate quality gates (for example, all compiler warnings must be triaged and fixed prior to code check-in) for each development phase, and then have them approved by the security advisor, who may add project-specific clarifications and more stringent security requirements as appropriate. The project team must also illustrate compliance with the negotiated quality gates in order to complete the Final Security Review (FSR).

    A bug bar is a quality gate that applies to the entire software development project. It is used to define the severity thresholds of security vulnerabilities—for example, no known vulnerabilities in the application with a “critical” or “important” rating at time of release. The bug bar, once set, should never be relaxed. A dynamic bug bar is a moving target that is likely to be poorly understood within the development organization.

    SDL Practice 4: Security and Privacy Risk Assessment

    Security risk assessments (SRAs) and privacy risk assessments (PRAs) are mandatory processes that identify functional aspects of the software that require deep review. Such assessments must include the following information:

    1. (Security) Which portions of the project will require threat models before release?
    2. (Security) Which portions of the project will require security design reviews before release?
    3. (Security) Which portions of the project (if any) will require penetration testing by a mutually agreed upon group that is external to the project team?
    4. (Security) Are there any additional testing or analysis requirements the security advisor deems necessary to mitigate security risks?
    5. (Security) What is the specific scope of the fuzz testing requirements?
    6. (Privacy) What is the Privacy Impact Rating? The answer to this question is based on the following guidelines:
    • P1 High Privacy Risk. The feature, product, or service stores or transfers PII, changes settings or file type associations, or installs software.
    • P2 Moderate Privacy Risk. The sole behavior that affects privacy in the feature, product, or service is a one-time, user-initiated, anonymous data transfer (for example, the user clicks on a link and the software goes out to a Web site).
    • P3 Low Privacy Risk. No behaviors exist within the feature, product, or service that affect privacy. No anonymous or personal data is transferred, no PII is stored on the machine, no settings are changed on the user’s behalf, and no software is installed.

    Phase Two: Design

    SDL Practice 5: Design Requirements

    The optimal time to influence a project’s design trustworthiness is early in its life cycle. It is critically important to consider security and privacy concerns carefully during the design phase. Mitigation of security and privacy issues is much less expensive when performed during the opening stages of a project life cycle. Project teams should refrain from the practice of “bolting on” security and privacy features and mitigations near the end of a project’s development.

    A formal exception or bug deferral method should be considered as part of any software development process. Many applications are based on legacy designs and code, so it may be necessary to defer certain security or privacy measures as a result of technical constraints.

    In addition, it is crucially important for project teams to understand the distinction between “secure features” and “security features.” It is quite possible to implement security features which are, in fact, insecure. Secure features are defined as features whose functionality is well engineered with respect to security, including rigorous validation of all data before processing or cryptographically robust implementation of libraries for cryptographic services. The term security features describes program functionality with security implications, such as Kerberos authentication or a firewall.

    The design requirements activity contains a number of required actions. Examples include the creation of security and privacy design specifications, specification review, and specification of minimal cryptographic design requirements. Design specifications should describe security or privacy features that will be directly exposed to users, such as those that require user authentication to access specific data or user consent before use of a high-risk privacy feature. In addition, all design specifications should describe how to securely implement all functionality provided by a given feature or function. It’s a good practice to validate design specifications against the application’s functional specification. The functional specification should:

    • Accurately and completely describe the intended use of a feature or function.
    • Describe how to deploy the feature or function in a secure fashion.

    SDL Practice 6: Attack Surface Reduction

    Attack surface reduction is closely aligned with threat modeling, although it addresses security issues from a slightly different perspective. Attack surface reduction is a means of reducing risk by giving attackers less opportunity to exploit a potential weak spot or vulnerability. Attack surface reduction encompasses shutting off or restricting access to system services, applying the principle of least privilege, and employing layered defenses wherever possible.

    SDL Practice 7: Threat Modeling

    The preferred method for threat modeling is to use the SDL Threat Modeling Tool. The SDL Threat Modeling Tool is based on the STRIDE threat classification taxonomy.

    Threat modeling is used in environments where there is meaningful security risk. It is a practice that allows development teams to consider, document, and discuss the security implications of designs in the context of their planned operational environment and in a structured fashion. Threat modeling also allows consideration of security issues at the component or application level. Threat modeling is a team exercise, encompassing program/project managers, developers, and testers, and represents the primary security analysis task performed during the software design stage.

    Phase Three: Implementation

    SDL Practice 8: Use Approved Tools

    All development teams should define and publish a list of approved tools and their associated security checks, such as compiler/linker options and warnings. This list should be approved by the security advisor for the project team. Generally speaking, development teams should strive to use the latest version of approved tools to take advantage of new security analysis functionality and protections.

    SDL Practice 9: Deprecate Unsafe Functions

    Many commonly used functions and APIs are not secure in the face of the current threat environment. Project teams should analyze all functions and APIs that will be used in conjunction with a software development project and prohibit those that are determined to be unsafe. Once the banned list is determined, project teams should use header files (such as banned.h and strsafe.h), newer compilers, or code scanning tools to check code (including legacy code where appropriate) for the existence of banned functions, and replace those banned functions with safer alternatives.


    Generally speaking, development teams should decide the optimal frequency for performing static analysis to balance productivity with adequate security coverage.

    SDL Practice 10: Static Analysis

    Project teams should perform static analysis of source code. Static analysis of source code provides a scalable capability for security code review and can help ensure that secure coding policies are being followed. Static code analysis by itself is generally insufficient to replace a manual code review. The security team and security advisors should be aware of the strengths and weaknesses of static analysis tools and be prepared to augment static analysis tools with other tools or human review as appropriate.

    Phase Four: Verification

    SDL Practice 11: Dynamic Program Analysis

    Run-time verification of software programs is necessary to ensure that a program’s functionality works as designed. This verification task should specify tools that monitor application behavior for memory corruption, user privilege issues, and other critical security problems. The SDL process uses run-time tools like AppVerifier, along with other techniques such as fuzz testing, to achieve desired levels of security test coverage.

    SDL Practice 12: Fuzz Testing

    Fuzz testing is a specialized form of dynamic analysis used to induce program failure by deliberately introducing malformed or random data to an application. The fuzz testing strategy is derived from the intended use of the application and the functional and design specifications for the application. The security advisor may require additional fuzz tests or increases in the scope and duration of fuzz testing.

    SDL Practice 13: Threat Model and Attack Surface Review

    It is common for an application to deviate significantly from the functional and design specifications created during the requirements and design phases of a software development project. Therefore, it is critical to re-review threat models and attack surface measurement of a given application when it is code complete. This review ensures that any design or implementation changes to the system have been accounted for, and that any new attack vectors created as a result of the changes have been reviewed and mitigated.

    Phase Five: Release

    SDL Practice 14: Incident Response Plan

    Every software release subject to the requirements of the SDL must include an incident response plan. Even programs with no known vulnerabilities at the time of release can be subject to new threats that emerge over time. The incident response plan should include:

    • An identified sustained engineering (SE) team, or if the team is too small to have SE resources, an emergency response plan (ERP) that identifies the appropriate engineering, marketing, communications, and management staff to act as points of first contact in a security emergency.
    • On-call contacts with decision-making authority that are available 24 hours a day, seven days a week.
    • Security servicing plans for code inherited from other groups within the organization.
    • Security servicing plans for licensed third-party code, including file names, versions, source code, third-party contact information, and contractual permission to make changes (if appropriate).

    SDL Practice 15: Final Security Review

    The Final Security Review (FSR) is a deliberate examination of all the security activities performed on a software application prior to release. The FSR is performed by the security advisor with assistance from the regular development staff and the security and privacy team leads. The FSR is not a “penetrate and patch” exercise, nor is it a chance to perform security activities that were previously ignored or forgotten. The FSR usually includes an examination of threat models, exception requests, tool output, and performance against the previously determined quality gates or bug bars. The FSR results in one of three different outcomes:

    • Passed FSR. All security and privacy issues identified by the FSR process are fixed or mitigated.
    • Passed FSR with exceptions. All security and privacy issues identified by the FSR process are fixed or mitigated and/or all exceptions are satisfactorily resolved. Those issues that cannot be addressed (for example, vulnerabilities posed by legacy “design level” issues) are logged and corrected in the next release.
    • FSR with escalation. If a team does not meet all SDL requirements and the security advisor and the product team cannot reach an acceptable compromise, the security advisor cannot approve the project, and the project cannot be released. Teams must either address whatever SDL requirements that they can prior to launch or escalate to executive management for a decision.

    SDL Practice 16: Release/Archive

    Software release to manufacturing (RTM) or release to Web (RTW) is conditional on completion of the SDL process. The security advisor assigned to the release must certify (using the FSR and other data) that the project team has satisfied security requirements. Similarly, for all products that have at least one component with a Privacy Impact Rating of P1, the project’s privacy advisor must certify that the project team has satisfied the privacy requirements before the software can be shipped.

    In addition, all pertinent information and data must be archived to allow for post-release servicing of the software. This includes all specifications, source code, binaries, private symbols, threat models, documentation, emergency response plans, license and servicing terms for any third-party software and any other data necessary to perform post-release servicing tasks.

    Optional Security Activities

    Optional security activities are generally performed when a software application is likely to be used in critical environments or scenarios. They are often specified by a security advisor as part of a negotiated set of additional requirements to ensure a greater level of security analysis for certain software components. The practices in this section provide examples of optional security tasks and should not be considered an exhaustive list.

    Manual Code Review

    Manual code review is an optional task in the SDL and is usually performed by highly skilled individuals on the application security team and/or the security advisor. While analysis tools can do much of the work of finding and flagging vulnerabilities, they are not perfect. As a result, manual code review is usually focused on the “critical” components of an application. Most often it is used where sensitive data, such as personally identifiable information (PII), is processed or stored. It is also used to examine other critical functionality such as cryptographic implementations.


    Penetration Testing

    Penetration testing is a white box security analysis of a software system performed by skilled security professionals simulating the actions of a hacker. The objective of a penetration test is to uncover potential vulnerabilities resulting from coding errors, system configuration faults, or other operational deployment weaknesses. Penetration tests are often performed in conjunction with automated and manual code reviews to provide a greater level of analysis than would ordinarily be possible.

    Any issues identified during penetration testing must be addressed and resolved before the project is approved for release.

    Vulnerability Analysis of Similar Applications

    Many reputable sources of information about software vulnerabilities can be found on the Internet. In some cases, the analysis of vulnerabilities found in analogous software applications can shed light on potential design or implementation issues in software under development.

    Other Process Requirements

    Root Cause Analysis

    While not traditionally a part of the software development process, root cause analysis plays an important part in ensuring software security. Upon discovery of a previously unknown vulnerability, an investigation should be performed to ascertain precisely where the security processes failed. These vulnerabilities can be attributed to a variety of causes, including human error, tool failure, and policy failure. The goal of root cause analysis is to understand the precise nature of the failure. This information helps to ensure that errors of the same type are accounted for in future revisions of the SDL.

    Periodic Process Updates

    Software threats are not static. As a result, the process used to secure software cannot be static. Organizations should take the knowledge learned from practices such as root cause analysis, policy changes, and improvements in technology and automation, and apply them to the SDL on a predictable schedule. Generally speaking, a yearly update schedule should suffice. The exception to this rule is when new, previously unknown vulnerability types are identified. This phenomenon requires immediate, out-of-cycle revision of the SDL to ensure proper mitigations are in place going forward.

    Application Security Verification Process

    Organizations developing secure software will naturally want a means to verify that the processes outlined in the Microsoft SDL have been followed. Access to centralized development and test data helps decision-making in a number of important scenarios, such as the Final Security Review, SDL requirement exception handling, and security audits. The process of verifying application security involves a number of different processes and actors:

    • A specially designated application should be used to track compliance with the SDL. This application serves as the central repository for all SDL process artifacts, including (but not limited to) design and implementation notes, threat models, tool log uploads, and other process attestations. As with any critical application, it should use access controls to ensure:
      • Only authorized personnel can use the application.
      • Strong separation between roles. For example, a developer may be able to use the application and upload data, but should be prohibited from accessing functionality reserved for the security and privacy advisors, security team leads, and testers.
    • The security and privacy team leads are responsible for ensuring that the data necessary for an objective judgment is properly categorized and entered into the tracking application.
    • The information entered into the tracking application is used by the security and privacy advisors to provide the analysis framework for the Final Security Review.
    • The security and privacy advisors are responsible for reviewing the data entered into the tracking application (including the FSR results and other additional security tasks assigned by the advisors) and certifying that all requirements are met and/or all exceptions are satisfactorily resolved.

    This document focuses on the Advanced level of the SDL Optimization Model, where rudimentary tracking processes are (in most instances) insufficient to the task. However, organizations with less sophisticated processes or smaller resource pools—those who fit within the Basic or Standardized levels of the SDL Optimization Model—can likely make do with a simpler tracking process.

    It is very important that the tracking and verification process accurately capture:

    • The security and privacy requirements of the organization (for example, no known critical vulnerabilities at release).
    • The functional and technical requirements of the application under development.
    • The application’s operational context.

    For example, if a development team creates a process control application to run in a critical environment, the proper investment of time and resources must be allocated to the creation and maintenance of the tracking process to enable objective analyses by the organization’s security and privacy principals, executive leadership, and relevant third parties such as compliance auditors or evaluators.

    Put differently, skimping on the tracking process inevitably leads to problems later, usually during an emergency. Ensure that reliable systems are in place to answer critical questions at critical times.

    Resources:

    Home page of the SDL – http://www.microsoft.com/security/sdl/default.aspx

    Free integration guide – Microsoft Dynamics CRM Online and Office 365

    Combining the online services of Office 365 with Microsoft Dynamics CRM Online empowers your teams to work where and when they want with best-of-class cloud services.

    This guide is intended for Microsoft Dynamics CRM administrators and technical decision makers interested in exploring Office 365 services and how they integrate with Microsoft Dynamics CRM Online. Integration with Office 365 becomes increasingly relevant to Microsoft Dynamics CRM Online users as management of Microsoft Dynamics CRM Online shifts to the Microsoft online services environment.

    For a .pdf version of this document: Integration Guide: Microsoft Dynamics CRM Online and Office 365 please visit – http://download.microsoft.com/download/D/4/F/D4F5A3C3-E3CB-48C9-85DE-4ED0B7FFBD60/CRMO365Integration.pdf

    Some of what this paper covers:

    • Add an Office 365 trial subscription to Microsoft Dynamics CRM Online
    • Set up CRM Online to use Exchange Online
    • Set up CRM Online to use SharePoint Online
    • Set up CRM Online to use Lync Online
    • Set up CRM Online to use Yammer

    Brand NEW “My Latest Documents” SharePoint Web Part and App released and available!!

    In a SharePoint team site with multiple document libraries, there is a recurring requirement to see what the latest changes are. Unfortunately, the out-of-the-box Microsoft web part only shows the documents changed by the current user.

    To provide a solution that does not require you to be an administrator or owner, I created a definition for a recent documents web part. It can be deployed on Office 365, SharePoint 2010, and SharePoint 2007 sites. On SharePoint 2007 and SharePoint 2010, the only permission needed is the right to modify the site.

    The goal of the web part was:

    • Show the 10 latest changed documents
    • Show a “more” button that displays an additional 40 documents
    • Display the online status of users
    • Display the correct date format of each site
    • Display the name of the folder where the document is stored and a link to the folder.
    • Get documents recursively from all sub sites

    Example Image:

    The following instructions explain in detail how you can activate it:

     

    Activation SharePoint 2010

    1. Edit your webpage and add a new web part
    2. Select browse and upload the web part definition

    3. Click Upload

    4. Now, it’s a bit confusing, but you have to click “Add new web part” again
    5. The uploaded web part is now available in your web part menu and you can add it.

    All these steps have to be repeated each time you want to add the web part. To make it available to all site owners, add it to the web part gallery.

    Activation on Office 365

    That means you have to upload it to your web part gallery:

    After uploading the web part is available on your site.

    You can simply edit the site and click More Web Parts.

    Afterwards you can find it in the Default Web Parts folder.

     

    Contact me NOW at tomas.floyd@outlook.com to order this brand new Web Part and/or App

    Create a new Search Service Application in SharePoint 2013 using PowerShell

    The search architecture in SharePoint 2013 has changed quite a bit when compared to SharePoint 2010. In fact the Search Service in SharePoint 2013 is completely overhauled. It is a combination of FAST Search and SharePoint Search components.


    In SharePoint 2013, the query and crawl topologies are merged into a single topology, simply called the “Search topology”. Provisioning the search service application creates four databases:

    • SP2013_Enterprise_Search – This is a search administration database. It contains configuration and topology information
    • SP2013_Enterprise_Search_AnalyticsReportingStore – This database stores the result of usage analysis
    • SP2013_Enterprise_Search_CrawlStore – The crawl database contains detailed tracking and historical information about crawled items
    • SP2013_Enterprise_Search_LinksStore – Stores the information extracted by the content processing component and also stores click-through information

    # Create a new Search Service Application in SharePoint 2013

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Settings
    $IndexLocation = "C:\Data\Search15Index" # Location must be empty, will be deleted during the process!
    $SearchAppPoolName = "Search App Pool"
    $SearchAppPoolAccountName = "Contoso\administrator"
    $SearchServerName = (Get-ChildItem env:computername).value
    $SearchServiceName = "Search15"
    $SearchServiceProxyName = "Search15 Proxy"
    $DatabaseName = "Search15_ADminDB"

    Write-Host -ForegroundColor Yellow "Checking if Search Application Pool exists"
    $SPAppPool = Get-SPServiceApplicationPool -Identity $SearchAppPoolName -ErrorAction SilentlyContinue

    if (!$SPAppPool)
    {
        Write-Host -ForegroundColor Green "Creating Search Application Pool"
        $spAppPool = New-SPServiceApplicationPool -Name $SearchAppPoolName -Account $SearchAppPoolAccountName -Verbose
    }

    # Start the search service instances
    Write-Host "Start Search Service instances...."
    Start-SPEnterpriseSearchServiceInstance $SearchServerName -ErrorAction SilentlyContinue
    Start-SPEnterpriseSearchQueryAndSiteSettingsServiceInstance $SearchServerName -ErrorAction SilentlyContinue

    Write-Host -ForegroundColor Yellow "Checking if Search Service Application exists"
    $ServiceApplication = Get-SPEnterpriseSearchServiceApplication -Identity $SearchServiceName -ErrorAction SilentlyContinue

    if (!$ServiceApplication)
    {
        Write-Host -ForegroundColor Green "Creating Search Service Application"
        $ServiceApplication = New-SPEnterpriseSearchServiceApplication -Partitioned -Name $SearchServiceName -ApplicationPool $spAppPool.Name -DatabaseName $DatabaseName
    }

    Write-Host -ForegroundColor Yellow "Checking if Search Service Application Proxy exists"
    $Proxy = Get-SPEnterpriseSearchServiceApplicationProxy -Identity $SearchServiceProxyName -ErrorAction SilentlyContinue

    if (!$Proxy)
    {
        Write-Host -ForegroundColor Green "Creating Search Service Application Proxy"
        New-SPEnterpriseSearchServiceApplicationProxy -Partitioned -Name $SearchServiceProxyName -SearchApplication $ServiceApplication
    }

    $ServiceApplication.ActiveTopology
    Write-Host $ServiceApplication.ActiveTopology

    # Clone the default topology (which is empty), add the components, and then activate the clone
    Write-Host "Configuring Search Component Topology...."
    $clone = $ServiceApplication.ActiveTopology.Clone()
    $SSI = Get-SPEnterpriseSearchServiceInstance -Local
    New-SPEnterpriseSearchAdminComponent -SearchTopology $clone -SearchServiceInstance $SSI
    New-SPEnterpriseSearchContentProcessingComponent -SearchTopology $clone -SearchServiceInstance $SSI
    New-SPEnterpriseSearchAnalyticsProcessingComponent -SearchTopology $clone -SearchServiceInstance $SSI
    New-SPEnterpriseSearchCrawlComponent -SearchTopology $clone -SearchServiceInstance $SSI

    Remove-Item -Recurse -Force -LiteralPath $IndexLocation -ErrorAction SilentlyContinue
    mkdir -Path $IndexLocation -Force

    New-SPEnterpriseSearchIndexComponent -SearchTopology $clone -SearchServiceInstance $SSI -RootDirectory $IndexLocation
    New-SPEnterpriseSearchQueryProcessingComponent -SearchTopology $clone -SearchServiceInstance $SSI
    $clone.Activate()

    Write-Host "Your search service application $SearchServiceName is now ready"

    Update

    To configure failover server(s) for Search DBs, use the following PowerShell:

    Thanks to Marcel Jeanneau for sharing this!

    # Admin database
    $ssa = Get-SPEnterpriseSearchServiceApplication "Search Service Application"
    Set-SPEnterpriseSearchServiceApplication -Identity $ssa -FailoverDatabaseServer <failoverserveralias\instance>

    # Crawl database
    $CrawlDatabase0 = ([array]($ssa | Get-SPEnterpriseSearchCrawlDatabase))[0]
    Set-SPEnterpriseSearchCrawlDatabase -Identity $CrawlDatabase0 -SearchApplication $ssa -FailoverDatabaseServer <failoverserveralias\instance>

    # Links database
    $LinksDatabase0 = ([array]($ssa | Get-SPEnterpriseSearchLinksDatabase))[0]
    Set-SPEnterpriseSearchLinksDatabase -Identity $LinksDatabase0 -SearchApplication $ssa -FailoverDatabaseServer <failoverserveralias\instance>

    # Analytics database
    $AnalyticsDB = Get-SPDatabase -Identity <analytics reporting database name or ID>
    $AnalyticsDB.AddFailOverInstance("failover alias\instance")
    $AnalyticsDB.Update()

    You can always change the default content access account using the following command:

    $password = Read-Host -AsSecureString
    Set-SPEnterpriseSearchService -id "SSA name" -DefaultContentAccessAccountName Contoso\account -DefaultContentAccessAccountPassword $password

    Look out for my PowerShell Web Part and Google Analytics Web Part and App that are under development and will be available soon for purchase!!


    Building solutions on Duet Enterprise – What does this mean exactly?

    Any kind of enterprise application can be built on top of Duet Enterprise, e.g. to support particular business activities for Product Lifecycle Management (PLM), Supply Chain Management (SCM), Customer Relationship Management (CRM), and so on.

    However the idea here is not to rebuild application interfaces that already exist. Instead through Duet Enterprise we provide a set of application building blocks from which new kinds of solutions can be composed. We expect that these new kinds of solutions will enable collaboration, communications, or content management in Office and SharePoint in extension to the core SAP business processes.

    The consumers of these solutions would be information workers who need access to SAP information and services as they perform their business activities, or who participate in business processes managed in SAP, but are not typically power users of the SAP interfaces.


    The building blocks that we will provide through Duet include both functional building blocks (e.g. application components for SAP data models like Customer, Product, Employee, … etc.) , as well as infrastructural building blocks (e.g. for Reporting, Workflow, Collaboration, … etc.).

    These infrastructural building blocks are what Christian described as ‘Ready To Use’ capabilities in his blog posting earlier.

    So an example of a solution built on Duet Enterprise might be an extension to inventory management processes in SAP to allow ad-hoc collaboration and reporting in SharePoint around alerts and notifications raised from a warehouse or a factory floor.

    However while Duet Enterprise provides a new set of building blocks, it does not add a new set of tools. Solution builders can leverage existing skillsets, and use standard tools like Visual Studio, SharePoint Designer, and ABAP Workbench. They would use these tools to compose these building blocks into new kinds of solutions that support a particular type of business activity.

    This solution would typically combine ad-hoc collaboration in SharePoint with structured process in SAP, and bring in contextual business data and reports to support decision making for that activity.

    Let us take a specific example. In today’s enterprises there are typically virtual teams to service key customer accounts. A particular account v-team would include employees who report into different organizational units, e.g. Sales, Marketing, Support … etc., and who work out of different geographic locations.  While organizationally separate, these v-teams need a way to collaborate among themselves, and to manage their working activities.

    Out-of-the-box with Duet Enterprise we will ship a set of composites, one of which is the customer collaboration workspace for account teams. This will enable any person with access to account information in SAP, to browse the list of customer accounts within SharePoint, and for any account request access to a collaboration workspace.

    This is a SharePoint site that gets auto-provisioned by the Duet Foundation the first time that the workspace is requested by a member of the account v-team, using the packaged composite that ships with Duet Enterprise. The site comes pre-wired with connections to SAP information and services.

    For example the account v-team members can view and edit account related information that is managed by SAP, as well as view reports related to the account, and can take for granted that the Duet Foundation provides the integrated security and administrative support which is necessary to support such activities.

    Now consider the design time experience to support the flow described above. The composite for the customer collaboration workspace which we provide in Duet Enterprise is a collection of building blocks that have been assembled together in a particular way. This provides a starting point for further customization and solution building.

    For example, there is a set of web services that are exposed from within the Duet Enterprise SAP Add-on (e.g. Customer, Sales Contact, … etc.), there is a set of External Content Types provided for SharePoint (that map to the SAP web services), and there is a  SharePoint site definition which provides a template for the customer collaboration workspaces which are auto-provisioned for v-team members.

    In addition there are SharePoint Features for the ready to use capabilities (e.g. Related Reports) that can be stapled to the Site Definition. So a solution builder can customize the composite by extending the web services and the external content types, by adding new SharePoint web parts or Features if needed, and by customizing the site definition files for the workspace, or even creating entirely new site definitions.

    Finally the solution builder leverages hooks in the Duet Foundation to plug in these new or customized composites.

    Contrary to popular belief – The SAME skills are used to develop SharePoint On-premise and SharePoint Online Apps

    Taking an on-premise application and deploying it on a Windows Azure Virtual Machine should be straightforward. The majority of the modifications required involve changing configurations to accommodate differences in the Virtual Machine’s setup.

    Going to the cloud on a single Virtual Machine is like crossing the ocean on a row boat. You’ll probably get there, but I can’t guarantee anything.


    Once you’ve deployed your application to the cloud, things get interesting because you have opportunities that allow you to solidify your application.

    If you are building your application on the Cloud instead of building your application for the Cloud, you’re doing it wrong!

    Taking advantage of services offered on Windows Azure allows you to concentrate on creating value for your business. The Windows Azure teams take care of lots of boring stuff like disaster recovery. To me, building applications for the cloud is the equivalent of going from a row boat to a fleet of navy destroyers. (Alright, I may be a little overconfident, but you get the picture.)


    Preparing applications to go web scale isn’t a trivial task. Each application is unique and requires different services. You will need to go over your objectives and build accordingly.

    Although the planning phase isn’t trivial, augmenting your applications with cloud services isn’t as complicated as some might think. The Windows Azure teams provide an amazing amount of support materials like hands-on labs, tutorials, and a rich collection of documentation.

    Architectural Considerations

    Throughout the planning phase of an application on the cloud there are a few architectural strategies that require some extra attention.

    Cloud Design Patterns

    While designing or augmenting cloud-based applications, the following patterns should be considered. While this list isn’t complete, it should be used as a starting point; a minimal code sketch of the first pattern follows the list.

    For more patterns, take a look at the resources listed at the bottom of this post.

    • Cache-aside Pattern – This is a common technique that we can use to improve the performance and scalability of a cloud solution by temporarily copying frequently accessed data to fast storage located close to the application.
    • Queue-based Load Leveling Pattern – Cloud solutions are subjected to very unpredictable loads and require protection against their own success. By placing queues between clients and the workers who execute tasks, you protect yourself against spikes.
    • Competing Consumers Pattern – Enable multiple Cloud Service instances to retrieve messages from the same source.
    • Compute Resource Consolidation Pattern – Consolidate multiple tasks or operations into a single computational unit (Roles, Virtual Machines, Web Sites).
    • Eventual Consistency – Cloud solutions use data that’s dispersed and duplicated across data stores; managing and maintaining data consistency can become a major bottleneck.
    • Leader Election Pattern – A great way to coordinate actions being performed by a group of Cloud Service instances is to elect a leader that can act as the coordinator. This is extremely useful for maintenance tasks and singleton tasks that need fallbacks.
    • Materialized View Pattern – This has got to be one of my favorite cloud patterns. The solutions’ data may not be formatted in a way that favors our query requirements. In order to optimize our queries, it may be desirable to generate pre-populated views whose shapes correspond with our requirements.
    • Pipes and Filters Pattern – We should strive to decompose complex tasks into a series of discrete elements that can be reused.
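
    As a concrete illustration of the first pattern, here is a minimal C# sketch of Cache-aside. It uses the in-process MemoryCache purely as a stand-in for a shared cache (such as the Windows Azure Cache Service), and the ProductRepository and Product types are illustrative only, not part of any Azure SDK.

    using System;
    using System.Runtime.Caching; // requires a reference to the System.Runtime.Caching assembly

    public class Product
    {
        public string Id { get; set; }
    }

    public class ProductRepository
    {
        private static readonly ObjectCache Cache = MemoryCache.Default;

        // Cache-aside: check the cache first, fall back to the data store on a miss, then populate the cache.
        public Product GetProduct(string productId)
        {
            var cached = Cache.Get(productId) as Product;
            if (cached != null)
            {
                return cached; // cache hit: no round trip to the data store
            }

            Product product = LoadProductFromStore(productId); // cache miss: read from the system of record

            // Expire entries after a bounded time so stale data is eventually refreshed.
            Cache.Set(productId, product, DateTimeOffset.UtcNow.AddMinutes(5));
            return product;
        }

        private Product LoadProductFromStore(string productId)
        {
            // Placeholder for the real data access (SQL Database, Table storage, a SharePoint list, and so on).
            return new Product { Id = productId };
        }
    }

    In a real cloud solution the in-process cache would be replaced by a shared cache so that all instances see the same data, and cache invalidation on updates would need to be handled explicitly.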

    Succeeding on the cloud is all about architecture, using the right service for the right reasons. This can be a challenge because cloud platforms are continuously evolving. Windows Azure is currently (Jan 2014) on a 3 week release cycle and it can be quite a challenge for all of us to keep up. Fortunately there are blogs, podcasts and online courses that help us along the way.

    Learn More

    New “Spotlight On” Web Part Released and Available!!

    The “Spotlight On..” Web Part selects a random entry from the specified Sharepoint Library and displays a picture, a title and an abstract of the selected person or item.
    The Web Part can be used with Windows SharePoint Services V3, MOSS 2007, SharePoint 2010 and SharePoint 2013. You can configure the following web part properties:

    • the Sharepoint Library
    • the List fields corresponding to the picture, title, abstract and detail link
    • enable or suppress the “Details..” URL.
    • show a new entry every day or on every page refresh

    This allows you to display random data contained in any Sharepoint List by specifying the desired Sharepoint List name and the desired list column names.


     Instructions:

    1. download the Spotlight On Web Part Installation Instructions (PDF file, see above)
    2. either install the web part manually or deploy the feature to your server/farm as described in the instructions.
    3. Create a new Sharepoint Picture Library if you do not intend to use an existing Picture Library.
      If you decide to create a new Sharepoint list to store the Spotlight entries, create a new Sharepoint Picture Library anywhere in the Sharepoint site collection (the web part is able to access any picture library within the site collection).
      The list needs the following columns to hold the entries:
      – Title
      – Abstract
      – optional Detail Link URL
    4. You can also use a SharePoint List (as opposed to a picture library). In this case please add the pictures as list attachments.
    5. Configure the following relevant Web Part properties in the Web Part Editor “Miscellaneous” pane section as needed:
      • Site Name: Enter the name of the site that contains the Spotlight Picture Library:
        – leave this field empty if the Library is in the current site (eg. the Web Part is placed in the same site)
        – Enter a “/” character if the Library is contained in the top site
        – Enter a path if the Library is in a subsite of the current site (eg. in the form of “current site/subsite”)
      • List Name: Enter the desired Sharepoint Picture Library
      • View Name: Optionally enter the desired List View of the list specified above. A List View allows you to specify specific data filtering and sorting.
        Leave this field empty if you want to use the List default view.
      • Title Field Name: Enter the desired Library Column name that contains the titles (Default=”Title”)
      • Abstract Field Name: Enter the desired Library Column name that contains the abstracts (Default=”Abstract”)


    You can alternatively specify a “Field Template” by entering the desired Library fields (surrounded by curly braces). You can specify HTML tags and CSS styles to freely format the text.
    Example:

    <strong>{JobTitle}</strong>
    <br>{Description}

    <div style="padding:5px; margin-top:5px; background-color:orange">
    <strong>Schools:</strong><br>
    {Bio}
    </div>

    • The above example assumes that the Sharepoint Library includes a “JobTitle”, a “Description” and a “Bio” column.
    • Details URL Field Name: (optional) Enter the desired Library Column name that contains the Detail page links (Default=”DetailURL”). Leave this field empty if you don’t want to provide a detail link.
      If you want to automatically link to the corresponding Sharepoint List Detail View page, enter the keyword “DetailView” into this field.
      If you want to automatically link to the corresponding user’s “MySite” page, enter the keyword “MySite” into this field.
    • Open Details Link in new window: opens the link in a new browser window.
    • Details Caption: allows you to localize the “Details..” link displayed in the lower right part of the web part (if a “Details” link is specified).
    • Text Layout: specify the placing of the Text with respect to the Image:
      – Right
      – Wrap
      – Bottom
      – Left
      – WrapLeft
    • Image Height: specify the image height in pixels. Enter “0” if you want to use the default picture size.
    • Default User Image: (optional) specify a default user picture (if there is no user picture available) by entering a relative URL to the image
    • Example:
      /yoursite/yourPictureLibrary/yourDefaultUser.jpg
    • Title CSS Style: enter optional CSS styles to format the Title (default: bold)
    • Text CSS Style:  enter optional CSS styles to format the Body Text (default: none)
    • Background Color (optional):  To set the desired Web Part Background color, enter either a HTML color name (as eg. “yellow”) or a hexadecimal RGB color value (as eg. “#ffcc33”). Leave this field empty if you don’t want to use a specific background color.
    • Show new Entry: shows a new entry depending on the below setting:
      – always (a new entry is displayed on every page refresh)
      – every Day
      – every Week
      – every Month
      – top Entry (the most recently added entry unless a View is used with a specific custom sorting order)
    • Show specific Entry: optionally enter the List ID of the List Item to be displayed.
    • Nbr. of Items to show: optionally enter two or more items to be displayed side by side.
    • Center Web Part: horizontally centers the Web Part within the available space.
    • License Key: enter your Product License Key (as supplied after purchase of the “Spotlight On Web Part” license key).
      Leave this field empty if you are using the free 30 day evaluation version. You can check the evaluation period via the web part configuration pane.


    Contact me now on tomas.floyd@outlook.com to get a trial/copy of this Web Part (also available as an App)

    http://gravatar.com/sharepointsamurai

    My Resume – Senior C#, SharePoint Developer with 9 years experience – In the job market


    How to use “Accordion” JQuery plug-in on SharePoint 2013

    This solution was created in the following steps:

    • Create a link list in a SharePoint site and use a Script Editor to generate the list view using SharePoint COM script/jQuery.
    • Create an app solution (SharePoint-Hosted) that creates a custom link list and integrates a view with the created SharePoint COM script/jQuery.
    • Create a Client Web Part (host web) to list the view using the script list and REST.
    Expected Result:

    First Step:

    Create the Script using Jquery(Accordion)/COM to call List Data in SharePoint Site

    Add Custom View using Jquery and SharePoint Site

    For this example I have selected the “Accordion” jQuery plug-in to apply in the SharePoint 2013 environment; it will be used to display list data and to sort the order automatically using drag and drop.
    Go to your SharePoint site and add a “Links App”, then include a column called “OrderLink” of type Number with a default value of ‘0’.
    The second step is to add a Script Editor web part to the home page and include the script that creates our new view.

    Increase the Textbox in Script Editor

    One question people often ask about the Script Editor is why the text box is so small; this editor is normally used for short external references, for example embedded YouTube videos or references to embedded Office documents.
    If you want to increase the height because you are adding a very long script, I can recommend including in the snippet a style for the class “.ms-rte-embeddialog-textarea” and increasing the height; for this example it was increased to 500px.
    PS: This is just an example; my recommendation is to use a minified JavaScript file and reference it instead.

    Include the Jquery reference files

    To work with the “Accordion” plug-in, some references need to be added; in this case I added some JS files to the SiteAssets library, but for the support files you can change the reference to the existing UI page.
    Define the look and feel of the HTML that will be generated dynamically by the JS and transformed by the Accordion plug-in.

    Define the method that will be called to change the look and feel to the “Accordion” style.
    The property “ListID=’ListitemID’” is very important when an item is deleted: the jQuery code deletes the DIV tag, so a refresh of the page is not needed.
    For this example there are 2 methods:
    • Users will be able to expand and collapse items.
    • Users will be able to reorder the items if they have permissions.
    There are some variables that can be changed to target another list name, filter column, or row limit.
    You can download the file with all the script in the following link.
    PS: this code was made in 15 minutes and was not optimized.
    After the inclusion of the code you should be able to add new items, edit, delete, and reorder the position of the items in a more flexible way, and the final result should look something like this:
    There is also the possibility of including the JS script code in the view to change the out-of-the-box look and feel to our new view (some changes are needed, which will be shown in the app solution).

    Reorder of the list items

    After you drag and drop the items in the new view, the OrderLink value will be updated with the new order, as shown in the following image.

    Step 2:

    Create a SharePoint App to create new View with Jquery

    The second part of this example is the creation of a SharePoint app called “Custom Links”, using “SharePoint-Hosted” as the hosting model for our application.
    This app will create a link in the main site called “Custom Links” that redirects to our custom support application.
    The first thing that we should do is open Visual Studio and create our App for SharePoint 2013.
    This solution will include the following main files:
    • Pages
      • WPReorder Web Part
      • WPReorder.ASPX
    • SharePoint List
      • ListTemplate: 10000
      • Custom Action:ScriptLink :jquery-1.8.2.min.js
      • Custom Action:ScriptLink :jquery-ui.js
    • Images
    • Scripts
      • (OOB Visual Studio Files)
      • Jquery* (Files)
      • ReorderLinks.js
      • WPReorder.js
    • AppManifest.xml
    After the project is created, Visual Studio 2012 will add some default files and pages that can be used to create our app.
    For this example the AppManifest.xml was changed so that the start page points to our custom view.
    The second action is to create the custom list called “ReorderLinks”.
    After the support files are created, you will be able to make some changes in the list definition and properties, like adding fields.
    One option the graphical designer doesn’t offer is the definition of the default value for the fields, in this example “OrderLink”; for that you will need to change the Schema.xml file and include a default value element for the field (for example, <Default>0</Default>).
    Another option of the Visual Studio 2012 list management is the definition of the views and the fields to be displayed.
    Here it is very useful to define which one should be the default.
    For this example 2 views were created:
    • All Items
    • Reorder
    One thing that should be decided after the creation of the views is how each view will be displayed; for example, you are able to use a JavaScript file to add your custom view (like the example below) or to use the .XSL to customize the look and feel of the view.
    For this example I have defined that I don’t want to use the OOB “main.xsl” but a custom JS file called “ReorderLinks”.
    Another thing that was included in the schema of the list was the inclusion of the jQuery support files so they are accessible in the “SharePoint-Hosted” site.

    After the definition of the list, views and support files, we can start to create the code that will change the look and feel of the “Reorder” view.
    SharePoint 2013 has a new class that can be used to override the existing style and use the context to access some list content.
    You can find an example in the following Microsoft article.
    This example creates the HTML tags to include a custom CSS, builds the new look and feel, and includes the jQuery plug-in “Accordion” in the new view.

    Change Look and Feel of Custom View

    The method that makes the change is CustomItem(ctx). This method receives the object ctx associated with the properties of the list; if you need to inspect some of the properties you can use “Chrome > Sources > Reorder.aspx”, which gives a view of the ctx (list) properties.

    Validate Permissions

    This code will validate whether the user has permission to edit the list content and use functionality such as add/edit/delete/reorder of the list.

    Reordering of the Items

    The next piece of code updates the ordering of the list with a new method; my recommendation is to use “SP.SOD.executeFunc” to ensure the SharePoint client object model files are loaded in the page.
    By default the values are 0; after reordering using drag and drop, the items will have a different ordering.

    Delete of Item

    For the delete action, a jQuery function was created to delete the DIV tag of the ListItemID.

    Change the View

    Since we are working with a list, we are able to select the ribbon option associated with the list and change to a different view.

    Create a Client Web Part

    This Client Web Part will display the content of the “Reorder Links” list from the “SharePoint-Hosted” app in the production SharePoint site using an IFrame.
    This Web Part will have the “Accordion” look and feel but will not offer the administration actions, because it is rendered in an IFrame.
    The first thing done in Visual Studio was to include a Client Web Part feature.
    This feature will create an ASPX page with some out-of-the-box HTML tags and references.
    For this example a new file “WPReorder.js” was included, which contains the script to display the content and the DIV tag associated with the accordion.
    For the Client Web Part example, some changes were made in the way the data is retrieved: the jQuery call now uses a REST URL (for example, an endpoint of the form …/_api/web/lists/getbytitle('ReorderLinks')/items).

    You can use Fiddler to make calls to the REST URL and validate the content of the JSON and property data it returns.
    For the output, a data object was created in which all items are listed and the necessary tags are generated.
    After our SharePoint app is deployed, you will be able to add the app part to your production SharePoint page as an “App Part” called Reorder Links.
    After you add the Web Part, it will make the call to the “ReorderLinks” list using cross-domain code and display the data in the production SharePoint site.
    The solution can be accessed via the following link.
    There are quite a few things touched on in this article that you need to be aware of when you are using SharePoint apps, but there wasn’t space to explain them all here.

    New from the ALM Rangers – Better Unit Testing with the Fakes framework

    This is Part 1 in a Series about the Microsoft Agile SDLC

    Unit Testing – With Fakes

    For modern development teams, the value of effective and efficient unit testing is something everyone can agree on.

    Fast, reliable, automated tests that enable developers to verify that their code does what they think it should, add significantly to overall code quality. Creating good, effective unit tests is harder than it seems though.

    A good unit test is like a good scientific experiment: it isolates as many variables as possible (these are called control variables) and then validates or rejects a specific hypothesis about what happens when the one variable (the independent variable) changes.

    Creating code that allows for this kind of isolation puts strain on the design, idioms, and patterns used by developers. In some cases, the code is designed so that isolating one component from another is easy.

    However, in most other cases, achieving this isolation is very difficult. Often, it’s so difficult that, for many developers, it is unachievable.

    First included in Visual Studio 2012, Microsoft Fakes helps you — our developers — cross this gap. It makes it easier and faster to create well-isolated unit tests when you do have systems that are “testable,” letting you focus on writing good tests and not on test plumbing.

    It also enables you to isolate and test code that is not traditionally easy to test, by using a technology called Shims. Shims use runtime interception to let you detour around challenging dependencies and replace them with something you can control.
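
    To make this concrete, here is a minimal sketch (my own example, not taken from the guidance document) of a Shim detouring DateTime.Now. It assumes a test project in which a Fakes assembly has been generated for System (right-click the System reference and choose Add Fakes Assembly), which is what produces the System.Fakes.ShimDateTime type used below.

    using System;
    using Microsoft.QualityTools.Testing.Fakes;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    public static class AccountExpiry
    {
        // The hard dependency on the system clock makes this difficult to test deterministically.
        public static bool IsExpired(DateTime expiryDate)
        {
            return DateTime.Now > expiryDate;
        }
    }

    [TestClass]
    public class AccountExpiryTests
    {
        [TestMethod]
        public void IsExpired_ReturnsTrue_WhenClockIsPastExpiry()
        {
            // ShimsContext limits the detour to the lifetime of this using block.
            using (ShimsContext.Create())
            {
                // Detour DateTime.Now so the "current" time becomes the control variable of the test.
                System.Fakes.ShimDateTime.NowGet = () => new DateTime(2014, 1, 1);

                Assert.IsTrue(AccountExpiry.IsExpired(new DateTime(2013, 12, 31)));
            }
        }
    }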

    As we have mentioned, being able to create this control variable is imperative when creating high-quality, fast-running unit tests.

    Shims provide a very powerful capability that will let you circumvent all kinds of roadblocks when unit testing your code. As with all powerful tools, there are a number of patterns, techniques and other “gotchas” that can take time to learn.

    This guidance document provides you with a jump-start on acquiring that knowledge by sharing a large number of examples and techniques for effectively using Microsoft Fakes in your projects.

    We are happy to introduce this excellent guidance document produced by the Visual Studio ALM Rangers.

    We are sure that it will help you and your team realize the power and capabilities Microsoft Fakes provides you in creating better unit tests and better code.

    FREE Ebook : http://aka.ms/ebookFakes

    Publishing apps for Office and SharePoint to Windows Azure Websites

    This post will focus on provider-hosted apps for SharePoint and apps for Office. Provider-hosted (as opposed to SharePoint-hosted or Autohosted) means that the developer is responsible for hosting the web content – which is precisely where Azure Websites can help. At the end of the post, I will also look at advanced topics, including options for publishing to a non-Azure environment (like an on-premise server).

    Direct web publishing to Azure

    Creating a profile

    Suppose you have an app for SharePoint or an app for Office that you’re ready to publish for the first time. To begin publishing your app, choose the app for SharePoint or app for Office project, and choose “Publish”.

    Figure 1. Publish menu in Solution Explorer

    A guided publishing experience will appear, as shown below.

    Figure 2. Guided publishing experience

    For a new project, there is no current publishing profile. You can create one by selecting <New…> from the profile dropdown, which will open the following dialog box.

    Figure 3. Creating a new publishing profile

    If you’re publishing to Azure, choose the “download your publishing profile” link, and you’ll be redirected to the Azure portal. There, if you have not already, you can create a new website by choosing the +NEW at bottom-left corner of the portal. The bottom portion of the screen will expand, allowing you to create a new website via the Quick Create or Custom Create menu items.

    Figure 4. Creating new website on Azure portal

    Once the website entity is created, choose it from the list of websites to reveal the website details. Then choose Download the publish profile and save the profile to your computer. The profile contains all the information necessary to deploy your web content, including any auxiliary information like linked database connections.

    Figure 5. Downloading the publish profile from the Azure portal

    Back in the Visual Studio dialog box, and with the import publishing profile option still selected, choose the “browse” button and browse to the newly-downloaded file. Depending on the type of app:

    • For apps for Office, the profile is now complete.
    • For apps for SharePoint, you can now configure the Client ID and Client Secret on the second page of the wizard. These values uniquely identify your app to SharePoint, and allow the app to access SharePoint data. Client IDs and Secrets are generated and registered automatically when you debug your app, but they must be registered in a more permanent fashion when publishing the app. To do so:

    At the completion of either registration process, you will be granted a Client ID and Client Secret. Transfer those values into the Profile-creation dialog, and then choose Finish.
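
    As a side note on how the app consumes these values at run time: the provider-hosted web project that Visual Studio generates includes a TokenHelper class, which reads the Client ID and Client Secret from the appSettings in web.config and exchanges the context token posted by SharePoint for a client context. A rough sketch of the typical call pattern (assuming the generated TokenHelper class and an ASP.NET Web Forms page) looks like this:

    using System;
    using Microsoft.SharePoint.Client;

    public partial class Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // TokenHelper validates the context token using the ClientId/ClientSecret appSettings
            // and returns a CSOM ClientContext scoped to the host web.
            string spHostUrl = Request.QueryString["SPHostUrl"];
            string contextToken = TokenHelper.GetContextTokenFromRequest(Request);

            using (ClientContext clientContext =
                TokenHelper.GetClientContextWithContextToken(spHostUrl, contextToken, Request.Url.Authority))
            {
                clientContext.Load(clientContext.Web, w => w.Title);
                clientContext.ExecuteQuery();
            }
        }
    }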

    Deploying and packaging

    Once the profile is set, the publishing buttons will activate. You now have a choice of deploying the web project and/or packaging the app. When publishing for the first time, you will need to do both – but it generally makes sense to start with deploying the web project first.

    Deploy your web project

    Deploying your web project is exactly what it sounds like: it will publish the entire contents of your web project (but not the SharePoint app or Office manifest) to the web. To do this, choose deploy your web project and you will see the familiar web publishing experience – complete with Preview, deployment settings, and more. The Connection tab has been pre-filled with information from your publish profile, and you can go to Settings to customize the publish configuration and options like Remove additional files at destination. Note that if your project requires a database, you can set it on the Settings page of the Wizard – and that, if your publish profile came from Azure, you can simply choose the database from the dropdown list.

    Figure 6. Deploying a web project to Azure

    The Preview functionality is helpful to ensure you’re publishing the right set of files. By choosing a file in the Preview list, you can see the impending changes that you’re about to commit to your live site.

    Figure 7. Preview functionality in Visual Studio

    Packaging your app

    Once the web project is deployed, packaging the app is designed to be simple, and most fields should be pre-populated. If you used a publishing profile, the URL will already be pre-populated, though you’ll need to change the URL from “http” to “https”. Note that with Azure Websites, https hosting is automatically included for any website hosted on *.azurewebsites.net (for custom domains or other hosting providers, you may need to follow additional steps).

    For apps for Office, that’s all you need: Just choose Finish, and a manifest file that points to your live web content will get generated for you. For apps that you wish to sell on the Office Store, see the next section. Otherwise, if it’s just an in-house app, you can upload the app to a file share or to a corporate catalog.

    For apps for SharePoint, you will need to provide or confirm the Client ID, which you may have already entered during the profile-creation step. After that, click “finish” – and an app package will get created for you. Again, see the next section for apps headed for the Office Store. Otherwise, if you only intend to distribute the app to users of your SharePoint site, follow the documentation for uploading the app to a SharePoint corporate catalog.

    Publishing to the Office Store

    After your app package (apps for SharePoint) or manifest file (apps for Office) is created, you can use the Visit the Seller Dashboard button to get started with publishing to the Office Store.

    For apps for Office, you can also run your app through a validation utility, which will catch common mistakes (like not specifying required information in the manifest). This will save you time and hassle when submitting the app to the Store.

    Figure 8. Validation utility for apps for Office

    Upgrading an app

    When it comes time to upgrade an existing app that you have already published, what steps do you need to take?

    For both apps for Office and apps for SharePoint, if all that you’ve updated is just in the web project, you can just re-publish the web content via the Deploy your web project button. These changes will go live immediately, and you’re fully in control of deploying these whenever you’d like.

    If you made changes to the app manifest (apps for Office and apps for SharePoint) or if you have modified any SharePoint artifacts (lists, event receivers, or anything outside the web project), you need to re-publish those artifacts instead via the Package the app button. If your app is listed on the Office Store, you will then need to re-submit the updated app package or app manifest to the Store, so it may take a few days before those changes go live – and remember that applying an update is at the discretion of the user.

    In general, remember to be considerate of the upgrade impact when modifying anything outside of the web project. Especially for apps for SharePoint, which have a more involved upgrade process, see the Apps for SharePoint update process article for an in-depth upgrade discussion and for guidance on how to avoid breaking older app packages when deploying new web artifacts.

    Advanced topics

    Specifying multiple publish environments

    One common request we heard is to publish an app to different environments. For example, one might want to publish to a “staging” environment first, ensure that the app works properly, and only then publish to “production”.

    With the new Publish experience, switching between multiple environments is only a dropdown away. Each publish profile remembers its own URL, Client ID, and Secret, so publishing to a different profile is as easy as changing the profile dropdown and choosing the appropriate “Deploy your web project” and “Package the app” buttons.

    Figure 9. Publishing to multiple environments

    Configuring Client Secret (or other environmental variables) in the Azure Portal

    Sometimes, the “Client Secret” for the production app might be a closely-guarded secret. As a developer, you might have the ability to publish to the website, but you might not have access to the Client Secret itself. The same thing might be true for any other such variables.

    One way to solve this scenario is to have your Azure account Admin manage these environmental variables through the Azure Portal. For each Azure Website, it is possible to have the Client Secret – or any other variables – be set via the “app settings” section of the Configure tab. The “app settings” values take precedence over values in Web.config, so you get the best of both worlds – your local F5 scenario continues to work as before, yet your published app can make use of a Client Secret that you might not even have access to.
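
    For completeness, here is a minimal sketch of how the published code can pick that value up. It assumes the secret is stored under an appSettings key named ClientSecret (the key used by the SharePoint app project templates); on Azure Websites, a value set under “app settings” in the portal overrides the Web.config entry at run time, so no code change is needed between environments.

    using System.Configuration;

    public static class AppSecrets
    {
        // Locally this returns the value from <appSettings> in Web.config (the F5 scenario).
        // On the published Azure Website it returns the value configured in the portal,
        // because portal app settings take precedence over Web.config at run time.
        public static string ClientSecret
        {
            get { return ConfigurationManager.AppSettings["ClientSecret"]; }
        }
    }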

    Figure 10. Configuring a Client Secret in the Azure portal

    Deploying outside of Azure (or to a local IIS server)

    If you need to deploy to a non-Azure hosting provider – particularly if you are publishing to an on-premises machine – you can still use the many improvements to the app-publishing experience.

    During profile-creation, choose the Create new profile radio button.

    Then, once you are ready to deploy your web content, enter the Connection credentials in the “Publish Web” wizard.

    The rest of the flow should be the same. Remember to ensure that your hosting server supports the HTTPS protocol.

    Creating a Web Deploy Package

    An alternate, but similar, case for publishing to a local IIS server is when only an IT admin has the ability to publish a website. To simplify deployment, you can provide the IT admin with a Web Deploy Package – a .zip file that contains all web artifacts.

    To do this, create a new profile rather than importing one from Azure. In the case of an app for SharePoint, you will also need to fill in some dummy Client ID and Secret values.

    Now go to Deploy your web project – but be sure to choose Web Deploy Package as the publish method in the “Connection” tab.

    Figure 11. Creating a web deploy package

    Choose a package location (any new folder will do) and proceed with the wizard. At the end, a set of deploy scripts and a .zip file with your web content will be generated.

    Figure 12. Web deploy package files

    Your IT Admin should be able to take things from here (registering the app with SharePoint and providing the Client ID and Secret into the deployment scripts). Once the web content is deployed, ask your admin to provide you with the Client ID (the secret is not needed) and proceed with the “Package the app” step. You can then send the app package – now containing the SharePoint artifacts, and pointing at the live web content – back to the IT admin to deploy to SharePoint.


    The new App Model – Explained

    A Brief History of Personal Applications

    In its youth, Microsoft was true to its name as a supplier of microcomputer software. In those days, microcomputer operating systems (such as MS-DOS) were intended for a single user running one application at a time. There were no limits on what an application could do; it could take over any part of the system and could interact with other applications as it wished. There were no rules.

    Figure 1 – Classic Microcomputer Application Model


    This model persisted even as users began to run multiple programs in window-based systems such as Microsoft Windows (from Windows 1.0 through Windows ME) and Apple’s classic Mac OS (through OS 9). Any application could easily destabilize the system, and in practice, the more applications that were installed the less stable the computer became. Applications would often interfere with one another, or inadvertently break parts of the operating system, creating endless headaches for users and the people who support them.

    Meanwhile, multi-user operating systems such as Unix or Digital’s VMS provided a level of application isolation to protect the environment from rogue or buggy applications (and, let’s face it, all applications have bugs.) This is shown in Figure 2. This model solidifies the line between the applications and the runtime environment, and forces applications to access resources using an Application Programming Interface or API.

    Figure 2 – Classic Multiprogramming Application Model


    Eventually, personal operating systems moved to this model; however, it was a painful and lengthy experience. Microsoft graduated to this model with Windows NT 3.1, having hired Dave Cutler, designer of Digital’s VMS system, to design it. Apple followed suit with OS X, which is based on Bell Labs’ UNIX multiprogramming OS. Application developers complained loudly – suddenly there were rules, and the OS enforced them strictly. Any application which depended on breaking these rules had to be rewritten, and there were compatibility challenges for a while until everybody got used to the new way of doing things.

    This application isolation isn’t perfect, however. Although applications can’t take over the operating system or reach into each other’s memory, for example, they can still render the system unstable, usually by updating content or data that they shouldn’t, or by leaving behind content even after they’ve been removed. The mobile phone industry took this into account and built even more robust app isolation into systems such as iOS, Android, and Windows Mobile. (These same principles are also present in the new Windows 8 App model, which we’ve done a lot of work with at BlueMetal Architects). We could call these modern applications, and they share certain attributes:
    • Complete isolation of applications so they can’t overstep their bounds or destabilize the system
    • Central distribution of applications so we can vet their quality (and, in some cases, boost profits by limiting distribution channels to a single “app store”)
    • Users and system administrators can effectively control what applications can do when they’re installed
    • The system can completely clean up after applications when they’re removed, rather than leaving this up to the application developer, to ensure they don’t leave data or configuration changes behind

    These capabilities result in a new level of reliability and simplicity. Applications rarely interact badly or destabilize the system, and when a user wants an application to go away, it doesn’t leave a mess behind.

    Microsoft’s Productivity Suite

    Just as Microsoft took the industry by storm in the personal OS market in the 1990s, it has all but taken over the productivity and collaboration software market, and remains a strong leader in this area.

    Microsoft Office and SharePoint are all but ubiquitous, and so far, would-be competitors such as Google and Atlassian have had only limited success in dethroning them. However, when it comes to extending Office and SharePoint, the application model has suffered from the same problems we saw in MS-DOS.

    An application or SharePoint “solution” can pretty much do as it pleases, and developers need to follow best practices carefully in order to avoid destabilizing the system. This has made many organizations reluctant even to install such customizations.

    And anyone who’s ever had Microsoft Outlook crash complaining about an “add-in” can attest to the fact that this problem extends to the client side of the Office product line.

    Microsoft attempted to address this by adding “sandboxed solutions” to SharePoint 2010, but a number of problems limited their adoption (including severely restricted functionality and scalability issues).

    With the advent of Office and SharePoint 2013, Microsoft introduced a new app model to finally address these issues by applying the same principles used in mobile apps.

    This is a third app model, alongside its Windows Phone and Windows 8 app models. Adoption has been slow, and it will probably take years and multiple iterations of both the applications and the platform to get there.

    App developers are already complaining, much as they did when Windows NT prohibited them from taking over the display drivers or writing to memory unimpeded.

    Office 365 is a primary driver for these apps, but enterprises that distribute Office and host SharePoint on premises can benefit from the new model as well.

    No longer will SharePoint administrators toss and turn at night wondering if the application they just installed on SharePoint will bring down the whole server farm. Yet it’s likely to be a long road.

    This article is about this third app model and how it works at an architectural level. The sections that follow describe three core design principles that provide the application isolation and trust of a modern application in SharePoint and Office.

    At BlueMetal Architects, we’ve been working on how to apply these principles in our client solutions, even outside of the app model. I’ll be talking about this at SPTechCon Boston and SharePoint Connections later this year, and will show attendees how to apply these principles in any version of SharePoint to get the same benefits and to make the transition to the app model easier.

    Principle #1 – Isolate Apps in the Browser

    “Good fences make good neighbors”
    – Anonymous

    The first element is that application integration no longer happens in SharePoint or Office, but in the web browser. Yes, you read that right – even the Office apps run in the browser; unbeknownst to users, programs such as Word and Excel 2013 quietly host a browser window to run these apps.

    There are a couple of advantages to this:

    1.Web browsers are already good at isolation. This was forced by cross-site scripting attacks and other problems that plagued early browsers; now there are standards which work consistently across all browsers to prevent, say, a casual gaming site from reaching into your online banking site to play games with your personal finances.

    2.These applications are all potentially browser based anyway. SharePoint is primarily accessed via a web browser, and the new Office Web Applications are on the rise, allowing Word, Excel, and PowerPoint to run in the browser.


    Outlook has had a browser-based version for years, so running apps in the browser allows them to work with both the rich client and browser-based versions of Office.

    SharePoint doesn’t run the apps as it has in the past; instead it stores metadata about the apps and adds them to the web page so they can run in the browser.

    These apps are rendered in IFrames, which may point to a special, isolated SharePoint site or to any page on the web. In order to isolate the apps, they are always given URLs on a different host name than the SharePoint one; this causes the browser to isolate the apps just as it would isolate any two unrelated web sites.
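    This isolation is easy to see from inside an app: the host web and app web URLs arrive on the app page as query string tokens ({StandardTokens} in the app manifest), and the app web always lives on a separate app domain. Below is a minimal sketch of how an app page might read them; the helper function is something you would write yourself, not a SharePoint API, and the example URLs are purely illustrative.

    ```javascript
    // Read the host web and app web URLs that SharePoint appends to the app's
    // start page via the {StandardTokens} query string. This helper is
    // illustrative, not part of any SharePoint library.
    function getQueryStringParameter(name) {
        var params = document.location.search.substring(1).split('&');
        for (var i = 0; i < params.length; i++) {
            var pair = params[i].split('=');
            if (pair[0] === name) {
                return decodeURIComponent(pair[1]);
            }
        }
        return null;
    }

    var hostWebUrl = getQueryStringParameter('SPHostUrl');   // e.g. https://intranet.contoso.com/sites/team
    var appWebUrl  = getQueryStringParameter('SPAppWebUrl'); // e.g. https://apps-1234abcd.contosoapps.com/sites/team/MyApp
    ```

    Notice that the two example URLs sit on different host names – that difference alone is enough for the browser to wall the app off from the host site.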

    Figure 3 – SharePoint App Model

    In practice, application code runs as Javascript in the browser itself, or on an external server outside of SharePoint. SharePoint can host all-Javascript apps (these are called “SharePoint-hosted” apps), or they can be hosted by an external server (so-called “cloud” or “provider-hosted” apps).

    Regardless of whether the code runs as browser script or on an external server, it can only access SharePoint using services. APIs such as the Client Object Model are available inside the app, but ultimately they access SharePoint indirectly, via web services. These services control access to SharePoint, so apps can only put content where they’re supposed to.
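    As a minimal sketch of what that looks like in a SharePoint-hosted app, the snippet below uses the JavaScript client object model to read the title of its own app web; it assumes SP.Runtime.js and SP.js have already been loaded on the page, and every call ultimately travels over SharePoint’s web services.

    ```javascript
    // Minimal JSOM sketch for a SharePoint-hosted app page
    // (assumes SP.Runtime.js and SP.js are already loaded).
    var context = SP.ClientContext.get_current(); // client context for the current app web
    var web = context.get_web();

    context.load(web, 'Title');                   // queue a request for the web's Title property
    context.executeQueryAsync(
        function () {
            // Success: the batched request went out to SharePoint's web services and came back.
            console.log('App web title: ' + web.get_title());
        },
        function (sender, args) {
            // Failure: the service refused or could not complete the request.
            console.log('Request failed: ' + args.get_message());
        }
    );
    ```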

    Principle #2: Be Careful How You Trust

    “With great power comes great responsibility.”
    – Spiderman


    Traditionally, applications run with the permission of their user. For example, if you try to open a document with Microsoft Word, and you don’t have permission to the document, the operating system won’t let you open it and Word will bubble up the error message.

    This model falls over when an application wants to do something that the end user isn’t allowed to do. For example, a workflow application might need to log something to an audit trail, yet it might be important not to let the user running the application access the audit trail through other means. SharePoint solves this with something called SPSecurity.RunWithElevatedPrivileges(); code that runs inside this method is suddenly omnipotent.

    Being omnipotent might sound like fun, but it’s a lot of responsibility. It used to be that users were pretty much omnipotent when they ran Windows applications; Windows introduced User Account Control (UAC) to address this. This feature prompts users when applications want elevated privileges, but it isn’t ideal since users often don’t have enough information to make an informed choice.

    The SharePoint app model provides much more selective access. When a user installs an app, the app requests a specific set of permissions and the installing user grants or denies them (the app can never be granted more than that user’s own permissions). SharePoint then gives the app its own identity and uses its existing permission system to enforce access control. Thus you can give an app permission to the audit list when you install it, and still deny end users access to the same list.

    SharePoint 2013 has added a vast library of services, which are available through a client-side API called the “Client Object Model” or directly as RESTful services. The app’s identity is conveyed to these services by SharePoint’s cross-domain library when app code runs in the web browser, and by OAuth when the app runs on an external server. OAuth is a standard that allows an application to access services on behalf of a user without having access to the user’s password or other shared secret; it’s based on work by Flickr, Google, and Yahoo, who all needed a way to allow “apps” to access user content without giving away the user’s password.
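    As an illustration, here is a hedged sketch of browser-side app code calling back into the host web through the cross-domain library (SP.RequestExecutor.js), which carries the app’s identity on the call; hostWebUrl and appWebUrl are assumed to have been read from the query string tokens shown earlier.

    ```javascript
    // Hedged sketch: read the host web's title from app web script via the
    // cross-domain library. Assumes SP.RequestExecutor.js has been loaded and
    // that hostWebUrl / appWebUrl came from the SPHostUrl / SPAppWebUrl tokens.
    var executor = new SP.RequestExecutor(appWebUrl);
    executor.executeAsync({
        url: appWebUrl +
            "/_api/SP.AppContextSite(@target)/web/title?@target='" + hostWebUrl + "'",
        method: 'GET',
        headers: { 'Accept': 'application/json; odata=verbose' },
        success: function (response) {
            var data = JSON.parse(response.body);
            console.log('Host web title: ' + data.d.Title);
        },
        error: function (response, errorCode, errorMessage) {
            console.log('Cross-domain call failed: ' + errorMessage);
        }
    });
    ```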

    Principle #3: Clean Up Your Mess

    “Keep your house and its surroundings pure and clean. This hygiene will keep you healthy and benefit your worldly life.”
    – Sri Sathya Sai Baba

    The cottage industry of Windows clean-up software points to a major problem: applications don’t clean up after themselves. Even if you uninstall that “cute kitty” game that someone downloaded, there’s a good chance nobody will remember to scoop out the litter box.


    Traditional SharePoint solutions are like that: they routinely leave web parts, SharePoint data, and other junk lying around. Part of it is developer sloppiness, but Microsoft doesn’t make it especially easy to find and remove the artifacts users have created with your solution, so it’s no wonder this happens so often.

    SharePoint apps are much better in this regard. By default, apps store their content in an isolated site called an “app web” – delete the app, and SharePoint will delete the app web and everything in it. SharePoint will also police web pages and remove any leftover web parts or other references to the app.

    Q: So how do we get there?
    A: Start thinking like Office 365, even if you’re on premises

    The new app model has gotten a slow start. This is partly due to the time it takes for enterprises to adopt SharePoint 2013, partly due to limitations in the model, and partly due to the fact that you pretty much have to rewrite applications to adopt the new model.

    For example, a traditional “web part” solution would be written using C# running on the SharePoint server, accessing the SharePoint server API.

    The equivalent app might be written in Javascript running on the web browser, accessing the SharePoint client API. Unless a SharePoint solution is developed with the app model in mind, it’s not going to be an easy transition.
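    As a rough sketch of that shift, the snippet below does what a simple server-side web part might once have done with SPList, but from the browser using the SharePoint 2013 REST API. The “Announcements” list name is purely illustrative, and jQuery plus the standard _spPageContextInfo page variable are assumed to be available.

    ```javascript
    // Client-side rough equivalent of a simple server-side web part:
    // read a few items from a hypothetical "Announcements" list over REST.
    // Assumes jQuery and _spPageContextInfo are available on the page.
    $.ajax({
        url: _spPageContextInfo.webAbsoluteUrl +
            "/_api/web/lists/getbytitle('Announcements')/items?$select=Title&$top=5",
        headers: { 'Accept': 'application/json;odata=verbose' }
    }).done(function (data) {
        data.d.results.forEach(function (item) {
            console.log(item.Title); // a real app part would render this into its own page
        });
    }).fail(function () {
        console.log('Could not load announcements');
    });
    ```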


    If an entire app is written to run in the web browser, it can easily be hosted on premises or in Office 365, but browser script only goes so far. If the app needs to run on an external server, someone has to host it.

    On premises, this is left as an exercise for the IT department; there are no special tools for it. ISVs and service providers need to supply their own hosting.

    Office 365 provides an option for “autohosted” apps which are automatically provisioned in an Azure environment; this seems like a great idea, but at this writing, “autohosted” apps aren’t fully enabled and can’t be sold in the Office Store.

    In addition, since they can only talk to SharePoint using web services (either directly or via a client API), there are many things they simply can’t do. This rules out a whole group of SharePoint solutions that do administrative chores or reach under the covers to change the way SharePoint data is stored. In general, the app model is very much a “version 1” implementation.

    That said, it has great promise. It’s like going from Windows 95 to Windows 8… it’s not going to happen overnight, but it’s worth it. Microsoft might want to turn off traditional solution capabilities, and has already hinted at that by “deprecating” sandboxed solutions.

    (Sandboxed solutions still work for now. The user code service, one part of sandboxed solutions, is the piece that’s likely to be phased out. Other sandboxed solution features, such as the ability to deploy content to SharePoint using the Solutions Gallery, are key to supporting site templates and even the new Design Manager, and thus are unlikely to be removed from the product.)

    So what’s an IT shop to do?

    The answer is to start thinking as if you’re Office 365. Maybe you’re already on Office 365, in which case this is easy!

    But even if you’re on premises, start to think as if you might be going to Office 365 soon. Select 3rd party applications that run on Office 365, and start writing your own solutions to run outside of SharePoint, ideally in Javascript in the browser.

    SharePoint 2010 provided enough services – especially when supplemented with client libraries such as SPServices – that you can start developing code that is much closer to the app model without waiting for SharePoint 2013.
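    For example, here is a hedged sketch of the kind of client-side code you could already write against SharePoint 2010 using the open-source SPServices jQuery library, which wraps the platform’s SOAP web services; the “Announcements” list name is illustrative.

    ```javascript
    // Hedged sketch: query a hypothetical "Announcements" list from the browser
    // using SPServices against SharePoint 2010's SOAP web services.
    // Assumes jQuery and jquery.SPServices.js are loaded on the page.
    $().SPServices({
        operation: 'GetListItems',
        listName: 'Announcements',
        CAMLRowLimit: 5,
        async: true,
        completefunc: function (xData, Status) {
            $(xData.responseXML).SPFilterNode('z:row').each(function () {
                console.log($(this).attr('ows_Title')); // column values come back as ows_* attributes
            });
        }
    });
    ```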

    The reason for this isn’t so much that you’ll have to go to Office 365; there will probably always be plenty of options for hosting SharePoint, including on premises.

    The reason is that, over time, you should strive to make your on-premises hosting more like Office 365, for the same reasons Microsoft hosts SharePoint this way:

    •The application isolation and clean-up make SharePoint more stable and easier to manage

    •Application distribution is controlled centrally, and users can install apps without IT assistance

    •Isolated apps make it easier to upgrade SharePoint; you don’t have to worry as much about how apps interact with the platform if they’re only accessing web services and not actually running on the server

    •The more granular permission model is more secure; there are limits on what a rogue app could do, and apps are never omnipotent

    Following these practices, you can start preparing to adopt the new app model when you’re ready to, regardless of your hosting environment.

    By moving towards the new model, you’ll receive the benefits of stability and manageability gradually over time, and will open up more flexible hosting options for the future.

    SharePoint Samurai