Category Archives: Office 365

How To : SAP Integration with .Net 4.0 (SAP Connection Manager) & SharePoint

This is a simple C# class library project that connects .NET applications to SAP.


 

This component internally implements SAP .NET Connector 3.0. The SAP .NET Connector is a development environment that enables communication between the Microsoft .NET platform and SAP systems.

This connector supports RFCs and Web services, and allows you to write different kinds of applications, such as Web Forms, Windows Forms, or console applications, in Microsoft Visual Studio .NET.

With the SAP .NET Connector, you can use all common programming languages, such as Visual Basic .NET, C#, or Managed C++.

Features
Using the SAP .NET Connector you can:

Write .NET Windows and Web form applications that have access to SAP business objects (BAPIs).

Develop client applications for the SAP Server.

Write RFC server applications that run in a .NET environment and can be started from the SAP system.

Following are the steps to configure this utility on your project

Download and extract the attached file and place it on your machine. This package contains 3 libraries:

SAPConnectionManager.dll
sapnco.dll
sapnco_utils.dll

Now go to your project and add references to all three libraries. sapnco.dll and sapnco_utils.dll are the built-in libraries used by the SAP .NET Connector; SAPConnectionManager.dll is the main component, which provides the connection between .NET and SAP.

Once the above steps are complete, you need to add certain entries related to the SAP server to your configuration file. Here are the sample entries to maintain in your own project; you only need to change the values to match your SAP system, while the keys remain the same. (A sketch of how these settings are typically consumed follows the sample.)

<appSettings>
  <add key="ServerHost" value="127.0.0.1"/>
  <add key="SystemNumber" value="00"/>
  <add key="User" value="sample"/>
  <add key="Password" value="pass"/>
  <add key="Client" value="50"/>
  <add key="Language" value="EN"/>
  <add key="PoolSize" value="5"/>
  <add key="PeakConnectionsLimit" value="10"/>
  <add key="IdleTimeout" value="600"/>
</appSettings>
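The attached SAPConnectionManager.dll already provides the SAPSystemConnect class used in the test code below, so you do not have to write this yourself. Purely for context, here is a minimal sketch of what such a destination configuration typically looks like with SAP .NET Connector 3.0, assuming it implements NCo's IDestinationConfiguration interface and reads the appSettings keys shown above (the actual internals of the attached component may differ):

using System.Configuration;
using SAP.Middleware.Connector;

// Sketch only: maps the appSettings entries above to RFC destination parameters.
public class SAPSystemConnect : IDestinationConfiguration
{
    public RfcConfigParameters GetParameters(string destinationName)
    {
        // "Dev" is the destination name requested later via GetDestination("Dev")
        if (destinationName != "Dev") return null;

        var parameters = new RfcConfigParameters();
        parameters.Add(RfcConfigParameters.AppServerHost, ConfigurationManager.AppSettings["ServerHost"]);
        parameters.Add(RfcConfigParameters.SystemNumber, ConfigurationManager.AppSettings["SystemNumber"]);
        parameters.Add(RfcConfigParameters.User, ConfigurationManager.AppSettings["User"]);
        parameters.Add(RfcConfigParameters.Password, ConfigurationManager.AppSettings["Password"]);
        parameters.Add(RfcConfigParameters.Client, ConfigurationManager.AppSettings["Client"]);
        parameters.Add(RfcConfigParameters.Language, ConfigurationManager.AppSettings["Language"]);
        parameters.Add(RfcConfigParameters.PoolSize, ConfigurationManager.AppSettings["PoolSize"]);
        return parameters;
    }

    // No dynamic configuration changes in this sketch.
    public bool ChangeEventsSupported() { return false; }
    public event RfcDestinationManager.ConfigurationChangeHandler ConfigurationChanged;
}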

To test this component, create a Windows application. Add references to sapnco.dll, sapnco_utils.dll, and SAPConnectionManager.dll in your project.

Paste the below code into your Form Load event:

SAPSystemConnect sapCfg = new SAPSystemConnect();
RfcDestinationManager.RegisterDestinationConfiguration(sapCfg);
RfcDestination rfcDest = null;
rfcDest = RfcDestinationManager.GetDestination("Dev");

That’s it. You are now successfully connected to your SAP server. Next, you need to call SAP business objects (BAPIs), extract the data, and store it in a DataSet or a list, as sketched below.
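The demo code itself is not reproduced here, but a hedged sketch of such a BAPI call with SAP .NET Connector 3.0 might look like this (the BAPI name, table, and field names are purely illustrative and not part of the component; the usual System.Collections.Generic and SAP.Middleware.Connector usings are assumed):

// Illustrative only: invoke a BAPI through the destination obtained above
// and copy one column of the result table into a list.
RfcRepository repo = rfcDest.Repository;
IRfcFunction bapi = repo.CreateFunction("BAPI_COMPANYCODE_GETLIST"); // example BAPI
bapi.Invoke(rfcDest);

IRfcTable companyCodes = bapi.GetTable("COMPANYCODE_LIST"); // example table parameter
var companyNames = new List<string>();
foreach (IRfcStructure row in companyCodes)
{
    companyNames.Add(row.GetString("COMP_NAME")); // example field
}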

Demo Code available on request!!

How To : Add a Promoted Links Web Part to SharePoint 2013 App Default page

This article shows you how to add a Promoted Links web part to your app's default page, as shown in the following figure:

 

To do this, follow these steps:
Open the shortcut menu for the project, and then choose Add, New Item.

 

In the Templates pane, choose the List template, and then choose the Add button:

Enter a list name and choose the Create a non-customizable list based on an existing list type option button. Then, in its list, choose Promoted links, and choose the Finish button.

In Solution Explorer, under the list instance node, open the Elements.xml file.
Add the promoted links items as follows:
<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <ListInstance Title="MyPromotedLinks"
                OnQuickLaunch="TRUE"
                TemplateType="170"
                FeatureId="192efa95-e50c-475e-87ab-361cede5dd7f"
                Url="Lists/MyPromotedLinks"
                Description="My List Instance">
    <Data>
      <Rows>
        <Row>
          <Field Name="Title">Twitter</Field>
          <Field Name="BackgroundImageLocation">/PromotedLinksApp/Images/twitter.png</Field>
          <Field Name="Description">Muawiyah Shannak Twitter</Field>
          <Field Name="LinkLocation">https://twitter.com/MuShannak</Field>
          <Field Name="Order">1</Field>
        </Row>
        <Row>
          <Field Name="Title">Blog</Field>
          <Field Name="BackgroundImageLocation">/PromotedLinksApp/Images/blogger.png</Field>
          <Field Name="Description">Muawiyah Shannak Blog</Field>
          <Field Name="LinkLocation">http://mushannak.blogspot.com</Field>
          <Field Name="Order">2</Field>
        </Row>
        <Row>
          <Field Name="Title">Linkedin</Field>
          <Field Name="BackgroundImageLocation">/PromotedLinksApp/Images/linkedin.png</Field>
          <Field Name="Description">Muawiyah Shannak Linkedin</Field>
          <Field Name="LinkLocation">http://ae.linkedin.com/in/shannak</Field>
          <Field Name="Order">3</Field>
        </Row>
      </Rows>
    </Data>
  </ListInstance>
</Elements>
In Solution Explorer, under the Pages node, open the Default.aspx file. Add the following tags inside the PlaceHolderMain placeholder:
<WebPartPages:WebPartZone ID="WebPartZone" runat="server" FrameType="None">
  <WebPartPages:XsltListViewWebPart ID="XsltListViewAppPromotedList"
      runat="server" ListUrl="Lists/MyPromotedLinks" IsIncluded="True"
      NoDefaultStyle="TRUE" Title="Images used in switcher"
      PageType="PAGE_NORMALVIEW" Default="False"
      ViewContentTypeId="0x">
  </WebPartPages:XsltListViewWebPart>
</WebPartPages:WebPartZone>

Deploy the solution and you will find a nice Promoted Links web part on the app default page!

A Look At : The New Search Functionality in SharePoint Online and how Developers can make use of it


 

Search functionality in SharePoint 2013 includes several enhancements, custom content processing and a new framework for presenting search result types. SharePoint Server 2013 presents a new search architecture that includes substantial changes and additions to the search components and databases.

Also, there have been significant enhancements made to the Keyword Query Language (KQL).

Some features and functionality have been deprecated from the previous version of SharePoint. The search user interface has also been improved to make search results more interactive. For example, users can rest the pointer over a search result to see a content preview in the hover panel to the right of the result.

Now let's look at Office 365 SharePoint 2013 and the admin features of its Search Service Application. It is a significant advance; nearly all the new features listed here were missing in Office 365 – SharePoint 2010. The following screen capture shows the SharePoint administration view for the Search section.

From here you can manage all aspects of the search experience for your end users, improving the relevancy of results based on your content and metadata.

Search helps users quickly return to important sites and documents by remembering what they have previously searched and clicked. The results of previously searched and clicked items are displayed as query suggestions at the top of the results page.

In addition to the default manner in which search results are differentiated, site collection administrators and site owners can create and use result types to customize how results are displayed for important documents. A result type is a rule that identifies a type of result and a way to display it.

 

Manage Search Schema

Managed properties are used to restrict search results, and present the content of the properties in search results. Crawled properties are automatically extracted from crawled content. All the changes to properties will take effect only after the next full crawl.

Under the Search Schema section, the administrator can:

  • View, create, or modify Managed Properties and map crawled properties to managed properties
  • View or modify Crawled Properties, or view crawled properties in a particular category
  • View or modify Categories, or view crawled properties in a particular category.

When creating a new managed property, 'Mappings to crawled properties' is one of the key attributes to configure for the new property.

 

 

Manage Search Dictionaries

The dictionaries are managed in the Taxonomy Term Store, grouped as follows:

  • People: Department, Job Title, Location
  • Search Dictionaries: Company Exclusions, Company Inclusions, Query Spelling Exclusions, Query Spelling Inclusions
  • System: Hashtags, Keywords, Orphaned terms

 

Manage Authoritative Pages

Search in SharePoint 2013 will analyze the collection of authoritative and non-authoritative pages to determine the ranking of search results. The authoritative sites are of two kinds:

  • Authoritative Site Pages
  • Non-authoritative Site Pages

Authoritative site pages are links that the administrator has designated as pointing to the most relevant information. There can be multiple authoritative pages in each environment, and there is an option for specifying second- and third-level authorities for search ranking. Non-authoritative site pages identify content from certain sites that should be ranked lower than the rest of the content in the index.

 

Query Suggestion Settings

SharePoint Search comprises various features that you can leverage for building productivity solutions. One of the interesting and useful capabilities is query suggestions. Query suggestions are controlled by two options:

  • Always Suggest Phrases
  • Never Suggest Phrases

Manage Result Sources

Result sources are used to scope search results and federate queries to external sources, such as internet search engines. Once a result source is defined, we can configure search web parts and query rule actions to use it.

How are result sources managed? A SharePoint Online administrator can manage result sources for all site collections and sites residing under the same tenant. A site collection administrator or a site owner can manage result sources for a site collection or a site, respectively.

SharePoint 2013 provides 16 pre-defined result sources. The pre-configured default result source is Local SharePoint Results. We can designate a different result source as the default if required.

When creating a new result source, Protocol and Query Transform are the two important parameters that tell the result source what to do in SharePoint.

Protocol – Local SharePoint for results from the index of this Search Service. OpenSearch 1.0/1.1 for results from a search engine that uses that protocol. Exchange for results from an Exchange source. Remote SharePoint for results from the index of a search service hosted in another farm.

Query Transform – Change incoming queries to use this new query text instead. Include the incoming query in the new text by using the query variable "{searchTerms}".

Use this to scope results. For example, to only return OneNote items, set the new text to "{searchTerms} fileextension=one". Then, an incoming query "sharepoint" becomes "sharepoint fileextension=one". Launch the Query Builder for additional options.

 

Manage Query Rules

Query rules conditionally promote search results and show blocks of supplementary results based on rules created in SharePoint. In a query rule, you can specify conditions and correlated actions without writing any code. Users with the site collection administrator or site owner permission level can create and manage query rules.

 

Manage Query Client Types

Query client types are one of the new search features in SharePoint 2013. A client type identifies the application a search query is sent from. Applications are prioritized by tiers, with the top tier having the highest priority. When the resource limit is reached, query throttling is turned on and the search system processes queries from the top tier down to the bottom tier.

System client types are available out of the box and cannot be deleted. We can add a custom client type by clicking New Client Type.

 

Remove Search Results

To remove data from the search results, type the URLs that need to be removed. All the URLs listed in the text box are removed from the search results immediately after the Remove Now button is clicked.

View Usage Reports

Here the administrator can see usage and search-related reports, for example Query Rule Usage by Day, Top Queries by Day, etc.

Search Center Settings

This setting maps the default search center, usually the Enterprise Search Center site that has been created to search across all SharePoint sites in the organization.

Export Search Configuration

This creates a file that includes all customized query rules, result sources, result types, ranking models, and site search settings in the current tenant (but none that shipped with SharePoint), which can then be imported into other tenants.

Import Search Configuration

If you have a search configuration you’d like to import, browse for it below. Settings imported from the file will be created and activated as part of the site. You can modify any of the settings after import.

Crawl Log Permissions

Grant users read access to crawl log information for this tenant.

Search Client Object Model

SharePoint 2013 Search includes a client object model (CSOM) that enables access to most of the Query object model functionality for online, on-premises, and mobile development. You can use the Search CSOM to create client applications that run on a machine that does not have SharePoint 2013 installed to return SharePoint 2013 Preview search results.

The Search CSOM includes a Microsoft .NET Framework managed client object model and JavaScript object model, and it is built on SharePoint 2013. First, client code accesses the SharePoint CSOM. Then, client code accesses the Search CSOM.

NOTE: Custom search solutions in SharePoint Server 2013 do not support SQL syntax. Search in SharePoint 2013 supports FQL syntax and KQL syntax for custom search solutions.

We can configure crawled and managed properties, and we can configure result sources, which replace the federated results and scopes of SharePoint Search 2010. A minimal sketch of issuing a KQL query through the Search CSOM is shown below.
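As a minimal sketch (the site URL and query text are placeholders, and authentication is omitted), a KQL query issued through the Search CSOM looks roughly like this:

using System;
using Microsoft.SharePoint.Client;
using Microsoft.SharePoint.Client.Search.Query;

class SearchSample
{
    static void Main()
    {
        // Placeholder site URL; add credentials/authentication as appropriate.
        using (var ctx = new ClientContext("https://contoso.sharepoint.com/sites/team"))
        {
            var query = new KeywordQuery(ctx)
            {
                QueryText = "sharepoint fileextension=one", // KQL, as in the result source example above
                RowLimit = 10
            };

            // Execute the search and read back the primary result table.
            var executor = new SearchExecutor(ctx);
            ClientResult<ResultTableCollection> results = executor.ExecuteQuery(query);
            ctx.ExecuteQuery();

            foreach (var row in results.Value[0].ResultRows)
            {
                Console.WriteLine(row["Title"]);
            }
        }
    }
}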

 

Introduction to Business Connectivity Services (BCS)

The SharePoint 2013 and Office 2013 suites include Microsoft Business Connectivity Services (BCS). BCS can connect to and query external data sources and return the results to the user through an external list, an app for SharePoint, or Office 2013.

With Business Connectivity Services, you can use SharePoint 2013 and Office 2013 clients as an interface into data that doesn't live in SharePoint 2013 itself. It does this by making a connection to the data source, running a query, and returning the results.

Business Connectivity Services returns the results to the user through an external list, an app for SharePoint, or Office 2013, where you can perform different operations against them, such as Create, Read, Update, Delete, and Query (CRUDQ).

Business Connectivity Services can access external data sources through the Open Data Protocol (OData), Windows Communication Foundation (WCF) endpoints, web services, cloud-based services, and .NET assemblies, or through custom connectors. OData is an open web protocol for querying and updating data; a simple illustration of an OData query follows below.

Business Connectivity Services uses SharePoint 2013 and Office 2013 as a client interface for data that doesn't reside in the SharePoint 2013 environment.
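As a simple illustration of the kind of OData request that BCS issues on your behalf once an external content type is configured (the service URL and entity below are hypothetical):

using System.Net.Http;
using System.Threading.Tasks;

class ODataSample
{
    // Query a hypothetical OData service for customers in London.
    static async Task<string> GetCustomersAsync()
    {
        using (var http = new HttpClient())
        {
            string url = "https://services.example.com/odata/Customers" +
                         "?$filter=City eq 'London'&$select=CustomerID,CompanyName";
            HttpResponseMessage response = await http.GetAsync(url);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}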

The following screen capture is the BCS features and configuration options available under the SharePoint Administration Center in the Office 365.

How To : Use the CSOM to Update SharePoint Web Part Properties


I wanted to share two methods I developed for retrieving and updating web part properties from JavaScript using CSOM in SharePoint 2013 (I haven’t seen a reference for getting a page’s web part manager through REST).

The web part ID should be available through the “webpartid” attribute included in the page markup.

The methods use the jQuery Deferred object, but that could easily be replaced with anything else that handles the asynchronous events. Using this, I'm hoping to create configurable client-side web parts, a problem I've recently had to tackle.

View on GitHub

app.js

//pass in the web part ID as a string (guid)
function getWebPartProperties(wpId) {
    var dfd = $.Deferred();

    //get the client context
    var clientContext =
        new SP.ClientContext(_spPageContextInfo.webServerRelativeUrl);
    //get the current page as a file
    var oFile = clientContext.get_web()
        .getFileByServerRelativeUrl(_spPageContextInfo.serverRequestPath);
    //get the limited web part manager for the page
    var limitedWebPartManager =
        oFile.getLimitedWebPartManager(SP.WebParts.PersonalizationScope.shared);
    //get the web parts on the current page
    var collWebPart = limitedWebPartManager.get_webParts();

    //request the web part collection and load it from the server
    clientContext.load(collWebPart);
    clientContext.executeQueryAsync(Function.createDelegate(this, function () {
        var webPartDef = null;
        //find the web part on the page by comparing IDs
        for (var x = 0; x < collWebPart.get_count() && !webPartDef; x++) {
            var temp = collWebPart.get_item(x);
            if (temp.get_id().toString() === wpId) {
                webPartDef = temp;
            }
        }
        //if the web part was not found
        if (!webPartDef) {
            dfd.reject("Web Part: " + wpId + " not found on page: "
                + _spPageContextInfo.webServerRelativeUrl);
            return;
        }

        //get the web part properties and load them from the server
        var webPartProperties = webPartDef.get_webPart().get_properties();
        clientContext.load(webPartProperties);
        clientContext.executeQueryAsync(Function.createDelegate(this, function () {
            dfd.resolve(webPartProperties, webPartDef, clientContext);
        }), Function.createDelegate(this, function () {
            dfd.reject("Failed to load web part properties");
        }));
    }), Function.createDelegate(this, function () {
        dfd.reject("Failed to load web part collection");
    }));

    return dfd.promise();
}

//pass in the web part ID and a JSON object with the properties to update
function saveWebPartProperties(wpId, obj) {
    var dfd = $.Deferred();

    getWebPartProperties(wpId).done(
        function (webPartProperties, webPartDef, clientContext) {
            //set web part properties
            for (var key in obj) {
                webPartProperties.set_item(key, obj[key]);
            }
            //save web part changes
            webPartDef.saveWebPartChanges();
            //execute update on the server
            clientContext.executeQueryAsync(Function.createDelegate(this, function () {
                dfd.resolve();
            }), Function.createDelegate(this, function () {
                dfd.reject("Failed to save web part properties");
            }));
        }).fail(function (err) { dfd.reject(err); });

    return dfd.promise();
}

HTML5 SharePoint Pic Web Part Released and Available !!

This is a Sandbox web part control to display a matrix of image thumbnails.

Use it to build a Metro-style UI or a picture gallery to show products, news, or a social team page that integrates with pictures, etc. All this from any SharePoint picture library.

Supports: SharePoint 2010 & 2013 on-premises web parts and SharePoint Online web parts.

FEATURES OF THE WEB PART (ver. 1.0)

PREVIEW EXAMPLE OF THE CONTROL

How to: Customize the SharePoint HTML Editor Field Control using ECM

You can use the HTML Editor field control to insert HTML content into a publishing page. Page templates that include a Publishing HTML column type also include the HTML Editor field control.

This editor has special capabilities, such as customized styles, editing constraints, reusable content support, a spelling checker, and use of asset pickers to select documents and images to insert into a page’s content. This topic describes how to modify some features and attributes of the HTML Editor field control.


If the content type of a page layout supports the Page Content column, you can add a Rich HTML field control to your page layout by using markup such as the following.

<PublishingWebControls:RichHtmlField id="ArticleAbstract" FieldName="ArticleAbstract" 
          AllowExternalUrls="false" 
          AllowFonts="true" 
          AllowReusableContent="false" 
          AllowHeadings="false"
          AllowHyperlinks="false"
          AllowImages="false"
          AllowLists="false"
          AllowTables="false"
          AllowTextMarkup="false" 
          AllowHTMLSourceEditing="false"
          DisableBasicFormattingButtons="false"
          runat="server"/>

In the example above, RichHTMLField is the name of the field control that provides the richer HTML editing experience. Attributes such as AllowFonts and AllowTables specify restrictions on the field.

The HTML field control allows font tags, but the control does not allow URLs that are external to the current site collection, reusable content stored in a centralized list, standard HTML heading tags, hyperlinks, images, numbered or bulleted lists, tables, or text markup.

Table 1. HTML editor field control properties

  • AllowExternalUrls – Only URLs internal to the current site collection are allowed to be referenced in a link or an image.
  • AllowFonts – Content may contain Font tags.
  • AllowHtmlSourceEditing – The HTML Editor can be switched into a mode that allows the HTML to be edited directly.
  • AllowReusableContent – Content may contain reusable content fragments stored in a centralized list.
  • AllowHeadings – Content may contain HTML heading tags (H1, H2, and so on).
  • AllowTextMarkup – Content may contain bold, italic, and underlined text.
  • AllowImages – Content may contain images.
  • AllowLists – Content may contain numbered or bulleted lists.
  • AllowTables – Content may contain table-related tags such as <table>, <tr>, and <td>.
  • AllowHyperlinks – Content may contain links to other URLs.
  • AllowHtmlSourceEditing – When set to false, the HTML editor is disabled from switching to HTML source editing mode.
  • AllowHyperlinks – Gets or sets the constraint that allows hyperlinks to be added to the HTML. If this flag is set to false, <A>, <AREA>, and <MAP> tags are removed from the HTML. Default is true. This property also determines whether the editing user interface (UI) enables these operations.
  • AllowImageFormatting – Gets or sets image formatting items. This restriction disables only menus and does not force the content to adhere to this restriction.
  • AllowImagePositioning – Gets or sets the position of the image. This restriction disables only menus and does not force the content to adhere to this restriction.
  • AllowImageStyles – Gets or sets whether the Image Styles menu is enabled. This restriction disables only the menu and does not force the content to adhere to this restriction.
  • AllowInsert – Gets or sets whether Insert options are shown. This restriction disables only the menu and does not force the content to adhere to this restriction.
  • AllowLists – Gets or sets the constraint that allows list tags to be added to the HTML. If this flag is set to false, <LI>, <OL>, <UL>, <DD>, <DL>, <DT>, and <MENU> tags are removed from the HTML. Default is true. This also determines whether the editing UI enables these operations.
  • AllowParagraphFormatting – Gets or sets whether paragraph formatting items are enabled. This restriction disables only menus and does not force the content to adhere to this restriction.
  • AllowStandardFonts – Gets or sets whether standard fonts are enabled. This restriction disables only menus and does not force the content to adhere to this restriction.
  • AllowStyles – Gets or sets whether the Style menu is enabled. This restriction disables only the menu and does not force the content to adhere to this restriction.
  • AllowTables – Gets or sets the constraint to allow tables to be added when editing this field.
  • AllowTableStyles – Gets or sets whether the Table Styles menu is enabled. This restriction disables only the menu and does not force the content to adhere to this restriction.
  • AllowTextMarkup – Gets or sets the constraint to allow text markup to be added when editing this field.
  • AllowThemeFonts – Gets or sets whether theme fonts are enabled. This restriction disables only menus and does not force the content to adhere to this restriction.
Predefined Table Formats

The HTML editor includes a set of predefined table formats, but it can be customized to fit the styling of an individual page. Each table format is a collection of cascading style sheet (CSS) classes for each table tag. You can define styling for the first and last row, odd and even rows, first and last column, and so on.

The HTML Editor dynamically applies certain styles from the referenced style sheets on the page and makes them available to users when formatting a table. For a custom style to be available when formatting a table, the relevant class names must follow the PREFIXTableXXX-NNN format, where:

  • PREFIX is ms-rte by default, but you can override the default by using the PrefixStyleSheet property of the RichHtmlField control.
  • XXX is the specific table section, such as EvenRow or OddRow.
  • NNN is the name to identify the table styling.

The following example presents a complete set of classes for a table styling format.

.ms-rteTable-1 {border-collapse:collapse;border-top:gray 1.5pt;
    border-left:gray 1.5pt;border-bottom:gray 1.5pt;
    border-right:gray 1.5pt;border-style:solid;}
.ms-rteTableHeaderRow-1 {color:Green;background:yellow;text-align:left}
.ms-rteTableHeaderFirstCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableHeaderLastCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableHeaderOddCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableHeaderEvenCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableOddRow-1 {color:black;background:#FFFFDD;}
.ms-rteTableEvenRow-1 {color:black;background:#FFB4B4;}
.ms-rteTableFirstCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableLastCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableOddCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableEvenCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableFooterRow-1 {color:blue;font-style:bold;
    font-weight:bold;background:white;border-top:solid gray 1.0pt;
    border-bottom:solid gray 1.0pt;border-right:solid silver 1.0pt; 
    border-style:solid;}
.ms-rteTableFooterFirstCol-1 {padding:0in 5.4pt 0in 5.4pt;
    border-top:solid gray 1.0pt;text-align:left}
.ms-rteTableFooterLastCol-1 {padding:0in 5.4pt 0in 5.4pt;
    border-top:solid gray 1.0pt;text-align:left}
.ms-rteTableFooterOddCol-1 {padding:0in 5.4pt 0in 5.4pt;
    text-align:left;border-top:solid gray 1.0pt;}
.ms-rteTableFooterEvenCol-1 {padding:0in 5.4pt 0in 5.4pt;
    text-align:left;border-top:solid gray 1.0pt;}

Microsoft SharePoint Server 2010 includes a set of default table styles. However, if the system detects new styles that did not originate in the default .css file, it removes the default set and presents only those newly defined styles in the HTML editor dialog box.

Spelling Checker

In SharePoint Server 2010, the HTML editor includes a spelling checker, which can be customized by developers by using the SpellCheckV4Action Web control and the SpellCheckToolbarButton Web control. The spelling checker action registers client files and data during a spelling check.

It also includes a method to get the console tab and checks user rights to verify that the current user has permission to perform a spelling check operation on the selected item. The spelling checker action calls the appropriate ECMAScript (JavaScript, JScript) code, and sends information to the client about available spellings and the default language to use for the request.

OneDrive and Yammer takes Social Collaboration to a new level on SharePoint Online

Yammer brings conversations to your OneDrive and SharePoint Online files

Christophe Fiessinger is a group product manager on the enterprise social team.


At SharePoint Conference 2014 we announced new enterprise social experiences across Office 365 helping businesses work more like networks by leveraging the power of the cloud to bring people together, gain quicker access to relevant insights and help make smarter decisions, faster.

Today we’re announcing the release of one of those features–document conversations–which essentially embeds the social collaboration capabilities of Yammer into the Office apps you use to get work done every day. Get ready for a new, simple way to collaborate on the content you produce with Office Online and store in the cloud in SharePoint Online or OneDrive for Business.

 

Document conversations enable people to share their ideas and expertise around Office documents, images and videos right from within the content they are editing or reviewing. Imagine being able to ask questions, find expertise and offer feedback about content without having to leave the application you’re working in!

 

Because it’s Yammer you can also view and participate in conversations outside your document, on your mobile device, in Microsoft Dynamics CRM or any app where a Yammer feed is embedded! Get ready for a totally new way to produce incredible content!

Here’s how document conversations work. When you open a file in your browser from your cloud store, you see the file on the left with a contextual Yammer conversation in a pane on the right. You can collapse and expand the Yammer pane as needed.

You can do more than join in a conversation from the Yammer pane. You can also post a message, @mention your coworkers, and publish to a Yammer group—either public or private.

Document conversations are easy to join in Yammer as well. If you’re working in Yammer, you’ll see a threaded conversation in the group the post was published in with an icon that enables you to open the file from the cloud location where it lives. The Yammer conversations about files are visible to users in the group but only users who have permission to view or edit the file can open it.

Document conversations are progressively being rolled out to our customers over the course of this summer, and will then be available across all sites within a tenant. To leverage document conversations, you will need to enable Yammer as your default social network.

 

For additional information, see this post: Make Yammer your default social network in Office 365. Get started today by storing your files in the cloud on OneDrive for Business or SharePoint Online, and harness social collaboration across your company with Yammer. In the coming weeks document conversations will be activated in your organization and ready to use! Because we continue to innovate and integrate, subscribe to this blog to get the latest updates across Office 365. And don’t hesitate to reach out to us on Twitter or Facebook with your questions or suggestions.

Christophe Fiessinger @cfiessinger

Frequently asked questions

Q: What file types can be used for document conversations?

A: Document Conversations supports over 30 common file types, including .doc, .xls, .ppt, .pdf, .png, .gif, .mp4, .avi, and more.

Q: Can I see conversations in Office desktop or when I send a document as a mail attachment?

A: No. Currently Yammer threads are visible only in files stored in a SharePoint Online document library or OneDrive for Business in Office 365.

Q: What happens if a file is renamed?

A: Document Conversations uses Yammer’s Open Graph protocol, so when a post is published it also contains a link to the file. This link serves as the glue between the file and its associated conversations. Because the link changes according to the file name, when a file is renamed, the link changes, causing the Yammer conversations to become disassociated from the new file name.

Q: Can I start conversations in a Yammer external network?

A: No. We set our initial goal to build Document Conversations to help teams work better internally. While the new document conversations cannot be started in Yammer external networks today, we are exploring ways to extend collaboration around content to beyond your firewall.

How To : Use a Site mailbox to collaborate with your team

Share documents with others


Every team has documents of some kind that need to be stored somewhere, and usually need to be shared with others. If you store your team’s documents on your SharePoint site, you can easily leverage the Site Mailbox app to share those documents with those who have site access.

 Important    When users view a site mailbox in Outlook, they will see a list of all the documents in that site’s document libraries. Site mailboxes present the same list of documents to all users, so some users may see documents they do not have access to open.

If you’re using Exchange, your documents will also appear in a folder in Outlook, making it even easier to forward documents to others.

Forwarding a document from the site mailbox

Organizations, and teams within organizations, often have several different email threads going in all directions at one time. It’s easy for lines to cross, information to get lost or overlooked, and for communication to break down. Site mailboxes enable you to store team or project-related email in one place, so that everyone on the team can see all communication.

On the Quick Launch, click Mailbox.

Mailbox on the Quick Launch

The site mailbox opens as a second, separate inbox and folder structure, next to your personal email account. Mail sent to and from the site mailbox account will be shared between all those who have Contributor permissions on the SharePoint site.

 Tip    Did you know you can also use a site mailbox to collaborate on documents?

Add a site mailbox as a mail recipient

By including the site mailbox on an important email thread, you ensure that a copy of the information in that thread is stored in a location that can be accessed by anyone on the team.

Simply add the site mailbox in the To, CC, or BCC line of an email message.

Email message with site mailbox included in CC field.

You could even consider adding the site mailbox email address to any team contact groups or distribution lists. That way, relevant email automatically gets stored in the team’s site mailbox.

Send email from the site mailbox

When you write and send email from the site mailbox, it will look as though it came from you.

Because everyone with Contributor permissions on a site can access the site mailbox, several people can work together to draft an email message.

To compose a message, simply click New Mail.

New mail button for site mailboxes.

This will open a new message in your site mailbox.

New mail message in a site mailbox.

A Look At : Federated Authentication

More and more organisations are looking to collaborate with partners and customers in their ecosystem to help them achieve mutual goals. SharePoint is a great tool for enabling this collaboration but many organisations are reluctant to create and maintain identities for users from other organisations just to allow access to their own SharePoint farm. It’s hardly surprising; identity management is complex and expensive.

You have to pay for servers to host your identity provider (Microsoft Active Directory if you are using Windows); you have to keep it secure; you have to back it up and ensure that it is always available, and you have to pay for someone to maintain and administer it. Identity management becomes even more complicated when your organisation wants to give external users access to SharePoint; you have to ensure that they can only access SharePoint and can’t gain access to other systems; you have to buy additional client access licenses (CALs) for each external user because by adding them to your Active Directory you are making them an internal user.

 


Microsoft, Google and others all offer identity providers (also known as IdPs or claims providers) that are free to use, and by federating with a third party IdP you shift the ownership and management of identities on to them. You may even find that the partner or customer you are looking to collaborate with may offer their own IdP (most likely Active Directory Federation Services if they themselves run Windows). Of course, you have to trust whichever IdP you choose; they will be responsible for authenticating the user instead of you so you must be confident that they will do a good job. You must also check what pieces of information about a user (also known as claims; for example, name, email address etc) IdPs offer to ensure they can tell you enough about a user for your purposes as they don’t all offer the same.

Having introduced support for federated authentication in SharePoint 2010, Microsoft paved the way for us to federate with third party IdPs within SharePoint itself. Unfortunately, configuring SharePoint to do this is fiddly and there is no user interface for doing so (a task made more onerous if you want to federate with multiple IdPs or tweak the configuration at a later date). Fortunately Microsoft has also introduced Azure Access Control Services (ACS) which makes the process of federating with one or more IdPs simple and easy to maintain. ACS is a cloud-based service that enables you to manage the IdPs used by your applications. The following diagram illustrates, at a high-level, the components of ACS.

An ACS namespace is a container for mappings between IdPs and one or more relying parties (the applications that want to use ACS), in our case SharePoint. Associated with each mapping is a rule group which defines how the relying party handles the individual claims associated with an identity. Using rule groups you can choose to hide or expose certain claims to specific relying parties within the namespace.

So by creating an ACS namespace you are in effect creating your own unique IdP that encapsulates the configuration for federating with one or more additional IdPs. A key point to remember is that your ACS namespace can be used by other applications (relying parties) that want to share the same identities, not just SharePoint. 

Once your ACS namespace has been created you need to configure SharePoint to trust it, which most of the time will be a one off task and from that point on you can manage and maintain the IdPs you support from within ACS. The following diagram illustrates, at a high-level, the typical architecture for integrating SharePoint and ACS.

 

In the scenario above the SharePoint web application is using two different claims providers (they are referred to as claims providers in SharePoint rather than IdPs). One is for internal users and trusts an internal AD domain and another is for external users and trusts an ACS namespace.

When a user tries to access a site within the web application they will get the default SharePoint Sign In page asking them which provider they want to use.

This page can be customised and branded as required. If the user selects Windows Authentication they will get the standard authentication dialog. If they select Azure Provider (or whatever you happen to have called your claims provider) they will be redirected to your ACS Sign In page.

Again this page can be customised and branded as required. By clicking on one of the IdPs the user will be redirected to the appropriate Sign In page. Once they have been successfully authenticated by the IdP they will be redirected back to SharePoint.

 

Conclusion

By integrating SharePoint with ACS you can simplify the process of giving external users access to SharePoint. It could also save you money in licence fees and administration costs[i].

An important point to bear in mind when planning federated authentication for SharePoint is that in order for Search to be able to index content within SharePoint, you must enable Windows authentication on at least one zone within your web application. Also, if you use a reverse proxy to perform authentication, such as Microsoft Threat Management Gateway, before allowing traffic to hit your SharePoint servers, you will need to disable the authentication checks.

 

[i] The licensing model for external users differs between SharePoint 2010 and SharePoint 2013. With SharePoint 2010, if you expose your farm to external users, either anonymously or not, you have to purchase a separate licence for each server. The licence covers you for any number of external users and you do not need to buy a CAL for each user. With SharePoint 2013, Microsoft did away with the server licence for external users, and you still don’t need to buy CALs for the external users.

New Office 365 API VS.Net Add-In exposes Javascript Client model

You can now access the Office 365 APIs using libraries available for .NET and JavaScript. These libraries make it easier to interact with the REST APIs from the device or platform of your choice.

 


The libraries are included in the latest update for Office 365 API Tools for Visual Studio Preview. Along with the libraries, this release also brings you some key updates to the tooling experience, making it easier to interact with Office 365 services.

Client libraries

Office 365 provides REST-based APIs that enable developers to access Office resources such as calendar, contacts, mail, files, and more.

The client libraries will let you:

  • Perform authentication and discovery
  • Use the Mail, Calendar and Contacts API
  • Use the My Files and Sites API (currently .NET only, with JavaScript coming soon)
  • Use the Users and Groups API

 

You can program directly against the REST APIs to interact with Office 365, but that requires you to write and maintain code for managing authentication tokens, constructing the right URLs and queries for the API you want to access, and performing other tasks, as the sketch below illustrates.
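For comparison, here is a rough sketch of what one such direct REST call looks like, assuming you have already acquired an OAuth access token (the endpoint URL is illustrative of the Office 365 preview APIs and may differ in your tenant):

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class RawRestSample
{
    // Illustrative only: you still have to acquire and refresh the token yourself,
    // build the query URL by hand, and parse the JSON response.
    static async Task<string> GetUpcomingEventsRawAsync(string accessToken)
    {
        using (var http = new HttpClient())
        {
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            HttpResponseMessage response = await http.GetAsync(
                "https://outlook.office365.com/api/v1.0/me/events?$top=10");
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}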

By using client libraries to access the Office 365 APIs, you can reduce the complexity of the code you need to write to access the APIs. We’re providing these libraries for .NET as well as JavaScript developers for use with the just-announced multi-device hybrid applications.

Here are some examples of how easy it is access the Office 365 APIs using these libraries.

.NET C# code to authenticate and get upcoming events from your Office 365 calendar:

// Shows UI to authenticate
Authenticator authenticator = new Authenticator();
AuthenticationInfo result = await authenticator.AuthenticateAsync("https://outlook.office365.com");

The AuthenticateAsync method will prompt for a username and password and authenticate against the specified resource url, like outlook.office365.com in this case. Once you have the authentication information, you can create a client object that serves as the base for accessing all the APIs for Exchange:


// Create a client object
ExchangeClient client =
    new ExchangeClient(new Uri("https://outlook.office365.com/ews/odata"),
        result.GetAccessToken);

Because we’re using .NET here, we get to take advantage of the native language capabilities, like LINQ, so querying the Office 365 calendar is as simple as writing a LINQ query and executing it:

// Obtain calendar event data
var eventsResults = await (from i in client.Me.Events
where i.End >= DateTimeOffset.UtcNow
select i).Take(10).ExecuteAsync();

With just those four lines of code you can start making calls to the Office 365 APIs!

We wanted to make sure that you can reach multiple device and service platforms with a consistent API, so the client libraries are portable .NET libraries, which means they also work with Android and iOS devices through Xamarin. Because authentication needs to display a UI that is different on the various platforms, we also provide platform-specific authentication libraries, which can then be used with the portable ones to provide an end-to-end experience.

For developers creating multi-device hybrid applications that target multiple device platforms through JavaScript, we also have JavaScript versions of these libraries that provide a similar experience while adopting JavaScript’s patterns and practices, such as using the promises pattern instead of await.

 

Here is the same example to authenticate and get calendar events in JavaScript:

var authContext = new O365Auth.Context();
authContext.getIdToken('https://outlook.office365.com/')
    .then((function (token) {
        // authentication succeeded
        var client = new Exchange.Client('https://outlook.office365.com/ews/odata',
            token.getAccessTokenFn('https://outlook.office365.com'));
        client.me.calendar.events.getEvents().fetch()
            .then(function (events) {
                // get currentPage of calendar events
                var myevents = events.currentPage;
            }, function (reason) {
                // handle error
            });
    }).bind(this), function (reason) {
        // authentication failed
    });

The flow to authenticate and create a client object is similar across .NET and JavaScript, but you’re doing it in a way that should be natural to the language.

Along with the JavaScript files for these libraries, we are also including the TypeScript type definition (.d.ts)—in case you choose to develop your apps in TypeScript.

As you get started using these libraries, there are a few things to keep in mind. This is a very early preview release of the libraries that is meant to prove out the concept and get feedback on it. The libraries do not currently cover all the APIs provided by the services and some of the APIs in the library may not work. The APIs in the libraries themselves will definitely change in future updates.

Note that while we tend to call these “client” libraries, they also work with .NET server technologies like ASP.NET Web Forms and MVC, so you really get to target the breadth of the .NET platform.

 

Tooling updates

With today’s update of our Office 365 API Tools for Visual Studio 2013, the tool displays the available Office 365 services that you can add to your project. Once you’ve signed in with your Office 365 credentials, adding a service to your project is as easy as selecting the appropriate service and applying the required permissions.


Once you submit the changes, Visual Studio performs the following:

  1. Registers an application (if there isn’t an application registered yet) in Microsoft Azure Active Directory to consume Office 365 services.
  2. Adds the following to the project:
    1. Client libraries from Nuget for the configured services.
    2. Sample code files that use the Client Libraries.

Project types supported

With the broad reach of the client libraries, the Office 365 API tool is now available for a variety of project types (client, desktop, and web) in Visual Studio. Here are all the project types supported with the May update:

  • .NET Windows Store Apps
  • Windows Forms Application
  • WPF Application
  • ASP.NET MVC Web Application
  • ASP.NET Web Forms Application
  • Xamarin Android and iOS Applications
  • Multi-device hybrid apps

Installing the latest update

To install the latest update, you can either:

  • Check for updates within Visual Studio. To do so, follow these steps:
    1. On the Visual Studio menu, click Tools -> Extensions and Updates -> Updates.
    2. You should see the update available for Office 365 API Tools.
    3. Click Update to update to the latest version.

–OR–

  • Download the extension and install it manually.

Once you’ve updated, you can invoke the Office 365 API tool as usual, that is, by going to your project node in the Solution Explorer and selecting Add->Connected Service from the context menu.

Looking forward to seeing your Apps out there when I visit the stores!!


MSDN references

Also check out the new SharePoint Online Solution Pack for branding and provisioning. This package also contains some examples, which originate from the AMS reference implementations. Here are the direct links for the Solution Pack

You can find an introduction to this SharePoint Online Solution Pack for branding and provisioning in the following blog post – Introduction to SharePoint Online Solution Pack for branding and provisioning released.

Creating Your Own Document Management System With SharePoint and Dynamics AX


With the R2 release of Dynamics AX 2012, a new feature was quietly snuck into the product that allows you to store document attachments from Dynamics AX within SharePoint rather than within an archive location, or within the database. This opens up a whole slew of possibilities when it comes to document management within SharePoint.

In this example we will show how you can create a document management structure within SharePoint that you can use in conjunction with the Dynamics AX attachments feature, and also we will show a few tweaks that you can make that may make managing your documents within SharePoint just a little easier.

Creating a new Document Management Site

The first step that we are going to work through is the creation of a new Document Management site where we will put all of our Dynamics AX document attachments. We are just creating a site to separate out the documents from other items that you may already have stored within SharePoint.

How to do it…

To create a new Document Management Site in SharePoint, follow these steps:

  1. Open up the SharePoint Workspace that you want to use to house your Document Management site and from the Site Actions menu, select the New Site option.
  2. When the Site Templates are displayed, select the Blank Site template. Give your site a name, and also a sub site name (probably the same as your site name). When you are done, click on the Create button to start the site creation process.

How it works…

When the site is created, you should be taken to a new blank site which you will be able to use as a document repository.

Creating Document Libraries for the Business Areas

The next step in the process is to create document libraries to store all of your documents away in. You could create one big library, or a number of smaller ones, broken out into groups based on business area or function. In this example we will do the latter because it will give us more flexibility with the indexing of the documents, and also make it easier to find particular documents.

How to do it…

To create document libraries for the business areas, follow these steps:

  1. From within your new Document Management site, select the New Document Library option from the Site Actions menu.
  2. When the Document Library Creation dialog box appears, give your library a Name, Description, and also set the Document Template to None. In this example we are creating a library for all of the AP Documents.
    When you are done, click on the Create button to start the document library creation process.

How it works…

After it finishes you should have a new library for you to use. You can repeat the process for all of the other business areas that you want to manage documents for – in our example we just used the standard business areas from the Dynamics AX navigation menu.

Creating Dynamics AX Document Types that Link to the Document Libraries

Once you have created your document libraries, you can connect them to Dynamics AX with the new SharePoint option so that the users are able to attach documents from the client and then store them within SharePoint for everyone to access.

How to do it…

To create a file attachment type that links to SharePoint, follow these steps:

  1. From the Organization administration area page, select the Document types menu item from the Document management folder of the Setup group.
  2. When the Document types maintenance form is displayed, click on the New button in the menu bar to create a new entry.
  3. We will start by creating a link for all of our generic Accounts Payable documents by giving our new Document type a Name and Description. In the Group field, select File from the dropdown options, and select SharePoint for the Location option.
  4. Now return back to your document libraries within SharePoint and copy the URL for the document library.
  5. Paste the URL into the Archive directory field.

    Note: Remove all of the extra parameters though so that you are just referencing the base folder location.
    Also, if you click on the folder browser to the right of the Archive directory field you can test the link to SharePoint.

How it works…

Now, if you attach a document, then you will see the option for your new document type.

It will allow you to attach any file that you have on your desktop.

And rather than showing you the thumbnail image, it will show you a reference link to your SharePoint document library.

After attaching the document, if you look within SharePoint, you will see your document is saved away for you.

You can repeat this process for all of your other document libraries that were created in the previous step.

Adding Columns to your Document Libraries for Better Indexing

One of the reasons why you want to start using SharePoint is so that you can take advantage of the indexing functionality to code and classify your documents. Now that you have people storing the documents away, it’s time to add some indexes to your document libraries.

How to do it…

To create new indexes for your document libraries, follow these steps:

  1. Open up your document libraries within SharePoint and select the Library ribbon bar. Then click on the Create Column button within the Manage Views group.
  2. When the Create Column form opens, set the Column Name to be the field that you want to index, select the type of the column, and also set the columns Description.
  3. Note: Sometimes it’s a good idea not to have spaces in the column name; later on, when we add filters, it becomes a little easier to manage this way.
  4. After you have finished defining the column, click on the OK button to add the column to your library.
  5. When you return back to your document library, there will be a new column on the form.
  6. Repeat the process for all of the columns that you want to use as index fields for the library.

    Note: All of the columns do not have to be used during the indexing process, so it’s OK to have variations of columns, like InvoiceNum, CreditNoteNum, etc.

How it works…

To edit the columns, select the options menu for the document, and choose the Edit Properties option.

This will allow you to update the fields that Dynamics AX did not populate initially.

Now when you look at the document within SharePoint, you will see the additional metadata that is associated with the document.

Embedding Document Libraries into Dynamics AX Forms

Now that we are able to index documents a little more effectively within SharePoint, we can go the extra step, and link SharePoint to our forms within Dynamics AX so that we are able to access them without even leaving the application. Doing this just requires a little bit of coding, but is well worth the effort.

Getting Started…

You can manipulate the information that is displayed by SharePoint, and also how it is displayed through the URL that you use.

If you filter any of the views, then you will notice that it uses two qualifiers – FilterFieldX and FilterValueX – to restrict the viewed records.

Also, if you add an IsDlg=1 qualifier, then all of the navigation areas are hidden, giving you a clean list of filtered documents.

This is the perfect type of view to embed within Dynamics AX.
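For example, a filtered, chrome-free view URL for a specific vendor could be assembled like this (the library URL and filter field name are assumptions based on the columns created earlier, shown here in C# purely to illustrate the URL format):

using System;

// Hypothetical values: adjust the library URL and filter field to match your site.
string libraryUrl =
    "https://sharepoint.contoso.com/DocMgmt/AP%20Documents/Forms/AllItems.aspx";
string vendorAccount = "US-101";

string filteredUrl = string.Format(
    "{0}?FilterField1=VendAccount&FilterValue1={1}&IsDlg=1",
    libraryUrl,
    Uri.EscapeDataString(vendorAccount));

// filteredUrl is what the embedded web browser control would navigate to
// from the form's activate method, as described later in this article.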

The other half of this step is to choose a form to add your document libraries to. In this case we will update the Vendors form.

How to do it…

To embed your SharePoint document libraries within Dynamics AX forms, follow these steps:

  1. Start the process by opening up AOT, and create a new project for this tweak.
  2. From the Forms area in the AOT, drag over the form that you want to add the documents to – in this case it’s the VendTable form.
  3. Expand out the form within the project, and navigate to the group that you want to add your document library view into.
  4. Right-mouse-click on the parent tab, and select the TabPane option from the New Control sub-menu.
  5. Reorder your tabs (ALT+UpArrow/DownArrow) so that they are in the sequence that you want and then give your new Tab Control a Name and Caption in the Properties section.
  6. Right-mouse-click on the new tab that you added for the document library and select the ActiveX control from the New Control sub menu.
  7. When the list of ActiveX controls are displayed, select the Microsoft Web Browser control, and click the OK button to add it.
  8. Rename your ActiveX control, and set the Width to Column width and the Height to Column height.
  9. Now we need to have Dynamics AX update the URL that is navigated to when the form is opened. To do this, right-mouse-click on the parent Methods group for the form, and select the activate method from the Override methods submenu.
  10. Update the activate method by building the URL that defines the specific document view that you want to show. You can now add conditional filters that pick up the record values and filter based on the current record – in this case the vendor number.
  11. Once you have finished the update, save the project.

How it works…

Now when you open up the Vendor form, there will be a Documents tab that shows all of the documents that are associated with the current record.

If you select a record that does not have attached documents, then you will not see anything there.

How cool is that.

Creating Custom Views for the Document Libraries

Now that all of the heavy lifting has been done, you can now start tweaking the SharePoint libraries and the way that the information is displayed. Based on the form that you are in you may want to show only particular information. You can do that by creating new custom views.

How to do it…

To create a custom view for your document libraries, follow these steps:

  1. Open up your document libraries within SharePoint and select the Library ribbon bar. Then click on the Library Settings button within the Settings group.
  2. When the document library settings maintenance form is displayed, scroll down to the bottom of the page, and there will be a section for Views that will show you all of the different ways that the form could be displayed. In this case there is only one, but we can fix that by clicking on the Create View link.
  3. Select the Standard View option from the format templates that are displayed.
  4. Assign your new view a Name and select the fields that you want to be displayed in the view.
  5. Once you have made the changes, click on the OK button to save your new view.

How it works…

When you return to the document library you will be able to see the new format of the view.

You can then change the view within the URL of your project to make it the default view for the form.

Now when you see all of the documents within your Dynamics AX form you will see just the information that you need.

Using the SharePoint Designer to Edit Document Libraries

Although you can do everything that we have shown so far within SharePoint, you can also take advantage of the SharePoint Designer application to update your SharePoint document libraries. You don’t even have to search for the install kit, because it is embedded within your SharePoint site, just waiting to be downloaded and installed.

How to do it…

To access the SharePoint Designer to manipulate your SharePoint site, follow these steps:

  1. To use the SharePoint Designer to update your SharePoint site, just select the Edit in SharePoint Designer option from the Site Actions menu.

    Note: If you don’t have SharePoint Designer installed then it will ask you to install it, and download the kit directly from your SharePoint installation.

How it works…

When SharePoint Designer opens up, it will be connected to your current SharePoint site, showing you all of the libraries, etc. that you have been creating.

If you select the Lists and Libraries option from the navigation pad, you will be able to see all of the document libraries that you created in the previous steps.

Drilling into the libraries you will be able to also see all of the views etc. that you configured within SharePoint.

Creating New Content Types to Manage Document Types

When we set up our document libraries we deliberately created them so that all of the documents for a particular area are within the same library. This allows us to save multiple types of documents away within the library like Invoices, Credit Notes, Vendor Certificates etc. The way that we can identify the type of document is through the creation of Content Types.

How to do it…

To create custom Content Types to help make classification easier, follow these steps:

  1. Open up SharePoint Designer (although you can also do this within SharePoint itself) and select the Content Types from the navigation menu and click on the Content type menu button within the New group
    of the Content Types ribbon bar.
  2. When the Content Type creation form is displayed, give your Content Type a Name, and Description, select a parent content type, and also a group that you want to show the Content Type in.

    Note: For the first content type that you create, you may want to create a new Content Type Group so that it isn’t intermingled with all of the other content types.

  3. When you have finished creating your Content Type click on the OK button to add it to SharePoint.
  4. When you return to your SharePoint Designer workspace you will be able to see your new Content Type.
  5. Repeat the process for all of the other types of documents that you want to file away within SharePoint.
  6. Now we need to enable Content Types within our document libraries, and then assign them. To do this, open up your Document Library within SharePoint Designer, and within the Settings group, check the Allow management of content types check box.
  7. Then click on the Add button to the right of the Content Types group to open up the Content Type Picker. Find the new Content Types that you just created, and click on the OK button to assign them to your Document Library.
  8. Now the Content Type will show up as a valid option for the document library.
  9. Repeat the process for all of the other content types that you created.

How it works…

Now when you edit the properties for your documents, there will be a new indexing option for your documents that allows you to define the type of document that you are looking at.

Specifying Document Columns by Content Types to Simplify Indexing

There is an additional benefit that you get from using Content Types within SharePoint, which is the ability to specify which columns are applicable to different Content Types at the time of indexing. For example, you probably don’t want to specify an Invoice Number when indexing a Vendor Insurance Certificate, but you definitely would want to when indexing an Invoice or even a Credit Note.

How to do it…

To modify your Content Types within your Document Libraries to only require certain columns to be indexed, follow these steps:

  1. From within your SharePoint Document Library (or from within SharePoint Designer) click on the Library Settings button within the Settings group of the Library ribbon bar.
  2. Within the Library Settings you will be able to see all of your Content Types that have been assigned. Select any of them to edit their options.
  3. When you first create the Content Types then they will have no columns assigned to them. Click on the Add from existing site or list columns link to assign the valid columns to your Content Type.
  4. When the Add columns to Content Type form is displayed, you will be able to see all of the available columns within the Document Library.
  5. Just select the ones that you want to use for the indexing, and then click the Add button. Once you have selected all the ones that you want to use, click on the OK button to save your changes.

How it works…

Now you will have indexing by Content Type.

Showing the Content Type in the Document View

Now that we are classifying documents by Content Type we might as well show it in the views so that we are able to differentiate different document types.

How to do it…

To add the Content Type field to our Document View, follow these steps:

  1. From within your SharePoint Document Library (or from within SharePoint Designer) click on the Modify View button within the Manage Views group of the Library ribbon bar.
  2. Now that the Content Type is enabled on our Document Library, it will show up on the list of valid columns. To add it to our view, just check the Display checkbox, and possibly change the order of the field so that it is first in the table.
  3. When you’re done, click on the OK button to update the view.

How it works…

Now the Content Type is shown in the document library view.

And also will show up when we browse to the documents within Dynamics AX.

How neat is that.

Grouping Records in the Document View by Key Columns

One final tweak that we will show within SharePoint is the ability to group columns within our document library views so that common information is shown together. These groupings can be different for each view, and they just make it a little easier to find information if we don’t initially filter the data.

How to do it…

To group records within your document library view, follow these steps:

  1. From within your SharePoint Document Library (or from within SharePoint Designer) click on the Modify View button within the Manage Views group of the Library ribbon bar.
  2. Scroll down your view definition until you see the Group By configuration options. Here you will be able to change the Group By fields.

How it works…

Now when you look at your documents, they will be classified by key fields.

Summary

In this walkthrough we have shown how you can:

  • Create a simple document management repository within SharePoint
  • Link the document attachments function within Dynamics AX to SharePoint to make the acquisition of the documents easier
  • Index your documents more effectively by defining custom columns
  • Embed SharePoint back into Dynamics AX and also
  • Tweak your views within SharePoint to make it easier to find and view documents

This is really just a starting point, and once you have mastered the basics, you can start investigating:

  • Assigning workflows to documents for approvals and updates
  • Enabling version control for your documents
  • Acquiring documents into SharePoint through scanning technologies
  • Linking the index column fields to Dynamics AX for validation of key information
  • And much more.

SharePoint is a great document management tool, and can usually handle all of your document indexing needs. Especially now that it is connected with Dynamics AX natively.

How to add a Link to a Document external to SharePoint

Image

You can add links to external file shares and/or file server documents to your document library very easily. Why would you want to do this? Primarily so that the metadata for all your documents is searchable in the same place. First, a Farm Administrator will need to modify a core file on the front-end server. Then you must create a custom Content Type. If you use the built-in content type, you will not be able to link to a Folder directly.
Edit the NewLink.aspx page to allow the Document Library to accept a File:// entry.

  1. Go to the Front End Web Server \12\template\layouts directory.
  2. Open the file NewLink.aspx using NotePad. If I have to tell you to take a backup of this file first then you have no business editing this file (really).
  3. Go to the end of the script section near top of page and add:

    function HasValidUrlPrefix_UNC(url)
    {
        var urlLower = url.toLowerCase();
        if (-1 == urlLower.search("^http://") &&
            -1 == urlLower.search("^https://") && -1 == urlLower.search("^file://"))
            return false;
        return true;
    }

  • Use Edit > Find to search for HasValidUrlPrefix and replace it with HasValidUrlPrefix_UNC (you should find it two times).
  • File – Save.
  • Open command prompt and enter IISreset /noforce.

Important: To link to Folders correctly you must create your own content type exactly as below and not use the built in URL or Link to Document at all.

Create custom Content Type

  1. Go to your Site Collection logged in as a Site Collection Administrator.
  2. Site actions – Site Settings – Modify All Site Settings.
  3. Content Types
  4. Create
  5. Name = URL or UNC
  6. Description = Use this content type to add a Link column that allows you to put a hyperlink or UNC path to external files, pages or folders. Format is File://\\ServerName\Folder , or http://
  7. Parent Content Type,
    1. Select parent content type from = Document Content Types
    2. Parent Content Type = Link to a Document
  8. Put this site content type into = Existing Group:  Company Custom
    1. image
  9. OK
  10. At the Site Content Type: URL or UNC page click on the URL hyperlink column and change it to Optional so that multiple documents being uploaded will not remain checked out.
  11. OK
    1. image

Add Custom Content Type to Document Library

  1. Go to a Document Library
  2. Settings – Library Settings
  3. Advanced Settings
  4. Allow Management Content Types = Yes
  5. OK
  6. Content Types – Add from existing site content types
  7. Select site content types from: Company Custom
  8. URL or UNC – Add – OK
  9. Click on URL or UNC hyperlink
  10. Click on Add from existing site
  11. Add all your Available Columns – OK
  12. Column Order – change the order to be consistent with the Document content type orders.
  13. Click on your Document Library breadcrumb to test.
  14. View – Modify your view to add the new URL or UNC column to your view next to your Name column.

Create Link to Document

  1. Go to the Document Library
  2. New – URL or UNC
  3. Document Name: This must equal the exact file or folder name less the extension.
    1. Example: My Resume 
    2. Example: Folder2
    3. Example: Doc1
  4. Document URL: This must be the URL or UNC path to the folder or file.
    1. Example: http://LindaChapman.BlogSpot.com/Folder1/Folder2/My Resume.doc
    2. Example: http://LindaChapman.BlogSpot.com/Folder1/Folder2
    3. Example: File://\\ServerName\FolderName\FolderName2\Doc1.doc

You might see other blogs that say you can’t connect to a folder and must create a shortcut first. They are wrong. You can, using the method above.

The biggest mistakes I see are:

  1. People click on the NAME field instead of the URL field. They are not the same. You MUST click on the URL field to access the Folder properly.
  2. People use the built in Link to Document content type thinking it is the same or will save them a step. It is not the same.
  3. People type the document extension in the Name field. You cannot type the extension in the Name field. It will see it is a UNC path and ignore the .aspx extension.
  4. People enter their slashes the wrong direction for UNC paths.

Tool to analyse and then upgrade your old SharePoint VBA Web Parts to Apps!!

office365[1]

Welcome to Microsoft VBA and SharePoint Code Analyzer

 Now is the time to still use that old VBA code you have!!

This is an online tool where you can upload your files and generate reports with detailed statistics about your VBA and SharePoint source code, providing useful information about migrating VBA and SharePoint applications.

To analyze your files, please follow these four simple steps:

In Depth Look : Private Cloud Infrastructure as a Service Capabilities

saas[1]

 

The primary purpose of a Private Cloud Infrastructure as a Service capability is to provide well managed infrastructure services to the Platform and Software Layers. To achieve this, the Infrastructure Layer, highlighted in the Private Cloud Reference Model diagram below, includes five capabilities.


Figure 1: Private Cloud Reference Model

This document describes these Infrastructure Layer capabilities and the impact of Private Cloud Infrastructure as a Service (IaaS) patterns on their planning and design. These patterns are defined in the Private Cloud Principles, Concepts, and Patterns document and are summarized here:

  • Resource Pooling: Divides resources into partitions for management purposes.
  • Physical Fault Domain: The group of physical resources dependent on a single point of failure such as an Uninterruptible Power Supply (UPS).
  • Upgrade Domain: A group of resources upgraded as a single unit.
  • Reserve Capacity: Unallocated resources, which take over service in the event of a failed Physical Fault Domain.
  • Scale Unit: A collection of resources treated as a single unit of additional capacity.
  • Capacity Plan: A model that enables a private cloud to deliver the perception of infinite capacity.
  • Health Model: Defines how a service or system may remain healthy.
  • Service Class: Defines services delivered by Infrastructure as a Service.
  • Cost Model: The financial breakdown of a private cloud and its services.

The Health Model, Service Class, and Scale Unit patterns directly affect Infrastructure and are detailed in the relevant sections later. Conversely, private cloud infrastructure design directly affects Physical Fault Domains, Upgrade Domains, and the Cost Model. These relationships are shown in Figure 2 below.


Figure 2: Infrastructure Relationship with Patterns

Background

The private cloud principles “perception of continuous availability” and “resiliency over redundancy mindset” are designed to make a private cloud architect think differently.

Traditional solutions rely heavily on redundancy to achieve high availability and avoid failure. But redundancy at the facility (power) and infrastructure (network, server, and storage) layers is very costly. Modern cloud applications are designed with a different, holistic approach to achieving availability. This means shifting focus from building redundancy into the facility and physical infrastructure to engineering the entire solution to handle failures — eliminating them, or at least minimizing their impact.

This approach to availability relies on resilience as well as redundancy. Resilience means rapid, and ideally automatic, recovery from a failure. Redundancy is typically achieved at the application level. (A non-cloud example is Active Directory®, where redundancy is achieved by providing more domain controllers than is needed to handle the load.)

Customer interest in cost reduction will help drive adoption of this approach over the medium term. Removing power redundancy from racks or co-location rooms has a big impact on operational expenses, but this typically occurs only when the hosted application doesn’t have to be highly available, or when high availability is achieved through redundancy at the application layer – for example, Active Directory replication, or application layer mirroring such as SQL Server™ mirroring. Combining reductions in physical redundancy with virtualization results in lower capital and operational expenditure compared to a highly redundant infrastructure.

Applications that depend on a highly available infrastructure will not achieve their Service-Level Agreement (SLA) when placed on the type of infrastructure defined earlier. Customers are therefore likely to develop two environments when designing their private cloud: a standard environment with reduced facility and infrastructure redundancy, and a high-availability environment with traditional levels of redundancy.

Standard Environment vs. High-Availability Environment

  • Power: no power redundancy to the rack (for example, one in-rack UPS) vs. redundant power to each server
  • Network: no network redundancy to the servers (redundant core network) vs. redundant network connections to each server
  • Storage: local storage, possibly redundant storage and storage network vs. redundant storage presented to each server
  • Migration: ideally no migration, or possibly quick migration, vs. Live Migration

These two environments allow an Architect to differentiate service classifications from a high-availability perspective. The standard environment is appropriate for stateless workloads; stateful workloads will require the high-availability environment. Stateful and stateless machines are managed differently. Statefulness will likely appear as a characteristic of the service classifications.

Stateless workloads (web servers, for example) are typically redundant at the server level via a load-balanced farm. These servers could easily be hosted in the Standard Environment. If all stateless workloads had an automated build, the Standard Environment could do away with any form of VM migration – and simply deploy another VM after destroying the existing one, thereby saving the cost of shared storage.

Stateful workloads, on the other hand, require a specific management approach and impose higher costs on the consumer. Unless designed for high availability at the application level, they will require some form of redundancy in the infrastructure. Further, the High-Availability Environment requires Live Migration to enable maintenance of the underlying fabric and load balancing of the VMs.

Security

The number one concern of customers considering moving services to the cloud is security. Recent concerns expressed in the industry forums are all well founded and present reasons to think through the end to end scenarios and attack surfaces presented when deploying multiple services from various departments in an organization on a private cloud.

In a cloud-based platform, regardless of whether it is a private or public cloud, customers will be working in an essentially virtualized environment. The platform or software will run on top of a shared physical infrastructure managed internally or by the service provider. The security architecture used by the applications will need to move up from the infrastructure to the platform and application layers. In a private cloud, this provides security in addition to that of the perimeter network.

Public cloud involves handing over control to a third party and sharing services with unrelated business entities or even competitors, and it requires a high degree of trust in the provider’s security model and practices. In many ways the security concerns of a private cloud are similar to those of a self-hosted or outsourced datacenter; however, the move to the virtualized, self-service, service-oriented paradigm inherent in private cloud computing introduces some additional security concerns.

First is the isolation of tenants from each other and the hosting infrastructure at both the compute and network layers. Virtualization is a part of any private cloud strategy and the security of this model is totally dependent on the ability to isolate one tenant from another and prevent the careless or malicious tenant from impacting the stability of the core infrastructure upon which all tenants rely.

Another concern is Authentication, Authorization and Auditing of access to the cloud services. Self-service implies that tenant administrators can initiate management processes and workflows that previously were accomplished through IT. Any misconfiguration or excessive permissions granted to these users can impact the stability or security of the cloud solution.

Many private cloud security concerns are also shared by the traditional datacenter environment, which is not surprising since the private cloud is just an evolution of the traditional datacenter model. These include:

  • Impact to confidentiality, integrity or availability through exploitation of software vulnerabilities.
  • Unauthorized access due to weak or misconfigured authentication.
  • Impact to confidentiality, integrity or availability by malicious code.
  • Impact to confidentiality, integrity or availability of data.
  • Compliance with internal or industry-specific regulations and standards.

Secure Virtualization Platform

The biggest risk in running in a multi-tenant virtualized environment is that a tenant running services on the same physical infrastructure as you could break out of its isolating partition and impact the confidentiality, integrity or availability of your workload and data. Therefore the security of the virtualization platform is key to the isolation of, and non-interference between, the individual virtual machines running on the infrastructure.

Highly Automated Management, Monitoring and Reporting

Many management tasks involve multiple steps that must be completed in the proper sequence by multiple administrators across multiple systems. Any shortcuts, omissions or errors can leave assets vulnerable to unauthorized access or affect the reliability of components within a solution. Orchestrating discrete management and monitoring tasks into workflows that require proper authorization and approval greatly diminishes the chance of mistakes that affect the security of the solution.

Authentication, Authorization and Auditing

Most organizations have a common capability that provides an overarching framework for authentication and access control. A private cloud introduces additional layers of hosting and hosted services, including the hosting infrastructure and the virtual machine workloads that run on it. This framework must be designed, and possibly extended, to provide a single point for managing identities and credentials, authentication services and a common security model for access to resources across the private cloud.

Multi-layer Security

Moving to a cloud-based platform requires a change in the mind-set of developers and IT security professionals. Some of the risks of the public cloud are mitigated by using a private cloud architecture; however, the perimeter security protecting a private cloud should be seen as an addition to public cloud security practices, not an alternative. You cannot apply the traditional defense-in-depth security models directly to cloud computing, but you should still apply the principle of multiple layers of security. By taking a fresh look at security when you move to a cloud-based model, you should aim for a more secure system rather than simply carrying forward current security levels.

Security Governance

Enterprise IT systems are now typically well regulated and controlled. The security risks are well documented and therefore proper processes are put in place to develop new applications and systems, or to provision them from 3rd party vendors. It is very unlikely that a department manager would be able to purchase and install software without approval from the IT department.

With public cloud systems and Web browser clients however, it is possible that individual department managers could bypass the IT department and provision public cloud-based software. Indeed, they might use free cloud storage systems as a convenient means to synchronize documents without even considering that they are using public cloud services. Public cloud systems might be appealing to a manager as they could very quickly provision a new system and remove what they might see as unnecessary bureaucracy. They may even be unaware of the security and compliance policies that are in place to protect the organization. In a cloud-based landscape, we must protect corporate systems and data from these unauthorized, untested systems.

Facilities

Facilities represent the physical components – buildings, racks, power, cooling, and physical interconnects – that house or support a private cloud. It is beyond the scope of this document to provide detailed guidance on facilities, but the private cloud principles affect facility design.

The definition of a Scale Unit impacts power, cooling, space, racking, and cabling requirements. The team that defines a Scale Unit should include personnel that design and manage these aspects of the facility in addition to the procurement, Capacity Planning, and Service Delivery teams. The following table lists some ramifications of Scale Unit size choices from a facilities perspective.

Small Scale Unit

  • Benefit: lower amount of physical labor needed to add a Scale Unit
  • Trade-offs: complicates the Resource Pool, Fault Domain, and Reserve Capacity equation; inefficient use of full facilities units, with stranded (un-utilized) power and un-utilized space

Large Scale Unit

  • Benefits: allocation of full facilities units (for example, UPS, Rack, and Co-location Room) is easy to cost and engineer; reduces under-utilization of power, cooling, and space
  • Trade-off: higher amount of labor to commission

Knowing how much power, cooling, and space each Scale Unit will consume enables the facilities team to perform effective Capacity Planning and the engineering team to effectively plan resources.

Compute, Network, and Storage Fabric

The term Fabric defines a collection of interconnected compute, network, and storage resources.

The concept of homogenous physical infrastructure, introduced in the Private Cloud Principles, Concepts, and Patterns guide, stipulates that all servers in a Resource Pool should be identical. Homogenizing the compute, storage, and network components in servers allows for predictable scale and performance. In other words, every server in a Resource Pool should have the same processor characteristics such as family (Intel/AMD), number of cores/CPUs, and generation (Xeon 2.6 Gigahertz (GHz)). The homogenized compute concept also stipulates that each server have the same amount of Random Access Memory (RAM) and the same number of connections to Resource Pool storage and networks. With these specifications met, any virtualized service could relocate from one failing or failed physical server to another physical server and continue to function identically.

Physical Server

The physical server hosts the hypervisor and provides access to the network and shared storage. In the Standard Environment, the facilities do not provide power redundancy, so the servers do not require dual power supplies.

Every server will be a member of a single compute Resource Pool and a single Physical Fault Domain. Assuming all servers are homogeneous (as recommended), they will all be members of a single Upgrade Domain.

Capacity Planning must be done for each server specification, as its size (CPU and RAM specification) will determine how many virtual machines it is able to host. This is covered in greater detail in the Private Cloud Planning Guide for Service Delivery.

Server specification selection impacts the Scale Unit, Cost Model, and service class. Scale Units have a finite amount of power and cooling, so server efficiency has an impact on a private cloud. It may be that all power in a Scale Unit is consumed before all physical space. The cost of servers impacts the Cost Model irrespective of whether this cost is passed onto the consumer. Selecting only small one-unit servers will limit the architect’s ability to define a range of service classifications. The server needs to accommodate the largest service classification after the parent partition and hypervisor consume their resources.
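As a purely illustrative example (all figures hypothetical): on a host with 96 GB of RAM, if the parent partition and hypervisor are assumed to reserve 8 GB, 88 GB remain for guests. That host could accommodate eleven 8 GB "large" VMs, or a single 64 GB classification with room to spare, but it could never offer a 128 GB classification no matter how lightly it is loaded.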

Microsoft research shows servers with processors one or two models behind the latest versions offer a better price, performance, and power consumption ratio than the newer processors.

The Private Cloud Reference Architecture dictates that the “concept of homogenization of physical infrastructure” be adopted for each Resource Pool. Server specifications (CPU, RAM) may vary between Resource Pools, but this complicates Fabric Management (defined in the Private Cloud Planning Guide for Systems Management), which spans Resource Pools and Capacity Planning, and may necessitate different service classes for each pool.

Delivering IaaS requires that the service is pre-defined and delivered consistently. To achieve consistent performance, the VMs must have equal resources available to them from each server, in other words, the same CPU cycles and RAM. If servers within a Resource Pool do not provide homogeneous performance and RAM, consistent performance cannot be guaranteed.

Absolute homogenization may be hard to maintain over the long term as server models may be discontinued by the vendor; therefore relationships between Resource Pools, Scale Units, and server model longevity must be considered carefully.

The following table lays out some of the benefits and trade-offs of homogeneous and heterogeneous Resource Pools.

Homogeneous Physical Infrastructure

  • Benefits: predictable performance within a Resource Pool; guaranteed Live Migration across the fabric
  • Trade-off: reuse of existing equipment may not be possible

Heterogeneous Physical Infrastructure

  • Benefits: possible reuse of existing equipment; allows for a broader range of server classes
  • Trade-offs: VMs cannot be moved between Resource Pools; more upfront work to make sure Live Migration will work appropriately

In addition, servers should support the following requirements to achieve an automated infrastructure and resiliency:

Automated Infrastructure

  • Wake On Local Area Network
  • Remote BIOS Upgrades/Configurations
  • Boot from Flash
  • Pre-Boot Execution Environment (PXE) for remote imaging
  • Virtualization Support
    • Data Execution and Prevention
    • 64 bit CPUs
  • Standard Environment: 2 Network adapters that support TCP offload (TOE)
    • Management x 1
    • Consumer x 1
  • High-Availability Environment: 4 or 6 redundant network adapters that support TOE
    • Management x 2: Could be teamed for redundancy
    • Live Migration x 2: Could be teamed for redundancy
    • Consumer x 2: Could be teamed for resiliency
  • Standard Environment: Storage connections that meet the required service classification
    • For Internet Small Computer System Interface (iSCSI), 1 x Hardware iSCSI initiators: Could use vendor-specific software to achieve resiliency
    • For Fiber Channel, 1 x Fiber Channel host bus adapter (HBA): – Could use vendor-specific software to achieve resiliency
  • High-Availability Environment: Redundant storage connections that meet the required service classification
    • For iSCSI, 2 x Hardware iSCSI initiators: Could use vendor-specific software to achieve resiliency
    • For Fiber Channel, 2x Fiber Channel HBAs: Could use vendor-specific software to achieve resiliency

To dynamically initiate remediation events in case of failure or impending failure of server components, each server is required to display warnings, errors, and state information for the following:

  • CPU
    • State (Busy/Ready)
    • Utilization
    • Heat
    • Fans
  • RAM
    • Utilization
    • Error-Correcting Code (ECC) Errors
  • Storage
    • Read/Write Failures
    • Predictive Failures
  • Network Interface Cards (NICs)
    • Port State
    • Send/Receive Errors
  • Motherboard
    • Server Post Errors
  • Power Supply
    • State
    • Active / Passive
    • Power Output Variations
  • Fans
    • Speed
    • State

Storage

To achieve the perception of infinite capacity, proactive Capacity Management must be performed, and storage capacity added ahead of demand. The amount of storage added as a single unit (a Storage Scale Unit) will depend on the rate of storage consumption, hardware vendor lead time, and the level of risk the business wishes to assume (that is, weighing remaining unallocated capacity against the possibility of exhausting all capacity). This is detailed in Private Cloud Planning Guide for Service Delivery.
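As a purely illustrative example (all figures hypothetical): if the organization consumes roughly 2 TB of storage per month and the vendor lead time for new arrays is three months, then at least 2 × 3 = 6 TB of unallocated capacity should remain at the point a new Storage Scale Unit is ordered; a more risk-averse business might add a further month's buffer and trigger the order when 8 TB remain.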

Storage will be placed in Storage Resource Pools, from which it is automatically allocated to consumers. Though Resource Pools are not a new concept for Storage Area Networks (SANs), allowing the infrastructure to allocate storage on-demand based on policy may be a new approach for many organizations. Further, the SAN must present an application programming interface (API) to Fabric Management to allow automation of allocation and provisioning.

The storage provided within a private cloud must be consistent in performance and availability. This means the Input/output (I/O) Operations per Second (IOPS) cannot vary significantly. If there is a need to make different levels of storage performance available to users of a private cloud, it can be accomplished through multiple service classifications. A private cloud is intended, however, to provide a limited set of standardized services; therefore, variances should be carefully considered.

The cost of providing the storage within a private cloud should be clearly defined. This permits metering, and possibly allocation of costs to consumers. If different classes of storage are provided for different levels of performance, their costs should be differentiated. For example, if SAN is being used in an environment, it is possible to have storage tiers where faster Solid State Drives (SSD) are used for more critical workloads. Less-critical workloads can be placed on Tier 2 Serial Attached SCSI (SAS) drives, and even less-critical workloads on Tier 3 SATA drives.

The Private Cloud Reference Architecture assumes the storage arrays and the storage network are redundant, with no single point of failure beyond the array itself. In this regard, the storage array can be considered a Fault Domain.

The design should adopt some form of de-duplication technology to reduce storage consumption.

As the storage array is a single point of failure, it should display health information to the systems monitoring service to make sure that any outages and their impact are quickly identified. Providing snapshots and mirroring between arrays for continuity is beyond the scope of this guidance.

Physical Storage Switches

If an Architect follows the recommendation to allow any VM to execute on any server in a Resource Pool, Virtual Hard Disks (VHDs) should reside on a SAN. While it is possible to host VHDs locally, this guidance assumes that they are hosted on a SAN.

A key decision in private cloud design is whether to use iSCSI or Fiber Channel for storage. If iSCSI is utilized to house virtual workload storage, it is suggested that each virtualization host include iSCSI HBAs instead of standard NICs for performance reasons.

The purpose of a storage switch is to provide resilient and flexible connectivity between shared storage and physical servers. The storage switch must meet peak storage I/O requirements for the virtual services. In addition, the interconnect speeds between switches should be evaluated to determine the maximum throughput for switch-to-switch communications. This may limit the maximum number of hosts that can be placed on each switch.

While switch throughput is important, attention should also be paid to the number of available switch ports needed to support the physical virtualization hosts. Refer to the switch hardware vendor to make sure it meets these requirements.

Physical storage switch requirements include:

  • Dedicated switch port on each switch for each host and storage processor connection. This is needed for redundancy and I/O optimization.
  • iSCSI traffic separated from all other IP traffic, preferably on its own switched infrastructure or logically through a virtual local area network (VLAN) on a shared IP switch. This segregates data access from traditional network communications for host-to-host and workload operations and provides data security.
  • Redundant power supplies and cooling fans increase the number of faults the storage switch can withstand.
  • Programmatic interface to support firmware upgrades and configuration.

Physical Storage Subsystem

Stateless workloads can be hosted on Direct-Attached Storage (DAS) instead of SAN, driving down the cost of service. The downside is that Fabric Management has to handle transitioning active user connections between VMs homed on different hosts, as VM migration is impossible. This may mean tighter integration with the network than is specified in this document (in order to know when all connections to a VM have been abandoned or terminated before stopping the VM, for example).

SAN storage, while more expensive, provides advantages:

  • The VM can be re-homed to other servers.
  • Live Migration can be employed.
  • Backup (of the VM) can occur out of band (for example, taking snapshots).
  • Capacity can be increased almost limitlessly.

The logical storage configuration (or storage classification) should be designed to meet requirements in the following areas:

  • Capacity: To provide the required storage space for the virtual service data and backups.
  • Performance Delivery: To support the required number of IOPS and throughput.
  • Fault Tolerance: To provide the desired level of protection against hardware failures. If a SAN is used, this may include redundant HBA and switches.
  • Manageability: To provide a high degree of platform self-management. This requires a programmatic interface to provide automated configuration and firmware upgrades.

Additionally, a private cloud must meet the following requirements to make sure that it is highly available and well-managed:

  • Multiple paths to the disk array for redundancy. Should a disk fail, hot or warm spare disks can provide resiliency in the provisioned storage. Consult the storage vendor for specific recommendations.
  • A storage system with automatic data recovery, to allow an automatic background process to rebuild data onto a spare or replacement disk drive when another disk drive in the array fails.
  • Redundant power supplies and cooling fans, to increase the number of faults the Storage Array can withstand.

Network

The Private Cloud Reference Architecture assumes that the network presented to servers is not redundant for the Standard Environment and is redundant for the High-Availability Environment.

The network is tightly coupled with physical servers. Each Compute Resource Pool includes the network switches necessary for the servers to operate; each Scale Unit includes a pre-defined and fixed number of servers and switches.

The switches must be monitored to make sure no workloads saturate the network. A private cloud is designed as a general-purpose infrastructure. Workloads that challenge the network with high utilization may not be good candidates for a private cloud unless separate Resource Pools are created specifically to handle these workloads.

Switches are members of network upgrade domains, but the definition and membership of upgrade domains will likely vary depending on the nature of the upgrade. If switches are not redundant (for example, in the Standard Environment), the whole Resource Pool will need to be taken offline for switch maintenance, which requires switch reboots.

Network hardware (switches and load balancers) must display an API to Fabric Management that enables automated management of networks such as creation of VLANs, Virtual IP addresses (VIP), and addition or removal of hosts from the VIP.

Physical Network

Some key decisions that affect the bandwidth of the physical networks relate to the use of Live Migration, the requirements for port security, and the need for link aggregation. Here is a table showing the benefits and trade-offs of using Live Migration:

Use Live Migration

  • Benefits: transparent movement of stateful applications; transparent infrastructure upgrades
  • Trade-offs: additional network switch ports will be required; more network adapters are required per virtualization host; greater Reserve Capacity may be required because of the cluster size limitation of 16 nodes

Do Not Use Live Migration

  • Benefits: fewer switch ports are required; fewer network adapters are required per virtualization host; ideal for stateless applications
  • Trade-offs: no transparent movement of stateful applications; for stateful applications, infrastructure upgrades will need to be coordinated with VM owners

To support the dynamic characteristics of a private cloud, a network switch should support a remote programmatic interface – for firmware upgrades, and prioritization of traffic for quality of service. These switches should be dedicated for a private cloud to maintain predictable performance and to minimize risks associated with human interaction. As defined earlier, the servers need to be connected to at least two networks, management and consumer, with live migration (if required). The connections should always be the same; for example, network adapter 1 to management, network adapter 2 to consumer, and network adapter 3 to Live Migration.

If iSCSI is chosen for the storage interconnects, iSCSI traffic should reside in an isolated VLAN in order to maintain security and performance levels. This iSCSI traffic should not share a network adaptor with other traffic, for example the management or consumer network traffic.

The interconnect speeds between switches should be evaluated to determine the maximum bandwidth for communications. This could affect the maximum number of hosts which can be placed on each switch.

When designing network connectivity for a well-managed infrastructure, the virtualization hosts should have the following specific networking requirements:

  • Support for 802.1Q VLAN Tagging: To provide network segmentation for the virtualization hosts, supporting management infrastructure and workloads. This is the preferred method to help secure and isolate data traffic for a private cloud.
  • Remote Out-of-band Management Capability: To monitor and manage servers remotely over the network regardless of whether the server is turned on or off.
  • Support for PXE Version 2 or Later: To facilitate automated server provisioning.

To dynamically initiate remediation events in response to the failure or impending failure of network switch components, each switch is required to display warnings, errors, and state information for the following:

  • CPU
    • Utilization
    • Temperature
  • Flash Memory
    • Utilization
  • Interface Details
    • Port State
    • Port Errors
    • Bandwidth Utilization
  • Power Supply
    • State
    • Active / Passive
    • Power Output Variations
  • Fans
    • Speed
    • State

Storage Switch/Subsystem Health Model

To dynamically initiate remediation events in response to either the failure or impending failure of storage switches and storage subsystem components, each component is required to display warnings, errors, and state information for the following:

Storage Switch

  • CPU
    • Utilization
    • Temperature
  • Flash Memory
    • Utilization
  • Interface Details
    • Port State
    • Port Errors
    • Bandwidth Utilization
  • Power Supply
    • State
    • Active / Passive
    • Power Output Variation
  • Fans
    • Speed
    • State

Storage Subsystem

  • CPU
    • Utilization
    • Temperature
  • Flash Memory
    • Utilization
  • Service Processor
    • State
    • Errors
    • IOPS
  • Disks
    • Read / Write Failures
    • Predictive Failures
  • Power Supply
    • State
    • Active / Passive
    • Power Output Variations
  • Fans
    • Speed
    • State

Hypervisor

The hypervisor exposes the VM services to consumers. It needs to be configured identically on all hosts in a Resource Pool, and ideally all hosts in the private cloud. Fabric Management will orchestrate the addition of virtual switches, machines, and disks.

An architect needs to decide whether the private cloud should use CPU Resource Reservations to ensure predictable performance of VMs. This table lists the benefits and trade-offs:

Use CPU Resource Reservations

  • Benefit: consistent VM performance for consumers
  • Trade-offs: a fixed number of VMs per host might lead to low utilization of resources; the toolset may not set resource reservations

Do Not Use CPU Resource Reservations

  • Benefit: a variable number of VMs per host means resource utilization can be maximized
  • Trade-offs: consumers do not experience consistent VM performance; one VM can adversely affect the processing performance of others

The decision is driven by whether efficiency or consistency is more important for the private cloud.

The architect could elect to provide different classes of services – one which uses resource reservations to deliver predictability, and another which shares the resources. Separate Resource Pools could be deployed accordingly, along with differential pricing to incent the consumers to exhibit desired behavior.
Resource reservations will not prevent a host from saturating the network and crippling the performance of other hosts. As stated in the Network section earlier, this needs monitoring.

Parent Partition

The parent partition provides the hypervisor with access to physical resources such as network and storage. It also hosts the hypervisor management interfaces. The parent partition needs to be configured identically on all servers in a Resource Pool.

If an architect elects to create a service classification which depends on consuming LUNs directly (not via the parent partition), the parent partition must be configured to present the pass-through for this storage. Further, this storage must be available to all parent partitions in that Resource Pool to enable VM portability between hosts.

The parent partition displays health information for the server, the parent partition operating system, and the hypervisor. The health monitoring system, in turn, consumes this information to enable Capacity Management and Fabric Management.

Management Layers

Task Execution

Task execution comprises the low-level management operations that can be performed on a platform, generally surfaced through the command line or an Application Programming Interface (API). The capability to execute tasks must not only exist, but the usage semantics should be consistent across members of a fault domain to enable automation using a common format. When differences in semantics exist, the automation layer is forced to compensate for them through custom code in the orchestration, or even to use different execution hosts or engines within a fault domain.

Automation

The automation layer is made up of the foundational automation technology plus a series of single purpose commands and scripts that perform operations such as starting or stopping a virtual machine, restarting a server, or applying a software update. These atomic units of automation are combined and executed by higher-level management systems. The modularity of this layered approach dramatically simplifies development, debugging, and maintenance.

Orchestration

In much the same way that an enterprise resource planning (ERP) system manages a business process such as order fulfillment and handles exceptions such as inventory shortages, the orchestration layer provides an engine for IT-process automation and workflow. The orchestration layer is the critical interface between the IT organization and its infrastructure and transforms intent into workflow and automation.

Ideally, the orchestration layer provides a graphical user interface in which complex workflows that consist of events and activities across multiple management-system components can be combined, to form an end-to-end IT business process such as automated patch management or automatic power management. The orchestration layer must provide the ability to design, test, implement, and monitor these IT workflows.

Service Management

Service management provides the means for automating and adapting IT service management best practices, such as those found in the IT Infrastructure Library (ITIL), to provide built-in processes for incident resolution, problem resolution, and change control.

Self Service

Self Service capability is a characteristic of private cloud computing and must be present in any implementation. The intent is to permit users to approach a self-service capability and be presented with the options available for provisioning in an organization. The capability may be basic, where only a virtual machine with a pre-defined configuration can be provisioned, or it may be more advanced, allowing configuration options on top of the base configuration and leading up to a platform capability or service.

Self service capability is a critical business driver that enables members of an organization to become more agile in responding to business needs with IT capabilities to meet those needs in a manner that aligns and conforms with internal business IT requirements and governance.

This means the interface between IT and the business is abstracted to a simple, well-defined and approved set of service options that are presented as a menu in a portal or available from the command line. The business selects these services from the catalog, begins the provisioning process and is notified upon completion; the business is then charged only for what it actually uses.

This is analogous to capability available on Public Cloud platforms.

The entities that consume self service capabilities in an organization are individual business units, project teams, or any other department in the organization that have a need to provision IT resources. These entities are referred to as Tenants. In a private cloud tenants are granted the ability to provision compute and storage resources as they need them to run their workload. Connectivity to these resources is managed behind the scenes by the fabric management layers of the private cloud.

Tenant administrators are granted access to a self-service portal where they can initiate workflows to provision virtualized services in the appropriate configuration and capacity. For example, compute resources may be available in small, medium or large instance capacities, along with storage of the appropriate size and performance characteristics. Resources are provisioned without any intervention from infrastructure personnel in IT, and the overall progress is tracked by the fabric management layer and reported through the portal.

A chargeback model defines how tenants will be charged for using cloud resources. This is typically the number and size of resources provisioned multiplied by the amount of time they are provisioned. This information is available to tenant administrators through the self-service portal, along with the ability to produce cost reports.
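For example (all rates hypothetical): a tenant that provisions three medium instances at $0.10 per hour for a 720-hour month would be charged 3 × 0.10 × 720 = $216, plus, say, 500 GB of Tier 2 storage at $0.05 per GB-month = $25, for a total monthly chargeback of $241.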

Tenants are granted the ability to manage, monitor and report on the resources that they have provisioned.

How To : Use JSON and SAP NetWeaver together

Background

Imagesap2[1]
In this example, SAP is used as the backend data source, and the NWGW (NetWeaver Gateway) adapter exposes it so that a .NET client can consume the data in OData format.

Since the NWGW component is hosted on premises and our .NET client is hosted in Azure, we consume this data from Azure through the Service Bus relay. While transferring data from on premises to Azure over the SB relay, we faced performance issues for a single user with large volumes of data, as well as with relatively small data for concurrent users. So I did a POC for improving performance by consuming the OData service in JSON format.

What I Did?

I’ve created a simple WCF Data Service which has no underlying data source connectivity. In this service, when the context is initialized, a list of text messages is generated and exposed as OData.

Here is that simple service code:

[Serializable]
public class Message
{
    public int ID { get; set; }
    public string MessageText { get; set; }
}

public class MessageService
{
    List<Message> _messages = new List<Message>();

    public MessageService()
    {
        // Generate 100 in-memory messages so that no backend data source is needed.
        for (int i = 0; i < 100; i++)
        {
            Message msg = new Message
            {
                ID = i,
                MessageText = string.Format("My Message No. {0}", i)
            };
            _messages.Add(msg);
        }
    }

    public IQueryable<Message> Messages
    {
        get
        {
            return _messages.AsQueryable<Message>();
        }
    }
}

[ServiceBehavior(IncludeExceptionDetailInFaults = true)]
public class WcfDataService1 : DataService<MessageService>
{
    // This method is called only once to initialize service-wide policies.
    public static void InitializeService(DataServiceConfiguration config)
    {
        // TODO: set rules to indicate which entity sets
        // and service operations are visible, updatable, etc.
        // Examples:
        config.SetEntitySetAccessRule("Messages", EntitySetRights.AllRead);
        config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V3;
    }
}
I then expose one endpoint to the Azure Service Bus so that the client can consume this service through the SB endpoint. After hosting the service, I’m able to fetch data with a simple OData query from the browser.

I’m also able to fetch the data in JSON format.
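For example, with the (hypothetical) service root used in the client code below, the Messages collection can be queried from the browser as https://abc.servicebus.windows.net/SimpleService/WcfDataService1/Messages. Assuming WCF Data Services 5.1 or later with MaxProtocolVersion set to V3 (as in the service code above), appending ?$format=json to the same URI should return the JSON representation instead of Atom/XML.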

After that, I create a console client application and consume the service from there.

Sample Client Code

class Program
{
static void Main(string[] args)
{
List lst = new List();

for (int i = 0; i < 100; i++)
{
Thread person = new Thread(new ThreadStart(MyClass.JsonInvokation));
person.Name = string.Format(“person{0}”, i);
lst.Add(person);
Console.WriteLine(“before start of {0}”, person.Name);
person.Start();
//Console.WriteLine(“{0} started”, person.Name);
}
Console.ReadKey();
foreach (var item in lst)
{
item.Abort();
}
}
}

public class MyClass
{
public static void JsonInvokation()
{
string personName = Thread.CurrentThread.Name;
Stopwatch watch = new Stopwatch();
watch.Start();
try
{
SimpleService.MessageService svcJson =
new SimpleService.MessageService(new Uri
("https://abc.servicebus.windows.net/SimpleService/WcfDataService1"));
svcJson.SendingRequest += svc_SendingRequest;
svcJson.Format.UseJson();
var jdata = svcJson.Messages.ToList();

watch.Stop();
Console.WriteLine("Person: {0} - JsonTime First Call time: {1}",
personName, watch.ElapsedMilliseconds);

for (int i = 1; i <= 10; i++)
{
watch.Reset(); watch.Start();
jdata = svcJson.Messages.ToList();
watch.Stop();
Console.WriteLine("Person: {0} - Json Call {1} time: {2}",
personName, 1 + i, watch.ElapsedMilliseconds);
}

Console.WriteLine(jdata.Count);
}
catch (Exception ex)
{
Console.WriteLine(personName + ": " + ex.Message);
}
Thread.Sleep(100);
}

public static void AtomInvokation()
{
string personName = Thread.CurrentThread.Name;

try
{
Stopwatch watch = new Stopwatch();
watch.Start();
SimpleService.MessageService svc =
new SimpleService.MessageService(new Uri
("https://abc.servicebus.windows.net/SimpleService/WcfDataService1"));
svc.SendingRequest += svc_SendingRequest;
var data = svc.Messages.ToList();

watch.Stop();
Console.WriteLine("Person: {0} - XmlTime First Call time: {1}",
personName, watch.ElapsedMilliseconds);

for (int i = 1; i <= 10; i++)
{
watch.Reset(); watch.Start();
data = svc.Messages.ToList();
watch.Stop();
Console.WriteLine("Person: {0} - Xml Call {1} time: {2}",
personName, 1 + i, watch.ElapsedMilliseconds);
}

Console.WriteLine(data.Count);
}
catch (Exception ex)
{
Console.WriteLine(personName + ": " + ex.Message);
}
Thread.Sleep(100);
}
}

 

What I Test After That
I tested two separate scenarios:

Scenario I: Single user with small and large volume of data
I measured the data transfer time repeatedly, first in XML format and then in JSON format. Notice that the first call is printed separately in each screenshot, as it takes additional time to connect to the SB endpoint; the secret key authentication happens during that first call.

Small data set (array size 10): consume in XML format.

 

Consume in JSON format:

 

For a small set of data, JSON and XML response times over the Service Bus relay are almost the same.

Consuming Large volume of data (Array Size 100)

 

Here the XML message size is around 51 KB. Now I’m going to consume the same list of data (Array size 100) in JSON format.

 

So from the above test scenario it is very clear that JSON response time is much faster than XML response time, and the reason is message size: in this test, the list of 100 records is 51.2 KB in XML format but only 4.4 KB in JSON.

Scenario II: 100 Concurrent user with large volume of data (array size 100)
In this concurrent user load test, I haven’t done any service throttling or max concurrent connection configuration.

 

In the above screenshot you will find some timeout errors in the XML responses, caused by the high response time over the relay. When I execute the same test with JSON responses, the response time is quite stable and faster than XML, and I don’t get any timeouts.

 

How Easy to Use UseJson()
If you are using WCF Data Services 5.3 or above and VS2012 Update 3, then to consume JSON from the client you only have to instantiate the proxy/context and call .Format.UseJson().

Here you don’t need to load the Edmx structure separately by writing any custom code. .NET CodeGen will generate that code when you add the service reference.

 

But if that code is not generated in your environment, then you have to write a few lines of code to load the edmx yourself and use it as .Format.UseJson(LoadEdmx());

Sample Code for Loading Edmx

public static IEdmModel LoadEdmx(string srvName)
{
string executionPath = Directory.GetCurrentDirectory();
DirectoryInfo di = new DirectoryInfo(executionPath).Parent;
var parent1 = di.Parent;
var srv = parent1.GetDirectories("Service References\\" +
srvName)[0].GetFiles("service.edmx")[0].FullName;

XmlDocument doc = new XmlDocument();
doc.Load(srv);
var xmlreader = XmlReader.Create(new StringReader(doc.DocumentElement.OuterXml));

IEdmModel edmModel = EdmxReader.Parse(xmlreader);
return edmModel;
}
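As a rough illustration (not part of the original sample), the loaded model is simply handed to the same context shown earlier; the service reference name SimpleService is assumed:

SimpleService.MessageService svcJson =
new SimpleService.MessageService(new Uri
("https://abc.servicebus.windows.net/SimpleService/WcfDataService1"));

// LoadEdmx returns an IEdmModel, which UseJson accepts directly.
svcJson.Format.UseJson(LoadEdmx("SimpleService"));
var jdata = svcJson.Messages.ToList();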

How To : Use SharePoint Dashboards & MSRS Reports for your Agile Development Life Cycle

The Problem We Solve

Agile BI is not a term many would associate with MSRS Reports and SharePoint Dashboards. While many organizations first turn to the Microsoft BI stack because of its familiarity, stitching together Microsoft’s patchwork of SharePoint, SQL Server, SSAS, MSRS, and Office creates administrative headaches and requires considerable time spent integrating and writing custom code.

This Showcase outlines the ease of accomplishing three of the most fundamental BI tasks with LogiXML technology as compared to MSRS and SharePoint:

  • Building a dashboard with multiple data sources
  • Creating interactive reports that reduce the load on IT by providing users self-service
  • Integrating disparate data sources

Read below to learn how an agile BI methodology can make your life much easier when it comes to dashboards and reports.

Building a Dashboard with LogiXML vs. MSRS + SharePoint

Microsoft’s only solution for dashboards is to either write your own code from scratch, manipulate SharePoint to serve a purpose for which it wasn’t initially designed, or look to third party apps. Below are some of the limitations to Microsoft’s approach to dashboards:

  • Limited Pre-Built Elements: Microsoft components come with only limited libraries of pre-built elements. In addition to actual development work, you will need to come up with an idea of how everything will work together. This necessitates becoming familiar with best practices in dashboards and reporting.
  • Sophisticated Development Expertise Required: While Microsoft components provide basic capabilities, anything more sophisticated is development resource-intensive and requires you to take on design, execution, and delivery. Any complex report visualizations and logic, such as interactive filters, must be written in code by the developer.
  • Limited Charts and Visualizations: Microsoft has a smaller sub-set of charts and visualization tools. If you want access to the complete library of .NET-capable charts, you’ll still need to OEM another charting solution at additional expense.
  • Lack of Integrated Workflow: Microsoft does not include workflow features sets out of the box in their BI offering.

LogiXML technology is centered on Logi Studio: an elemental, agile BI design environment which lets you simply choose from hundreds of powerful and configurable pre-built elements. Logi’s pre-built elements equip developers with tools to speed development, as well as the processes and logic required to build and manage BI projects. Below is a screen shot of the Logi Studio while building new dashboards.

agile-bi.jpg

Start a free LogiXML trial now.

Logi developers can easily create static or user-customizable dashboards using the Dashboard element. A dashboard is a collection of panels containing Logi reports, which in turn contain table, charts, images, etc. At runtime, the user can customize the dashboard by rearranging these panels on the browser page, by showing or hiding them, and even by changing their contents using adjustable reporting criteria. The data displayed within the panels can be configured, as in any Logi report, to link to other reports, providing drill-down functionality.

 

logi2.jpg

The dashboard displayed above has tabs and user customization enabled. The Dashboard element provides customization features, such as drag-and-drop panel positioning, support for built-in parameters the user can access to adjust the panel’s data contents, and a panel selection list that determines which panels will be displayed. AJAX techniques are utilized for web server interactions, allowing selective updates of portions of the dashboard. Dashboard customizations can be saved on an individual-user basis to create a highly personalized view of the data.

The Dashboard Wizard

The ‘Create a Dashboard’ wizard assists developers in creating dashboards by populating the report definition with the necessary dashboard-related elements. You can easily point to any data source by selecting from a variety of DataLayer types, including SQL, StoredProcedures, Web Services, Files, and more. A simple to use drag and drop SQL Query builder is also integrated, to offer a guided approach to constructing queries when connecting to your database.

logi3.jpg

Using the Dashboard Element

The Dashboard element is used to create the top level structure for all of your interactive panels within the final output. Under your dashboards, you can optionally add any number of Dashboard Panels, Panel Parameters for dynamic filtering, and even automatic refresh features with AJAX-based refresh timers.

logi4.jpg

Changing Appearance Using Themes and Style Sheets

The appearance of a dashboard can be changed easily by assigning a theme to your report. In addition, or as an alternative, you can change dashboard appearance using style. The Dashboard element has its own Cascading Style Sheet (CSS) file containing predefined classes that affect the display colors, font sizes, button labels, and spacing seen when the dashboard is displayed. You can override these classes by adding classes with the same name to your own style sheet file.

See us build a BI app with 3 data sources in under 10 minutes.

Ad Hoc Reporting Creation with LogiXML: Analysis Grid

The Analysis Grid is a managed reporting feature giving end users virtual ad hoc capability. It is an easy to use tool that allows business users to analyze and manipulate data and outputs in multiple and powerful ways.

logi5.jpg

Start a free LogiXML trial now.

Create an Analysis Grid by using the “Create Analysis Grid” wizard, or by simply adding the AnalysisGrid element into your definition file. Like the dashboard, data for the Analysis Grid can be accessed from any of the data options, including SQL databases, web sources, or files. You also have the option to launch the interactive query builder wizard for easy, drag-drop, SQL query creation.

The Analysis Grid is composed of three main parts: the data grid itself, i.e. a table of data to be analyzed; various action buttons at the top, allowing the user to perform actions such as create new columns with custom calculations, sort columns, add charts, and perform aggregations; and the ability to export the grid to Excel, CSV, or PDF format.

The Analysis Grid makes it easy to perform what-if analyses through features like filtering. The Grid also makes data-presentation impactful through visualization features including data driven color formatting, inline gauges, and custom formula creation.

Ad Hoc Reporting Creation with Microsoft

While simple ad hoc capabilities, such as enabling the selection of parameters like date ranges, can be accomplished quickly and easily with Microsoft, more sophisticated ad hoc analysis is challenging due to the following shortcomings.

Platform Integration Problems

Microsoft’s BI strategy is not unified and is strongly tied to SQL Server. To obtain analysis capabilities, you must build cubes in Analysis Services, which is a separate product with its own, different security architecture. Next, you will need to build reports that talk to SQL Server, again using separate products.

Dashboards require a SharePoint portal which is, again, a separate product with separate requirements and licensing. If you don’t use this, you must completely code your dashboards from scratch. Unfortunately, Microsoft Reporting Services doesn’t play well with Analysis Services or SharePoint since these were built on different technologies.

SharePoint itself offers an out of the box portal and dashboard solution but unfortunately with a number of significant shortcomings. SharePoint was designed as a document management and collaboration tool as opposed to an interactive BI dashboard solution. Therefore, in order to have a dashboard solution optimized for BI, reporting, and interactivity you are faced with two options:

  • Build it yourself using .NET and a combination of third party components
  • Buy a separate third party product

Many IT professionals find these to be rather unappealing options, since they require evaluating a new product or components, and/or a lot of work to build and make sure it integrates with the rest of the Microsoft stack.

Additionally, while SQL Server and other products support different types of security architectures, Analysis Services only has support for using integrated Windows NT security models to access cubes and therefore creates integration challenges.

Moreover, for client/ad hoc tools, you need Report Writer, a desktop product, or Excel – another desktop application. In addition to requiring separate licenses, these products don’t even talk to one another in the same ways, as they were built by different companies and subsequently acquired by Microsoft.

Each product requires a separate and often disconnected development environment with different design and administration features. Therefore to manage Microsoft BI, you must have all of these development environments available and know how to use them all.

Integration of Various Data Sources: LogiXML vs. Microsoft

LogiXML is data neutral, allowing you to easily connect to all of your organization’s data spread across multiple applications and databases. You can connect with any data source or data model and even combine data sources such as current data accessed through a web service with past data in spreadsheets.

Integration of Various Data Sources with Microsoft

Working with Microsoft components for BI means you will be faced with the challenge of limited support for non-Microsoft based databases and outside data sources. The Microsoft BI stack is centered on SQL Server databases and therefore the data source is optimized to work with SQL Server. Unfortunately, if you need outside content it can be very difficult to integrate.

Finally, Microsoft BI tools are designed with the total Microsoft experience in mind and are therefore optimized for Internet Explorer. While other browsers and devices might be usable, the experience isn’t optimized and may lack features or render differently.

 

Free & Licensed Windows 8, Azure, Office 365, SharePoint On-Premise and Online Tools, Web Parts, Apps available.
For more detail visit https://sharepointsamurai.wordpress.com or contact me at tomas.floyd@outlook.com

Resource – Office 365 Powershell Commandlets

Before you can start working with the SharePoint Online cmdlets you must first download those cmdlets. Having the cmdlets as a separate download (separate from SharePoint on-premises that is) allows you to use any machine to run the cmdlets.

blog-office365

 

All we have to do is make sure we have PowerShell V3 installed along with the .NET Framework v4 or better (required by PowerShell V3). With these prerequisites in place simply download and install the cmdlets from Microsoft: http://www.microsoft.com/en-us/download/details.aspx?id=35588.

Once installed open the SharePoint Online Management Shell by clicking Start > All Programs > SharePoint Online Management Shell > SharePoint Online Management Shell.

Just like with the SharePoint Management Shell for on-premises deployments the SharePoint Online Management Shell is just a standard PowerShell window. You can see this by looking at the target attribute of the shortcut properties:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoExit -Command “Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking;”

As you can see from the shortcut, a PowerShell module is loaded: Microsoft.Online.SharePoint.PowerShell. Unlike with SharePoint on-premises, this is not a snap-in but a module, which is basically the new, better way of loading cmdlets. The nice thing about this is that, like with the snap-in, you can load the module in any PowerShell window and are not limited to using the SharePoint Online Management Shell.

(The -DisableNameChecking parameter of the Import-Module cmdlet simply tells PowerShell to not bother checking for valid verbs used by the loaded cmdlets and avoids warnings that are generated by the fact that the module does use an invalid verb – specifically, Upgrade). Note that unlike with the snap-in, there’s no need to specify the threading options because the cmdlets don’t use any unmanaged resources which need disposal.

Getting Connected

Now that you’ve got the SharePoint Online Management Shell installed you are now ready to connect to your tenant administration site. This initial connection is necessary to establish a connection context which stores the URL of the tenant administration site and the credentials used to connect to the site. To establish the connection use the Connect-SPOService cmdlet:

Connect-SPOService -Url https://contoso-admin.sharepoint.com -Credential gary@contoso.com

 

Running this cmdlet basically just stores a Microsoft.SharePoint.Client.ClientContext object in an internal static variable (or a sub-classed version of it at least). Future cmdlet calls then use this object to connect to the site, thereby negating the need to constantly provide the URL and credentials. (The downside of this object being internal is that we can’t extend the cmdlets to add our own, unless we want to use reflection which would be unsupported). To clear this internal variable (and make the session secure against other code that may attempt to use it) you can run the Disconnect-SPOService cmdlet. This cmdlet takes no parameters.

One tip to help make loading the module and then connecting to the site a tad bit easier would be to encapsulate the commands into a single helper method. In the following example I created a simple helper method named Connect-SPOSite which takes in the user and the tenant administration site to connect to, however, I default those values so that I only have to provide the password when I wish to get connected. I then put this method in my profile file (which you can edit by typing “ise $profile.CurrentUsersAllHosts”):

function Connect-SPOSite() {

    param (

        $user = "gary@contoso.com",

        $site = "https://contoso-admin.sharepoint.com"

    )

    if ((Get-Module Microsoft.Online.SharePoint.PowerShell).Count -eq 0) {

        Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking

    }

    $cred = Get-Credential $user

    Connect-SPOService -Url $site -Credential $cred

}

 

SPO Cmdlets

Now that you’re connected you can finally do something interesting. First let’s look at the cmdlets that are available. There are currently only 30 cmdlets available to us and you can see the list of those cmdlets by typing “Get-Command -Module Microsoft.Online.SharePoint.PowerShell”. Note that all of the cmdlets will have a noun which starts with “SPO”. The following is a list of all the available cmdlets:

  • Site Groups
  • Users
    • Add-SPOUser – Add a user to an existing Site Collection Site Group.
    • Get-SPOUser – Get an existing user.
    • Remove-SPOUser – Remove an existing user from the Site Collection or from an existing Site Collection Group.
    • Set-SPOUser – Set whether an existing Site Collection user is a Site Collection administrator or not.
    • Get-SPOExternalUser – Returns external users from the tenant’s folder.
    • Remove-SPOExternalUser – Removes a collection of external users from the tenancy’s folder.
  • Site Collections
    • Get-SPOSite – Retrieve an existing Site Collection.
    • New-SPOSite – Create a new Site Collection.
    • Remove-SPOSite – Move an existing Site Collection to the recycle bin.
    • Repair-SPOSite – If any failed Site Collection scoped health check rules can perform an automatic repair then initiate the repair.
    • Set-SPOSite – Set the Owner, Title, Storage Quota, Storage Quota Warning Level, Resource Quota, Resource Quota Warning Level, Locale ID, and/or whether the Site Collection allows Self Service Upgrade.
    • Test-SPOSite – Run all Site Collection health check rules against the specified Site Collection.
    • Upgrade-SPOSite – Upgrade the Site Collection. This can do a build-to-build (e.g., RTM to SP1) upgrade or a version-to-version (e.g., 2010 to 2013) upgrade. Use the -VersionUpgrade parameter for a version-to-version upgrade.
    • Get-SPODeletedSite – Get a Site Collection from the recycle bin.
    • Remove-SPODeletedSite – Remove a Site Collection from the recycle bin (permanently deletes it).
    • Restore-SPODeletedSite – Restores an item from the recycle bin.
    • Request-SPOUpgradeEvaluationSite  – Creates a copy of the specified Site Collection and performs an upgrade on that copy.
    • Get-SPOWebTemplate – Get a list of all available web templates.
  • Tenants
    • Get-SPOTenant – Retrieves information about the subscription tenant. This includes the Storage Quota size, Storage Quota Allocated (used), Resource Quota size, Resource Quota Allocated (used), Compatibility Range (14-14, 14-15, or 15-15), whether External Services are enabled, and the No Access Redirect URL.
    • Get-SPOTenantLogEntry – Retrieves company logs (as of B2 only BCS logs are available).
    • Get-SPOTenantLogLastAvailableTimeInUtc – Returns the time when the logs are collected.
    • Set-SPOTenant – Sets the Minimum and Maximum Compatibility Level, whether External Services are enabled, and the No Access Redirect URL.
  • Apps
  • Connections

It’s important to understand that when working with all of the cmdlets which retrieve an object you will only ever be getting a simple data object which has no ability to act upon the source object. For example, the Get-SPOSite cmdlet returns an SPOSite object which has no methods and, though some properties do have a setter, they are completely useless and the object and its properties are not used by any other cmdlet (such as Set-SPOSite). This also means that there is no ability to access child objects (such as SPWeb or SPList items, to name just a couple).

The other thing to note is the lack of cmdlets for items at a lower scope than the Site Collection. Specifically, there is no Get-SPOWeb or Get-SPOList cmdlet or anything of the sort. This can potentially be quite limiting for most real-world uses of PowerShell and, in my opinion, limits the usefulness of these new cmdlets to just the initial setup of a subscription rather than its long-term maintenance.

In the following examples I’ll walk through some examples of just a few of the more common cmdlets so that you can get an idea of the general usage of them.

Get a Site Collection

To see the list of Site Collections associated with a subscription or to see the details for a specific Site Collection use the Get-SPOSite cmdlet. This cmdlet has two parameter sets:

Get-SPOSite [[-Identity] <SpoSitePipeBind>] [-Limit <string>] [-Detailed] [<CommonParameters>]

Get-SPOSite [-Filter <string>] [-Limit <string>] [-Detailed] [<CommonParameters>]

The parameter that you’ll want to pay the most attention to is the -Detailed parameter. If this optional switch parameter is omitted then the SPOSite objects that will be returned will only have their properties partially set. Now you might think that this is in order to reduce the traffic between the server and the client, however, all the properties are still sent over the wire, they simply have default values for everything other than a couple core properties (so I would assume the only performance improvement would be in the query on the server). You can see the difference in the values that are returned by looking at a Site Collection with and without the details:

PS C:\> Get-SPOSite https://contoso.sharepoint.com/ | select *

LastContentModifiedDate   : 1/1/0001 12:00:00 AM
Status                    : Active
ResourceUsageCurrent      : 0
ResourceUsageAverage      : 0
StorageUsageCurrent       : 0
LockIssue                 :
WebsCount                 : 0
CompatibilityLevel        : 0
Url                       :
https://contoso.sharepoint.com/
LocaleId                  : 1033
LockState                 : Unlock
Owner                     :
StorageQuota              : 1000
StorageQuotaWarningLevel  : 0
ResourceQuota             : 300
ResourceQuotaWarningLevel : 255
Template                  : EHS#1
Title                     :
AllowSelfServiceUpgrade   : False

PS C:\> Get-SPOSite https://contoso.sharepoint.com/ -Detailed | select *

LastContentModifiedDate   : 11/2/2012 4:58:50 AM
Status                    : Active
ResourceUsageCurrent      : 0
ResourceUsageAverage      : 0
StorageUsageCurrent       : 1
LockIssue                 :
WebsCount                 : 1
CompatibilityLevel        : 15
Url                       :
https://contoso.sharepoint.com/
LocaleId                  : 1033
LockState                 : Unlock
Owner                     : s-1-5-21-3176901541-3072848581-1985638908-189897
StorageQuota              : 1000
StorageQuotaWarningLevel  : 0
ResourceQuota             : 300
ResourceQuotaWarningLevel : 255
Template                  : STS#0
Title                     : Contoso Team Site
AllowSelfServiceUpgrade   : True

Create a Site Collection

When we’re ready to create a Site Collection we can use the New-SPOSite cmdlet. This cmdlet is very similar to the New-SPSite cmdlet that we have for on-premises deployments. The following shows the syntax for the cmdlet:

New-SPOSite [-Url] <UrlCmdletPipeBind> -Owner <string> -StorageQuota <long> [-Title <string>] [-Template <string>] [-LocaleId <uint32>] [-CompatibilityLevel <int>] [-ResourceQuota <double>] [-TimeZoneId <int>] [-NoWait] [<CommonParameters>]

The following example demonstrates how we would call the cmdlet to create a new Site Collection called “Test”:

New-SPOSite -Url https://contoso.sharepoint.com/sites/Test -Title "Test" -Owner "gary@contoso.com" -Template "STS#0" -TimeZoneId 10 -StorageQuota 100

 

Note that the cmdlet also takes in a -NoWait parameter; this parameter tells the cmdlet to return immediately and not wait for the creation of the Site Collection to complete. If not specified then the cmdlet will poll the environment until it indicates that the Site Collection has been created. Using the -NoWait parameter is useful, however, when creating batches of Site Collections thereby allowing the operations to run asynchronously.
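For example, a small batch of Site Collections could be queued without waiting for each one to complete; the site names and owner below are placeholders:

"HR", "Finance", "Legal" | ForEach-Object {
    New-SPOSite -Url "https://contoso.sharepoint.com/sites/$_" -Title $_ `
        -Owner "gary@contoso.com" -Template "STS#0" -StorageQuota 100 -NoWait
}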

One issue you might bump into is in knowing which templates are available for your use. In the preceding example we are using the “STS#0” template, however, there are other templates available for our use and we can discover them using the Get-SPOWebTemplate cmdlet, as shown below:

PS C:\> Get-SPOWebTemplate

Name                     Title                         LocaleId  CompatibilityLevel
—-                     —–                         ——–  ——————
STS#0                    Team Site                         1033                  15
BLOG#0                   Blog                              1033                  15
BDR#0                    Document Center                   1033                  15
DEV#0                    Developer Site                    1033                  15
DOCMARKETPLACESITE#0     Academic Library                  1033                  15
OFFILE#1                 Records Center                    1033                  15
EHS#1                    Team Site – SharePoint Onl…     1033                  15
BICenterSite#0           Business Intelligence Center      1033                  15
SRCHCEN#0                Enterprise Search Center          1033                  15
BLANKINTERNETCONTAINER#0 Publishing Portal                 1033                  15
ENTERWIKI#0              Enterprise Wiki                   1033                  15
PROJECTSITE#0            Project Site                      1033                  15
COMMUNITY#0              Community Site                    1033                  15
COMMUNITYPORTAL#0        Community Portal                  1033                  15
SRCHCENTERLITE#0         Basic Search Center               1033                  15
visprus#0                Visio Process Repository          1033                  15

Give Access to a Site Collection

Once your Site Collection has been created you may wish to grant users access to the Site Collection. First you may want to create a new SharePoint group (if an appropriate one is not already present) and then you may want to add users to that group (or an existing one). To accomplish these tasks you use the New-SPOSiteGroup cmdlet and the Add-SPOUser cmdlet, respectively.

Looking at the New-SPOSiteGroup cmdlet you can see that it takes only three parameters, the name of the group to create, the permissions to add to the group, and the Site Collection within which to create the group:

New-SPOSiteGroup [-Site] <SpoSitePipeBind> [-Group] <string> [-PermissionLevels] <string[]> [<CommonParameters>]

In the following example I’m creating a new group named “Designers” and giving it the “Design” permission:

$site = Get-SPOSite https://contoso.sharepoint.com/sites/Test -Detailed

$group = New-SPOSiteGroup -Site $site -Group "Designers" -PermissionLevels "Design"

(Note that I’m setting the Site Collection to a variable just to keep the commands a little shorter; you could just as easily provide the string URL directly).

Once the group is created we can then use the Add-SPOUser cmdlet to add a user to the group. Like the New-SPOSiteGroup cmdlet this cmdlet takes three parameters:

Add-SPOUser [-Site] <SpoSitePipeBind> [-LoginName] <string> [-Group] <string> [<CommonParameters>]

In the following example I’m adding a new user to the previously created group:

Add-SPOUser -Site $site -Group $group.LoginName -LoginName "tessa@contoso.com"

Delete and Recover a Site Collection

If you’ve created a Site Collection that you now wish to delete you can easily accomplish this by using the Remove-SPOSite cmdlet. When this cmdlet finishes the Site Collection will have been moved to the recycle bin and not actually deleted.

If you wish to permanently delete the Site Collection (and thus remove it from the recycle bin) then you must use the Remove-SPODeletedSite cmdlet. So to do a permanent delete it’s actually a two step process, as shown in the example below where I first move the “Test” Site Collection to the recycle bin and then delete it from the recycle bin:

Remove-SPOSite "https://contoso.sharepoint.com/sites/test" -Confirm:$false

Remove-SPODeletedSite -Identity "https://contoso.sharepoint.com/sites/test" -Confirm:$false

 

If you decide that you’d actually like to restore the Site Collection from the recycle bin you can simply use the Restore-SPODeletedSite cmdlet:

Restore-SPODeletedSite "https://contoso.sharepoint.com/sites/test"

Both the Remove-SPOSite and the Restore-SPODeletedSite cmdlets accept a -NoWait parameter which you can provide to tell the cmdlet to return immediately.

Parting Thoughts

There are obviously many other cmdlets available to explore (per the previous list), however, I hope that in the simple samples shown in this article you will find that working with the cmdlets is quite easy and fairly intuitive.

The key thing to remember is that you are working in a stateless environment so changes to an object such as SPOSite will not affect the actual Site Collection in any way and cmdlets like the Set-SPOSite cmdlet will not honor changes made to the properties as it will use nothing more than the URL property to know which Site Collection you are updating.

Though the existence of these cmdlets is definitely a good start and absolutely better than nothing, I have to say that I’m extraordinarily displeased with the number of available cmdlets and with how the module was implemented.

My biggest gripe is that the module is not extensible in any way so if I wish to add cmdlets for the management of SPWeb objects or SPList objects I’d have to create a whole new framework which would require an additional login as I wouldn’t be able to leverage the context object created by Connect-SPOService cmdlet.

This results in a severely limiting product that prevents community and ISV generated solutions from “fitting in” to the existing model. Perhaps one day I’ll create my own set of cmdlets to show Microsoft how it should have been done…perhaps one day I’ll have time for such frivolities :) .

 

Select Master Page App for SharePoint 2013 now available!! (Get the SharePoint 2010 Select Master Page Web Part Free)

In Publishing sites, there will be a layouts or application page through which we can set a custom
or another master page as a default master page. Unfortunately, this is missing in Team Sites.

This is what this solution is all about. It is targeted mainly for Team sites, since publishing sites already have a provision.

It adds a custom ribbon button to the Share and Track group of the Files tab in the Master Page Gallery. This is a SharePoint 2013 Hosted App. Refer to the documentation for the technical details.

 

The following screen shots depict the functionality.







 

The custom ribbon button will not be enabled if a folder is selected or more than 1 item is selected.
But if a file is selected, the button will be enabled, irrespective of the file extension. Upon selecting a file and clicking on the ribbon button, a pop up dialog will appear with the text “Working on it..”.

Then a confirmation alert will appear, asking “Are you sure?”. Once confirmed by the user, a progress message will be displayed in the pop up dialog. If the file selected is not of .master extension, then the user will be displayed an alert “This will work only for master pages.”.

If a master page, which is already set as default, is selected and the ribbon button is clicked, the user will be displayed an alert “The file at <url> is the current default master page. So please select another master page.”. If another master page is selected, then the user will be displayed an alert “Master Page Changed Successfully.

Please press CTRL + F5 for changes to reflect.”. Once the user clicks OK on the alert, the pop up dialog also closes and pressing CTRL + F5 will reflect the updated master page. Any time, the user clicks OK or cancel on the alert screens, the parent screen will be refreshed and the current selection will be cleared.

The app requires Full Control on the host web, since this is required for setting the master page, and that’s precisely why I couldn’t publish this in the Office Store.

The app has been tested on IE9 and the latest versions of Chrome and Firefox. It may not work on IE8 or older versions of other browsers if they don’t support HTML5. The app currently supports only English, and it will set the default master page only on the host web (where the app is installed), not on the sub webs.

The app uses jQuery AJAX and REST APIs of SharePoint 2013.
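Conceptually, the core REST call looks along these lines; this is only a sketch assuming the request runs in the context of the web whose master page should change (the actual app additionally handles the app web / host web boundary and the dialogs described above, and setMasterPage is a hypothetical helper name):

// Sketch only: set the default and custom master page of the current web
// via the SharePoint 2013 REST API using jQuery AJAX.
function setMasterPage(masterUrl) {
    $.ajax({
        url: _spPageContextInfo.webAbsoluteUrl + "/_api/web",
        type: "POST",
        headers: {
            "Accept": "application/json;odata=verbose",
            "Content-Type": "application/json;odata=verbose",
            "X-RequestDigest": $("#__REQUESTDIGEST").val(),
            "X-HTTP-Method": "MERGE"
        },
        data: JSON.stringify({
            "__metadata": { "type": "SP.Web" },
            "MasterUrl": masterUrl,
            "CustomMasterUrl": masterUrl
        }),
        success: function () { alert("Master Page Changed Successfully. Please press CTRL + F5 for changes to reflect."); },
        error: function (xhr) { alert("Error: " + xhr.status); }
    });
}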

To use the app, just upload the .app file to the App Catalog, add/install it to the host team site, trust it, navigate to the Master Page Gallery, and you are good to go.

 

With this App, you will also receive the FREE SharePoint 2010 Select Master Page Web Part!!

It adds a custom ribbon button to the Share and Track group of the Documents tab in the Master Page Gallery.

It is a Sandbox solution and it is implemented to set the master page of only the root site of a site collection, though it can be customized / extended for sub sites. It requires a user to be at least a Site Owner, to avoid unnecessary manipulation of the master page by contributors or other users. Refer to the documentation for the technical details.

The following screen shots depict the functionality.





 

 

How To : Setup MyTask List in SharePoint 2013

Overview

You are using SharePoint 2013 and you have deployed My Sites. You or your users have tasks assigned. But when you or your users visit their My Site, they see the screen below. Despite the users having tasks assigned elsewhere in the system, My Site still shows no tasks, which is incorrect.

123

 

What is My Task List in SharePoint 2013?

By the architecture of the Newsfeed site in SharePoint 2013, the My Tasks list aggregates all SharePoint and Project Server (if installed) task assignments and shows them right on the user’s My Site page. The tasks can be either private tasks or public tasks.

Pre-requisites for proper sync of My Task?

  • Search Service Application – it is very important to have this service enabled and running. The aggregator checks every 3 hours for any new task lists. Although the aggregator also looks for SharePoint events / hints, these are known not to trigger an aggregation reliably, hence the importance of the indexer. It is very important to have an incremental / continuous crawl running.
  • Work Management Service Application (WMA) and the service running on the server.
  • User Profile Synchronization Service

Refreshing the My Tasks Page

The aggregator behind the page is triggered simply by visiting the page within the Newsfeed site, as long as the last trigger was more than 5 minutes ago. This delay preserves the performance of the SharePoint farm. It can be changed using PowerShell, but I highly recommend against doing so for large farm deployments.

Possible problems causing sync not work?

  1. Work Management Service wasn’t running
  2. Search wasn’t indexing anything yet. No indexer meant aggregator could potentially be not performing any aggregation as well.

1234

Solution

  1. Work management Service should run on App Server. If required create one from Central Admin
  2. Work management service application should be created with an app pool which must run with profile app pool account
  3. Create/ensure Incremental Crawls to happen across all the content sources, setup people search, my sites search.
  4. Ensure that continuous crawl is running
  5. Wait till the crawl completes
  6. Review the permission of profile app pool and portal app pool account on the specific databases with dbowner permissions
  • social db
  • sync db
  • profile db
  • state service db
  • manage metadata db
  • my site db
  • portal content db
  • projects content db
  • teams content db
  • communities content db
  • Search db.
  7. User profile synchronization service should be running.
  8. Run IIS reset on all app and WFE servers at the same time.

12345

NEW “Filter My Lists” Web Part now available + FREE Metro UI Master Page when ordering

“Filter My Lists” Web Part

Saves you time with optimal performance

Find what you are looking for with a few clicks, even in cluttered sites and lists with masses of items and documents.

Find exactly what you need and stop wasting your time browsing SharePoint.
Filter the content of multiple lists and libraries in a single step.

Combine search and metadata filters

In a single panel combine item, document and attachment searches with metadata keyword searches and managed metadata filters.

Select multiple filter values from drop-down lists or alternatively use the keyword search of metadata fields with the help of wildcard characters and logical operators.

Export filtered views to Excel

Export filtered views and data to Excel. A print view enables you to print your results in a clear printable format with a single  click.

Keep views clear and concise

Provides a complete set of filters without cluttering list views and keeps your list views clear, concise and speedy. Enables you to filter SharePoint using columns which aren’t visible in list views.

Refine filters and save them for future use, whether private, to share with others or to use as default filters.

FREE Metro Style UI Master Page

 

Screen Capture Medium

Modern UI Master Page and Styles for SharePoint 2010.

This will give the Metro/Modern UI styling of SharePoint 2013 to your SharePoint 2010 team sites.

Features include:
– Quick launch styling
– Global navigation and drop-down styling
– Search box styling and layout change
– Web part header styling
– Segoe UI font

All my Web Parts and Apps are now making use of Knockout.JS !! Template also available at very low price!!

After completing the development of my latest Web Part, the “List Search” Web Part I decided to update all my Web Parts and Apps to using Knockout.JS, starting with the “List Search” Web Part.

This topic came up when I looked at some of my older products that include generic list and library web parts, which would display a few common fields like ID, Title, Description, File Url, etc. Prior to this we solved similar issues with OOB list and library web parts and custom XSLT, by creating a Visual Studio web part for branding purposes only, or by using the Imtech content query web part (which is an XSLT solution by design).

In the end, clients hated XSLT solutions and I hated creating a new web part for every new list or library. That’s where Knockout popped up: why don’t we use Knockout for templates instead of XSLT?

I’ll assume that whoever reads this article knows about creating a web part for SharePoint, SharePoint modules, JavaScript and HTML, so I will not go into those details.

Background

A bit about Knockout

From Knockout web site: “Knockout is a JavaScript library that helps you to create rich, responsive display and editor user interfaces with a clean underlying data model. “

From Wikipedia:

Knockout is a standalone JavaScript implementation of the Model-View-ViewModel pattern with templates. The underlying principles are therefore:

  • a clear separation between domain data, view components and data to be displayed
  • the presence of a clearly defined layer of specialized code to manage the relationships between the view components

Knockout includes the following features:

  • Declarative bindings
  • Automatic UI refresh (when the data model’s state changes, the UI updates automatically)
  • Dependency tracking
  • Templating (using a native template engine although other templating engines can be used, such as jquery.tmpl)

So what’s the deal?

First you have your view model:

 var myViewModel = {
     personName: 'Bob',
     personAge: 123
};

Then you have a view:

The name is <span data-bind="text:personName"></span>

At the end just bind your view to model

 ko.applyBindings(myViewModel);

We’ll talk about model later.

Using the code

Proof of concept

I’ve created an html mock of our web part. This is useful, because we can prepare java scripts, css files, models and views in advance and test it without SharePoint and visual studio.

You can download proof of concept as separate download from the link above.

References

There would be only two file references.

One is knockout library itself

<script type='text/javascript' src="http://knockoutjs.com/downloads/knockout-3.0.0.js"></script>

and the other is css file I’ve added to this project

<link href="css/controls.css" rel="stylesheet" type="text/css" />

Model 

I’ve designed model as Item class. Here it is:

// Item class definition
var Item = function (id, title, datecreated,url,description,thumbnail) {
   this.id = id;
   this.title = title;
   this.datecreated = datecreated;
   this.url=url;
   this.description=description;
   this.thumbnail=thumbnail;
}

It’s called item and it has 6 properties:

  1. id – ID of the item
  2. title – Title of the item
  3. datecreated – Creation date of the item
  4. url – Url of the item
  5. description – Description of the item
  6. thumbnail – Thumbnail of the item

 

View model

Here is the view model

function viewModel1 (){
    var self = this;
    self.items = [
        new Item(2, 'News1 title', '21.10.2013', 'javascript:OpenDialog(2);',
                 'Description News 1', 'img/pic1.jpg'),
        new Item(1, 'News 2 title', '21.02.2013', 'javascript:OpenDialog(1);',
                 'Description News 2', 'img/pic2.jpg')
    ];
}

The view model has a property items, which is in fact a collection of Item objects. For mocking purposes we’ve added two Item objects to this collection (News 1 and News 2).

 

View

Here is the view:

<div class="glwp glwp-central" id="k1">
  <div class="glwpLine"></div>
  <h5><img src="PublishingImages/siteIcon.png" 
          width="28" height="28" align="absmiddle" />
      News</h5>
  <div class="glwpLineGrey"></div>
    <ul data-bind="foreach:items">
      <li>
       <div class="glwpDate"><span data-bind="text: datecreated" ></span>
       <img class="glwpImage" data-bind="attr: { src: thumbnail }" />         
       </div>
       <div class="glwpText glwpText-central" >
        <a data-bind="attr: { href: url, title: title }" style="min-height:70px;">
         <span class="glwpTextTTL" data-bind="text:title"></span><br />
         <span data-bind="text: description"></span>
        </a>
       </div>
       <div class="glwpSep"></div>
      </li>
    </ul>
</div>

What we have here:

It’s pretty simple. We have an unordered list bound to our model. One <li> element would be created for every item of our items collection (data-bind="foreach: items").

 

 

Property binding: 

  • <span data-bind="text: datecreated"></span> – This is the simplest data binding. It writes the datecreated property of the Item object to the text of the span element (like: <span>11/11/2013</span>).
  • <img class="glwpImage" data-bind="attr: { src: thumbnail }" /> – This is a bit more complicated binding. It takes the thumbnail property of the Item object and writes it to the src attribute of the img element.
  • <a data-bind="attr: { href: url, title: title }" style="min-height:70px;"> – It takes the url property and writes it as the href attribute of the a element, and the title property as the title attribute.
  • <span class="glwpTextTTL" data-bind="text:title"></span> – The title property is written as the text of the span element.
  • <span data-bind="text: description"></span> – The description property is written as the text of the span element.

So anyone with a little knowledge of HTML and CSS can customize this template any way they like, as long as the required properties are provided.

 

Binding

ko.applyBindings(new viewModel1(), document.getElementById('k1'));

Note the second parameter in the applyBindings method. It says document.getElementById('k1'). The same id is on the first div in our view (id="k1"). This is helpful if you want to have more than one view model on one page. It tells Knockout to bind this specific model (viewModel1) to a specific template on our page (k1).

 

What do we get from this? We are going to create a web part from this code, and one web part feature is that you can put the same web part on a page several times. So it would be possible to put one web part on a SharePoint page to display news and another to display projects or documents, and they will coexist.

If you look at the source you will notice that we have 2 view models (viewModel1 and viewModel2) and two templates (k1 and k2), and two bindings of course. One binding is for news (with images and description) and one binding is for files (no images, and no descriptions). Templates are slightly different.
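The second binding follows the same pattern; a minimal sketch (viewModel2 and the k2 container are the ones described above) would be:

ko.applyBindings(new viewModel1(), document.getElementById('k1')); // news template
ko.applyBindings(new viewModel2(), document.getElementById('k2')); // files template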

Final result

Here is the final result

SharePoint Part

As I said I will assume that you have some experience with SharePoint development so I will not explain how to create the project and add project items. Project type is standard Visual Studio 2010 SharePoint Empty Project template.

SharePoint part consists of following items:

  • Web part item – KnockoutWp. Standard SharePoint Visual Web part project Item
  • Assets module. SharePoint module project item. We are going to use it for deploying of images and css files (0.png – empty container for images and controls.css – css file for our projects).
  • Layouts mapped folder. We’ll put here editor page for template.

And here is the solution explorer for project:

Assets

We are going to deploy 2 files:

  • 0.png – 1×1 pixel transparent image aka placeholder
  • Controls.css – css file for our template

Both of these items are going to be deployed to Style Library of the SharePoint site collection, so content editors may change it later without need of solution redeployment.

Here is the elements.xml file:
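A minimal sketch of what that manifest contains (assuming the module project item is named Assets and the two files sit in its folder; the actual file in the download may differ slightly):

<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <Module Name="Assets" Url="Style Library/wp" RootWebOnly="TRUE">
    <File Path="Assets\0.png" Url="0.png" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE" />
    <File Path="Assets\controls.css" Url="controls.css" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE" />
  </Module>
</Elements>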

So our assets will end up in the http://oursitecollectionurl/Style Library/wp folder.

KnockoutWp

This is Visual Studio 2010 Visual Web part.

It is consisted of 4 items:

  • KnockoutWp.cs – web part class
  • KnockoutWpUserControl – User control of our web part
  • KnockoutWp.webpart – web part xml file
  • Elements.xml – manifest file

Properties

Web part has following properties:

  • ListUrl (string, required) – url of the list we are displaying.
  • TitleField (string, optional) – display name of the field that would be displayed as Title. If it’s blank Title field would be used.
  • DateField (string, optional) – display name of the field that would be displayed as date. If it’s blank Created field would be used.
  • DescriptionField (string, optional) – display name of the field that would be displayed as Description. If it’s blank it would be omitted.
  • ImageField (string, optional) – display name of the field that would be displayed as Thumbnail picture. If it’s blank it would be omitted.
  • NoOfItems (int) – how many items from the list would be displayed
  • ItemTemplate (string) – html template of the web part. Defines the look of our web part.
  • WpPosition (enum) – Used for a three column layouts. Web part has styles for three zones: right, central and left. Difference is in width, padding and margin. Everything is set in css so you can accommodate it to your environment.

On picture below you can see mapping between Field properties of web part and list item fields.

 

EditorPart

I’ve added one more thing to this web part: an EditorPart class, GenericListPartEditorPart. I’m not going deep into editor parts, but here is some quick info: when you create a public property for a web part, it is automatically displayed in the web part edit panel.

That is a great concept when you need simple properties such as strings, numbers and short lists. If you want a more complicated scenario (as we do here for our web part), it’s not enough.

What I wanted here is a template editor. The template could be reasonably large, so the idea was to have a button in the web part edit panel that opens a large dialog window with the editor. The user works with our template, clicks Apply, and the ItemTemplate web part property is updated.

Template editor KnockoutWpUserControl

This is user control created by Visual Studio, when we added Visual web part project item to the project. It consists of markup ascx file and code behind .ascx.cs file. We will put our markup and our c# code here.

Markup

Here is the complete markup:

<script type='text/javascript' src="http://knockoutjs.com/downloads/knockout-3.0.0.js">
</script>
<style type="text/css">  @import url("/Style Library/wp/controls.css");  </style>
<div class="glwp glwp-<%=PositionClass %>" id="k<%=WpId %>">
  <div class="glwpLine"></div>      
  <h5><img src="<%=Icon %>" width="28" 
    height="28" align="absmiddle"><%=Title %></h5>
    <div class="glwpLineGrey"></div>      
  <asp:Literal ID="LitLayout" runat="server"></asp:Literal>
</div>  

<script type="text/javascript">    
  function OpenDialog(Url) {
    var options = SP.UI.$create_DialogOptions();        
    options.resizable = 1;        
    options.scroll = 1;        
    options.url = Url;
    SP.UI.ModalDialog.showModalDialog(options);    
}         
// Item class         
  var Item = function (id, title, datecreated,url,description,thumbnail) {            
     this.id = id;            
     this.title = title;
     this.datecreated = datecreated;
     this.url=url;
     this.description=description;
     this.thumbnail=thumbnail;
  }         
 //ViewModel goes here (it's created on the server)
 <asp:Literal runat="server" ID="LitItems"></asp:Literal>
 
//Function that opens Template editor. Used only in edit mode of web part       
 function portal_openTemplateEditor(wpid) {       
  var val="";              
  var options = SP.UI.$create_DialogOptions();              
  options.width = 600;             
  options.height = 500;                
  options.url = "/_layouts/KnockoutTemplate/TemplateEditor.aspx?c=" + wpid;
  options.dialogReturnValueCallback =
           Function.createDelegate(null,portal_openTemplateEditorClosedCallback);
  SP.UI.ModalDialog.showModalDialog(options);
}
</script>

The first section of the markup (picture below) has the script reference (Knockout, on a remote server) and the style reference (controls.css in the local document library). Below that is HTML markup that defines the container of the web part (top and bottom borders, width, icon and title). The markup is not the cleanest because I was a little lazy and left some public properties in it. Note <%=PositionClass%>, <%=WpId%> and so on.

These are all public properties of the user control and they are used for presentation:

  • PositionClass – depending on the WpPosition web part property (right, central or left), adds the appropriate css class to the markup and that way defines the width, padding and margin of the web part.
  • WpId – the guid of the web part. It is used to uniquely identify the web part, because we can put several web parts of the same type on a page and everything would crash without this identifier.
  • Icon – is a url to icon that would be displayed on web part. Web part property Title Icon Image URL is used here (this is OOB property)
  • Title –title text of the web part. Text that was entered in the title area of the web part. Web part property Title is used here (this is OOB property)

Last interesting thing here is Literal control LitLayout. This control would hold our ItemTemplate property (html template of our web part).

The second section is a JavaScript function that opens a list item in a dialog window. It is used when the underlying list is not a document library.

The third section consists of the Knockout view model (JavaScript). The Item class definition is self-explanatory (it defines only 6 properties). The rest of the model is created on the server side, so for now there is only the LitItems Literal control there.

The fourth section is just a JavaScript function that is used when editing web part properties. This function opens the template editor in a dialog window.

Code

Properties:

  • Properties from web part
    • Icon – url to the icon
    • Title – title of the web part
    • ListUrl – url to the list
    • TitleField – Title field in the list
    • DateField – Date field in the list
    • ImageField – Image field in the list
    • DescriptionField – Description field in the list
    • NoOfItems – number of items to return
    • Position – position of the web part (right, left or central)
    • ItemTemplate – html template of the web part
    • WpId – guid id of the web part ·
  • UC’s properties
    • PositionClass – css class based on position
    • ColumnMap – dictionary that holds internal names of the list item fields.

Methods: The file has only one method, Page_Load. The code executes with elevated privileges.

In that method we do the following (a minimal sketch of the whole flow follows the list):

  1. Resolve list by the supplied URL (ListUrl property) SPList annList = annWeb.GetList(ListUrl);
  2. Get internal names of the list columns by their Display names SpHelper.GetFieldsInternals(annWeb, annList.Title, TitleField, DateField, DescriptionField, ImageField, columnMap );
  3. Create CAML Query SpHelper.GetGenericQuery(annList, q, NoOfItems);
  4. Execute it
  5. Iterate over SPListItemCollection (coll) and create required JavaScript
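A minimal sketch of that flow inside KnockoutWpUserControl.ascx.cs (simplified; the JavaScript emission and error handling are abbreviated, and the exact view model name written to the Literal is illustrative):

protected void Page_Load(object sender, EventArgs e)
{
    SPSecurity.RunWithElevatedPrivileges(delegate()
    {
        using (SPSite site = new SPSite(SPContext.Current.Site.ID))
        using (SPWeb annWeb = site.OpenWeb(SPContext.Current.Web.ID))
        {
            // 1. Resolve the list from the ListUrl web part property
            SPList annList = annWeb.GetList(ListUrl);

            // 2. Map the supplied display names to internal names
            SpHelper.GetFieldsInternals(annWeb, annList.Title, TitleField,
                DateField, DescriptionField, ImageField, columnMap);

            // 3. + 4. Build the CAML query and execute it
            SPQuery q = new SPQuery();
            SpHelper.GetGenericQuery(annList, q, NoOfItems);
            SPListItemCollection coll = annList.GetItems(q);

            // 5. Emit the JavaScript view model into the LitItems Literal
            StringBuilder sb = new StringBuilder("var viewModel = { items: [");
            foreach (SPListItem item in coll)
            {
                // append "new Item(...)" entries built from the mapped columns here
            }
            sb.Append("]};");
            LitItems.Text = sb.ToString();
        }
    });
}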
Helper class

SPHelper is a helper class and you can find it in the Helpers directory.

It has 3 responsibilities:

  1. To retrieve List Columns Internal names based on supplied List Columns display names (WP properties – TitleField – Title field, DateField, ImageField , DescriptionField ) – GetFieldsInternals method
  2. To create Caml query for retrieving list items – GetGenericQuery method
  3. To retrieve values from SharePoint columns based on their types – GetFieldValue method

 

SharePoint 2013 – Creating a Word document with OOXML

This solution is based on the SharePoint-hosted app template provided by Visual Studio 2012. The solution enumerates through each document library in the host website, and adds the library to a drop-down list.

2008040211105590dad[1]

 

When the user selects a library and clicks a tile, the app creates a sample Word 2013 document by using OOXML in the selected library.
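The sample’s actual code isn’t shown here, but creating a basic Word document with the Open XML SDK generally follows the pattern in this sketch (the method name and message text are illustrative; the resulting bytes would then be uploaded to the selected library):

using System.IO;
using DocumentFormat.OpenXml;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Wordprocessing;

// Build a minimal .docx in memory with a single paragraph of text.
static byte[] BuildSampleDocument(string message)
{
    using (MemoryStream stream = new MemoryStream())
    {
        using (WordprocessingDocument doc =
            WordprocessingDocument.Create(stream, WordprocessingDocumentType.Document))
        {
            MainDocumentPart mainPart = doc.AddMainDocumentPart();
            mainPart.Document = new Document(
                new Body(
                    new Paragraph(
                        new Run(
                            new Text(message)))));
            mainPart.Document.Save();
        }
        return stream.ToArray();
    }
}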

Prerequisites

This sample requires the following:

  • Visual Studio 2012
  • Office Developer Tools for Visual Studio 2012
  • Either of the following:
    • SharePoint Server 2013 configured to host apps, and with a Developer Site collection already created; or,
    • Access to an Office 365 Developer Site configured to host apps.

Key components of the sample

The sample app contains the following:

  • The Default.aspx webpage, which is used to enumerate through each document library in the host website, and render tiles for each MP4 video in the app.
  • The Point8020Metro.css style sheet (in the CSS folder) which contains some simple styles for rendering tiles.
  • The AppManifest.xml file, which has been edited to specify that the app requests Full Control permissions for the hosting web.
  • References to the DocumentFormat.OpenXml assembly provided by the OpenXML SDK 2.5.

All other files are automatically provided by the Visual Studio project template for apps for SharePoint, and they have not been modified in the development of this sample.

Configure the sample

Follow these steps to configure the sample.

  1. Open the SP_Autohosted_OOXML_cs.sln file using Visual Studio 2012.
  2. In the Properties window, add the full URL to your SharePoint Server 2013 Developer Site collection or Office 365 Developer Site to the Site URL property.

No other configuration is required.

Build the sample

To build the sample, press CTRL+SHIFT+B.

Run and test the sample

To run and test the sample, do the following:

  1. Press F5 to run the app.
  2. Sign in to your SharePoint Server 2013 Developer Site collection or Office 365 Developer Site if you are prompted to do so by the browser.
  3. Trust the app when you are prompted to do so.

The following images illustrate the app. In Figure 1 the app has been trusted and libraries added to the drop-down list.

Figure 1. View of the app with drop-down list


In Figure 2, the user has clicked the orange tile. The document is created and the red tile provides a link to the appropriate library (Figure 3), which the user reaches by clicking on the red tile.

Figure 2. Open XML document creator


Figure 3. Document library


Troubleshooting

Ensure that you have SharePoint Server 2013 configured to host apps (with a Developer Site Collection already created), or that you have signed up for an Office 365 Developer Site configured to host apps.

Change log

First release: January 30, 2013.

Related content

System Center Virtual Machine Manager (VMM) 2012 as Private Cloud Enabler (2/5): Fabric, Oh, Fabric

Aside from public cloud, private cloud, and something in between, the essence of cloud computing is fabric. This second article of the 5-part series annotates the concept and methodology of forming a private cloud fabric with VMM 2012. Notice that throughout this article, I use the following pairs of terms interchangeably:

  • Application and service
  • User and consumer

And this series includes:

  • Part 1. Private Cloud Concepts
  • Part 2. Fabric, Oh, Fabric (This article)
  • Part 3. Deployment with Service Template
  • Part 4. Working with Service Templates
  • Part 5. App Controller 

Fabric in Windows Azure Platform: A Simplistic, Yet Remarkable View of Cloud

In cloud computing, fabric is a frequently used term. It is nevertheless not a product, nor a packaged solution that we can simply unwrap and deploy. Fabric is an abstraction, an architectural concept, and a state of manageability that denotes the ability to discover, identify, and manage the lifecycle of instances and resources of a service. In an oversimplified analogy, fabric is the collection of hardware, software, wiring, configurations, profiles, instances, diagnostics, connectivity, and everything else that together form the datacenter(s) where a cloud is running. The Fabric Controller (FC, a term coined by Windows Azure Platform) is likewise an abstraction, signifying the ability and authority to manage the fabric in a datacenter and all instances and associated resources supported by the fabric. As far as a service is concerned, FC is the quintessential owner of the fabric, the datacenters, and, so to speak, the world.

Hence, without needing to explain the underlying physical and logical complexities of a datacenter (how hardware is identified and allocated, how a virtual machine (VM) is deployed to and remotely booted from bare metal, how application code is loaded and initialized, how a service is started and reports its status, how required storage is acquired and allocated, and so on), we can summarize the 3,500-step process of, for example, bringing up a service instance in Windows Azure Platform by simply saying that FC deploys a service instance with fabric.

Fundamentally, what a PaaS user expects is that a subscribed runtime (or “platform,” as preferred) environment is in place so cloud applications can be developed and run. For an IaaS user, it is the ability to provision and deploy VMs on demand. How a service provider, which in a private cloud setting normally means corporate IT, makes PaaS and IaaS available is not a concern for either user. As a consumer of PaaS or IaaS, this is significantly helpful and allows a user to focus on what one really cares about: a predictable runtime to develop applications, and the ability to provision infrastructure as needed, respectively. In other words, what happens under the hood of cloud computing is collectively abstracted and gracefully presented to users as “fabric.” This simplicity brings clarity and elegance by shielding extraordinary, if not chaotic, technical complexities from users. The stunning beauty unveiled by this abstraction is breathtaking.

Fabric Concept and VMM 2012

Similar to what is in Windows Azure Platform, fabric in VMM 2012 is an abstraction that hides the underlying complexities from users and signifies the ability to define and manage resource pools as a whole. This concept is explicitly presented in the UI of the VMM 2012 admin console, as shown here on the right. There should be no mystery at all about what the fabric of a private cloud is in VMM 2012. And a major task in the process of building a private cloud is to define and configure this fabric using the VMM 2012 admin console. Specifically, there are 3 definable resource pools:

  • Servers (i.e. Compute)
  • Networking
  • Storage

Clearly, the magnitude and complexity of the fabric in Windows Azure Platform (public cloud) and that in VMM 2012 (private cloud) are not on the same scale. There are also other implementation details, such as replicating the FC throughout a geo-dispersed fabric, not covered here, which complicate the FC in Windows Azure Platform even more. Nevertheless, the idea of abstracting away details that are not relevant to what a user is trying to accomplish is very much the same in both technologies. In a sense, VMM 2012 is an FC (in simplified form) of the defined fabric consisting of the Servers, Networking, and Storage pools. And in these pools there are functional components and logical constructs that collectively constitute the fabric of a private cloud.

Servers Pool

This pool embodies the containers hosting the runtime execution resources of a service. Host groups contain virtualization hosts as the destinations where virtual machines can be deployed based on authorization and service configurations. Library servers are the repositories of building blocks like images, ISO files, templates, etc. for composing VMs. To automatically deploy images and boot a VM from bare metal remotely via networks, pre-boot execution environment (PXE) servers are used to initiate the operating system installation on a physical computer. Update servers like WSUS are for servicing VMs automatically based on compliance policies. For interoperability, the VMM 2012 admin console can add VMware vCenter Servers to enable the management of VMware ESX hosts. And of course, the console will have visibility into all authorized VMM servers, which form the backbone of the Microsoft virtualization management solution.

Networking Pool

In VMM 2012, the Networking pool is where you define logical networks, assign pools of static IPs and MAC addresses, integrate load balancers, etc. to mash up the fabric. Logical networks are user-defined groupings of IP subnets and VLANs to organize and simplify network assignments. For instance, HIGH, MEDIUM, and LOW can be the definitions of three logical networks such that real-time applications are connected with HIGH and batch processes with LOW, based on a specified class of service. Logical networks provide an abstraction of the underlying physical infrastructure and enable an administrator to provision and isolate network traffic based on selected criteria like connectivity properties, service-level agreements (SLAs), etc. By default, when adding a Hyper-V host to a VMM 2012 server, VMM 2012 automatically creates logical networks that match the first DNS suffix label of the connection-specific DNS suffix on each host network adapter.

In VMM 2012, you can configure static IP address pools and static MAC address pools. This functionality enables you to easily allocate the addresses for Windows-based virtual machines that are running on any managed Hyper-V, VMware ESX or Citrix XenServer host. This feature gives much room for creativity in managing network addresses. VMM 2012 also supports adding hardware load balancers to the VMM console, and creating associated virtual IP (VIP) templates which contain load balancer-related configuration settings for a specific type of network traffic. Those readers with networking or load-balancing interests are highly encouraged to experiment with and assess the networking features of VMM 2012.

Storage Pool

With VMM 2012 admin console, an administrator can discover, classify, and provision remote storage on supported storage arrays. VMM 2012 uses the new Microsoft Storage Management Service (installed by default during the installation of VMM 2012) to communicate with external arrays. An administrator must install a supported Storage Management Initiative – Specification (SMI-S) provider on an available server, followed by adding the provider to VMM 2012. SMI-S is a storage standard for operating among heterogeneous storage systems. VMM 2012 automates the assignment of storage to a Hyper-V host or Hyper-V host cluster, and tracks the storage that is managed by VMM.  Notice that storage automation through VMM 2012 is only supported for Hyper-V hosts.

Where There Is A Private Cloud, There Are IT Pros

Aside from public cloud, private cloud, and something in between, the essence of cloud computing is fabric. And when it comes to a private cloud, it is largely about constructing and configuring fabric. VMM 2012 lays out what the fabric of a private cloud is, and gives prescriptive guidance on how to build it by populating the Servers, Networking, and Storage resource pools. I hope it is clear at this point that, particularly for a private cloud, forming fabric is not a programming commission, but one that relies heavily on the experience and expertise of IT pros in building, operating, and maintaining an enterprise infrastructure. It’s about integrating IT tasks of building images, deploying VMs, automating processes, managing certificates, hardening security, configuring networks, setting IPsec, isolating traffic, walking through traces, tuning performance, subscribing to events, shipping logs, restoring tables, and so on, with the three resource pools. And yes, it’s about what IT professionals do every day to keep the system running. And that brings us to one conclusion.

Private cloud is the future of IT pros. And let the truth be told “Where there is a private cloud, there are IT pros.”

– See more at: http://blogs.technet.com/b/yungchou/archive/2011/08/29/system-center-virtual-machine-manager-vmm-2012-as-private-cloud-enabler-2-5-fabric-oh-fabric.aspx#sthash.xG3tXINR.dpuf

Thoughts on : Customizing the Public Website of Office 365


 

Recently, I attempted a migration from my ASP.NET based Azure website to Office 365. The reason was that I wanted to use SharePoint 2013 for in-page editing and simply try to get the platform to take care of all my business needs.

After a few days, I have reverted back to the Azure web host as I am not satisfied that the service will fulfill my requirements. Here is a recollection of my experiences of the shortcomings in the platform and the points that should be addressed.

Master page editing in the public Office 365 site is not much different from the rest of Office 365 and SharePoint 2013. You have access to the Design Manager and you can open the site with SharePoint Designer.


On the up-side, you can create master pages, create page layouts and add Rich Text areas using the “Multi-Area Page” that allows up to four separate rich text areas. I managed to get the site to look virtually the same when published.

On the down-side, the page contained all the scripts and CSS styles from standard SharePoint and caused the responsive design to break for tablets and phones. I could probably have fixed some of the issues, but the difference in page weight and load time is as follows:

  • Total page weight: 305.2K (Azure .NET) vs. 727K (Office 365)
  • Total non-cached file size: 7.2K vs. 54K
  • Total number of script files: 7 vs. 12
  • Average page load time during load test: 1.67 sec vs. 3.46 sec

I then amended the blog layout. The comments feature from blogs in standard SharePoint is not available, so it uses Facebook instead. I replaced this with a Disqus control. Later on, I started running into several issues when trying to add features.

Issue #1: You cannot define your own content types

The site administration does not contain a link to allow modification of content types or site fields. Trying to navigate to the URL manually presents you with a 403 error. Adding custom content types for your page layouts seems like a simple request. I then tried to inject these using sandbox solutions.

Issue # 2: Sandboxed solutions are not supported

Yes, this link is also gone. You cannot navigate to “Solutions” but you can manually enter the URL. I found a helpful and informative post by Jason Cribbet on the topic and was able to activate my feature. This is, however, not supported by Microsoft and I am now in “not supported” land with my website.

Issue # 3: You cannot create subsites

I was fairly happy until I started to create more content and restricted areas. There is no way to create subsites using the interface. You need to use SharePoint Designer. Again, this is not supported by Microsoft.

Issue # 4: You cannot control feature activation

Yes, features cannot be changed either. This means that you cannot add or remove any functionality outside of apps on the site.

Issue # 5: What is going on with the blog framework and managed navigation?

I could live with the “hacks” and continued to style the blog area. This, in itself, has a number of very strange issues:

  • If you remove the “Blog Tools” web part from the page then the links to blog posts will not work.
  • The pages do not seem to understand changing page layouts. I first had to change the page layout, then disconnect the page from the layout in SharePoint Designer.
  • Managed navigation allows you to use the blog as “/Blog/Post/1/My-Blog-Title” and “/Blog/Date/2013/” etc. The page configuration, however, cannot be changed. If you rename a page then the entire navigation framework will stop working. Just don’t.
  • The blog and blog category lists can still be accessed using the forms URL at “/Lists/Posts/AllItems.aspx” and you cannot change the anonymous behavior. Since you cannot change features, the lockdown feature is out of bounds. I guess you can inject redirects on the pages or try to use PowerShell to reactivate the forms lockdown page feature, but I did not attempt this.

Issue # 6: You cannot recreate the site

So finally, you have hacked this puppy to pieces. You want to recreate the site, so you go into SharePoint administration for Office 365 and delete the site collection. But wait… there is no option to recreate the site. This rectified itself on my test tenant after 24 hours and allowed me to create the public site. It did not, however, fully recreate. Now the site has no web template applied and I get the error message “Sorry, something went wrong: There is no site in the current site subscription matching the HiddenSiteSelection control’s value.”

Summary

Office 365 has a long way to go before it can offer any kind of enterprise solution for the public web. In a sense, it seems that they are just about there but have intentionally limited themselves to support basic usage only. But if that is the case, why allow SharePoint Designer and Design Manager access at all?

I hope that the public website will be improved in upcoming releases and would really like to run my site and blog using SharePoint technology.

Visio for Developers in Office 365

In this post, I’ll introduce some of the new features of interest to developers in Visio 2013. Among these features are:

  • New file format
  • Robust updates to themes
  • The change shape feature (that allows you to replace one shape with another while maintaining shape text)
  • New shape effects
  • Improvements to commenting
  • Coauthoring on SharePoint Server 2013
  • Customizable image clipping
  • Relative geometry
  • Support for Business Connectivity Services (BCS) data
  • Updates to Visio Services in Microsoft SharePoint Server 2013
  • Duplicate page feature

At the end of the post, I provide you with some additional resources for both Visio and general Office development.

 

New file format

Visio 2013 introduces a new file format, based on the Open Packaging Conventions (OPC) standard (ISO 29500, Part 2) and the XML elements from the previous Visio XML file format (.vdx). It is a zipped, XML-based file format similar to the file formats used in other applications.

Because the new file format is supported by both Visio 2013 and Visio Services in Microsoft SharePoint Server 2013, you can save a Visio drawing directly to a SharePoint Server library without having to first publish the file as a Visio Web Drawing (.vdw). Even so, Visio Services can still read and display Visio Web Drawing files.

The new file format includes the following file types (by extension):

  • .vsdx (Visio drawing)
  • .vsdm (Visio macro-enabled drawing)
  • .vssx (Visio stencil)
  • .vssm (Visio macro-enabled stencil)
  • .vstx (Visio template)
  • .vstm (Visio macro-enabled template)

By using existing support for reading and writing to the file format package (such as System.IO.Packaging) and for parsing XML (System.Xml.Linq), you can work with the new file formats programmatically.
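
For example, here is a small, hedged sketch (the part URI used is an assumption and may differ per drawing) that opens a .vsdx package and inspects its parts using System.IO.Packaging and System.Xml.Linq:

// Requires a reference to WindowsBase (for System.IO.Packaging) and System.Xml.Linq.
using System;
using System.IO;
using System.IO.Packaging;
using System.Xml.Linq;

class VsdxInspector
{
    static void Main()
    {
        // The path is a placeholder – point it at any Visio 2013 drawing
        using (Package package = Package.Open(@"C:\Temp\Drawing1.vsdx", FileMode.Open, FileAccess.Read))
        {
            // List every part in the package
            foreach (PackagePart part in package.GetParts())
            {
                Console.WriteLine("{0} ({1})", part.Uri, part.ContentType);
            }

            // Load one page part as XML (assumed URI – check the list above for your file)
            Uri pageUri = new Uri("/visio/pages/page1.xml", UriKind.Relative);
            if (package.PartExists(pageUri))
            {
                using (Stream stream = package.GetPart(pageUri).GetStream())
                {
                    XDocument pageXml = XDocument.Load(stream);
                    Console.WriteLine(pageXml.Root.Name);
                }
            }
        }
    }
}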

Visio 2013 retains the ability to read the old file formats (.vsd, .vss, .vst, .vdx, .vsx, .vtx, .vdw, .vwi). Visio 2013 does not save to the previous Visio XML file format (.vdx). Solutions or tools that consume the previous Visio XML file format (.vdx) files may need to be refactored in order to read the new file format and its schemas.

Visio Services retains the ability to display the Visio Web Drawing (.vdw) format in the browser. It now also renders the new Visio drawing (.vsdx) and Visio macro-enabled drawing (.vsdm) formats.

For more information about the new file format, see the article How to: Manipulate the Visio 2013 file format programmatically.

Themes

Themes have been redesigned in Visio 2013, making use of a greater variety of effects and styles including the integration of Shape Art effects. Users can now decide on an overarching style by applying a theme, personalize the diagram with theme variants, and highlight individual shapes with Quick Styles. ShapeSheet developers can take advantage of these features with new functions and cells in the ShapeSheet.

The user interface for applying theme variants is shown in the following figure.

 

 

You can also manipulate themes at the Page, Shape, and Selection object level. New APIs for working with themes include Page.SetTheme method, Page.SetThemeVariant method, Shape.SetQuickStyle method, and the Selection.SetQuickStyle method.

For more information about new VBA objects and members in Visio 2013, see the Visio Automation reference. For more information about the new ShapeSheet cells in Visio 2013, see the article What’s new for ShapeSheet developers in Visio 2013.

Change Shape

Visio 2013 includes a shape replacement API that enables you to swap one or more shapes for another shape contained in a stencil, while retaining some of the local values from the original shape, like the shape text, shape data, or shape formatting. Shape developers can update the ShapeSheet settings of their custom shapes to specify the Change Shape behavior for their shapes. Among the new APIs for Change Shape are the Shape.ReplaceShape and Selection.ReplaceShape methods and the ReplaceShapesEvent object.

The Change Shape feature lets you easily change a shape (in this case, the green rectangle)…

 

…to another shape, the green diamond.

For more information about the Change Shape feature, see Eric Schmidt’s blog post, Change shapes in Visio 2013.

For more information about new VBA objects and members in Visio 2013, see the Visio Automation reference. For more information about the new ShapeSheet cells in Visio 2013, see the article What’s new for ShapeSheet developers in Visio 2013.

Shape effects

New shape effects such as bevel, 3-D rotation, glow, reflection, and sketching have been added to Visio 2013. The ShapeSheet includes new cells for working with these effects. The following figure shows a shape to which effects have been applied.

You can also use Office VBA objects such as TextFrame2, GlowFormat, and ReflectionFormat and their members to apply shape effects.

For more information about the new ShapeSheet cells in Visio 2013, see the article What’s new for ShapeSheet developers in Visio 2013.

Commenting

Visio 2013 includes a new commenting framework. Comments can now be associated with a particular shape or page. Visio 2013 includes two new objects, Comments and Comment. New APIs for accessing comments programmatically include the Document.Comments, Page.Comments, Shape.Comments, and Page.ShapeComments properties.

The following images show what comments looked like in Visio 2010 and what they look like in Visio 2013.

 

 

Visio Services includes JavaScript APIs to read the comments from a page or shape in a diagram.

Note: You can no longer access comments in the ShapeSheet.

Coauthoring

Visio 2013 includes the ability to co-author diagrams stored on SharePoint or OneDrive. Developers have access to the Document.AfterDocumentMerge event which provides information about diagram changes due to coauthoring. Solution developers also have the ability to disable coauthoring to suit their custom needs by using the NoCoauth cell on the Document ShapeSheet.

Customizable image clipping

Visio 2013 supports defining a Custom Image Clipping path to crop images to any shape. This extends the capabilities of Visio 2010, which supported clipping images in a rectangular way. This functionality is available in the ShapeSheet by using the ClippingPath cell in the Foreign Image Info section.

Relative geometries

In previous versions of Visio, shape geometry was defined by formulas that depended on the height or width of the shape. For example, in Visio 2010 the vertices of many built-in Visio shapes were defined by multiplying the height or width of the shape by a constant. These shapes had Geometry sections that included MoveTo or LineTo rows (for example) with formulas like Width*1 and Height*0.

Visio 2013 now supports relative geometry in the ShapeSheet. Shape developers can now use relative geometries to specify geometries as simple values or formulas, which multiply by the height or width automatically. You can now express Shape vertices by using constants, for instance—you no longer need to express vertices as multiples of the shape width or height. This makes it easier for you to create shapes that have better performance and smaller file sizes. New rows include the RelMoveTo and RelLineTo rows where the X and Y cell values are automatically multiplied by the width or height of the shape (respectively).

Support for Business Connectivity Services (BCS) data

Visio 2013 diagrams can now be connected to external lists on SharePoint Server 2013 servers. An external list is a content source external to SharePoint (for example, a SQL Server table) that has been connected to a SharePoint list by using Microsoft Business Connectivity Services (BCS). Visio Services supports the ability to refresh the Visio diagrams as the data updates.

For more information about what’s new in Visio Services, see the article Visio Services in SharePoint 2013. For more information about Business Connectivity Services (BCS), see Business Connectivity Services in SharePoint 2013.

Improvements in Visio Services

Visio Services in Microsoft SharePoint Server 2013 includes many improvements. As mentioned previously, Visio Services supports the new Visio file format (.vsdx and .vsdm). Visio Services has expanded data refresh and recalculation, including the ability to recalculate formulas across an entire diagram.

For more information about what’s new in Visio Services, see the article Visio Services in SharePoint 2013.

Duplicate page

You can now copy a page and all of its shapes within the same document in Visio 2013. Accordingly, the Page object has a new method, Duplicate, which duplicates the page and returns a new Page object.
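
As a hedged illustration (using the Visio primary interop assembly from managed code; the VBA equivalent is analogous), duplicating a page might look like the following sketch:

// Assumes a reference to the Visio 2013 PIA (Microsoft.Office.Interop.Visio).
using Visio = Microsoft.Office.Interop.Visio;

class DuplicatePageSample
{
    static void Main()
    {
        Visio.Application app = new Visio.Application();
        Visio.Document doc = app.Documents.Add("");        // new blank drawing
        Visio.Page original = doc.Pages[1];
        Visio.Page copy = original.Duplicate();            // new in Visio 2013 – returns the new Page
        copy.Name = "Copy of " + original.Name;
    }
}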

Additional resources

Microsoft Patterns and Practices : A look at the Security Development Life Cycle (SDL)

Microsoft Security Development Lifecycle (SDL) is an industry-leading software security assurance process. A Microsoft-wide initiative and a mandatory policy since 2004, the SDL has played a critical role in embedding security and privacy in Microsoft software and culture.

Combining a holistic and practical approach, the SDL introduces security and privacy early and throughout all phases of the development process. It has led Microsoft to measurable and widely-recognized security improvements in flagship products such as Windows Vista and SQL Server. Microsoft is publishing its detailed SDL process guidance to provide transparency on the secure software development process used to develop its products.

As part of the design phase of the SDL, threat modeling allows software architects to identify and mitigate potential security issues early, when they are relatively easy and cost-effective to resolve. Therefore, it helps reduce the total cost of development.

The SDL Threat Modeling Tool Is Not Just a Tool for Security Experts

The SDL Threat Modeling Tool is the first threat modeling tool which isn’t designed for security experts. It makes threat modeling easier for all developers by providing guidance on creating and analyzing threat models.

The SDL Threat Modeling Tool enables any developer or software architect to:

  • Communicate about the security design of their systems
  • Analyze those designs for potential security issues using a proven methodology
  • Suggest and manage mitigations for security issues

SDL Threat Modeling Process

Capabilities and Innovations of the SDL Threat Modeling Tool

The SDL Threat Modeling Tool plugs into any issue-tracking system, making the threat modeling process a part of the standard development process. Innovative features include:

  • Integration: Issue-tracking systems
  • Automation: Guidance and feedback in drawing a model
  • STRIDE per element framework: Guided analysis of threats and mitigations
  • Reporting capabilities: Security activities and testing in the verification phase

The Unique Methodology of the SDL Threat Modeling Tool

The SDL Threat Modeling Tool differs from other tools and approaches in two key areas:

  • It is designed for developers and centered on software. Many threat modeling approaches center on assets or attackers. In contrast, the SDL approach to threat modeling is centered on the software. This new tool builds on activities that all software developers and architects are familiar with, such as drawing pictures for their software architecture.
  • It is focused on design analysis. The term “threat modeling” can refer to either a requirements or a design analysis technique. Sometimes, it refers to a complex blend of the two. The Microsoft SDL approach to threat modeling is a focused design analysis technique.

 

Brand New: 3 LINQ to Office Providers Available Now!!

The SPSamurai.Office.LINQ namespace contains 3 classes – OutlookProvider (LINQ to Outlook), OneNoteProvider (LINQ to OneNote) and ExcelProvider (LINQ to Excel).

The OutlookProvider is a wrapper class which provides IEnumerable collections over the data of the Outlook COM interface (appointments, contacts, mails, tasks, …).

The OneNoteProvider provides collections of notebooks, sections and pages by manipulating the XML hierarchy tree of OneNote. And the ExcelProvider loads an Excel worksheet and provides column definition and row collections.

All collections are IEnumerable so you can query them with LINQ. The full source code is provided.
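
As a purely hypothetical illustration (the collection and property names below, such as Contacts, Email1Address and FullName, are assumptions – see the linked articles for the real member names), a query against the OutlookProvider could look like this:

// Hypothetical usage sketch – member names are illustrative only.
using System;
using System.Linq;
using SPSamurai.Office.LINQ;

class LinqToOutlookSample
{
    static void Main()
    {
        OutlookProvider outlook = new OutlookProvider();

        var gmailContacts = from c in outlook.Contacts            // assumed collection name
                            where c.Email1Address != null
                               && c.Email1Address.EndsWith("@gmail.com")
                            select c.FullName;                    // assumed property names

        foreach (string name in gmailContacts)
        {
            Console.WriteLine(name);
        }
    }
}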

Check out my articles where I describe the implementation of these 3 classes and how to use them. These articles also contain a lot of LINQ query examples.

Class diagrams:

 

 

 

 

 

Features:

  • Set flag with due date from a predefined list: Today, Tomorrow, This Week, Next Week or Custom
  • Different options of follow-up visualization using combinations of flag, text and date
  • Support for sorting and filtering features
  • Support for different calendars (Gregorian, Japanese Emperor Era, Korean Tangun Era, Hijri, etc.)
  • Datasheet view supported
  • Two-way conversion between ArtfulBits Follow-Up and the standard Microsoft® SharePoint® Date and Time column
  • Language pack support (desired localization can be added by request)

 

Contact me at tomas.floyd@outlook.com for these tools and more SharePoint, Azure and Office 365 Apps, Tools and Web Parts or for specialised custom SharePoint Development

How To : Reserve Resources on the Calendar in SharePoint 2013 / Online

I suppose many of you know about a great calendar feature in SharePoint 2010 called resource reservation. It enables organizing meetings in a useful interface that allows you to select multiple resources, such as meeting rooms, projectors and other facilities, and the required participants, and then to find a time frame that is free for all participants and facilities in the calendar view.

You can switch between week and day views.

Here is a screenshot of the calendar with resource reservation and member scheduling features:

You can change resources and participants in the form of your meeting, find free time frames in the diagram and check double booking:

There are two ways to add the resource reservation feature into SharePoint 2010 calendar:

  1. Enable web feature ‘Group Work Lists’, add calendar and go to its settings. Click ‘Title, description and navigation’ link in ‘General settings’ section. Here check ‘Use this calendar to share member’s schedule?’ and ‘Use this calendar for Resource Reservation?’
  2. Create a site based on ‘Group Work Site’ template.

Here are the detailed instructions: http://office.microsoft.com/en-us/sharepoint-server-help/enable-reservation-of-resources-in-a-calendar-HA101810595.aspx

SharePoint 2013 on-premise

After migrating to SharePoint 2013 I discovered that these features were excluded from the new platform and kept only for backward compatibility.

So you can migrate an application with the booking calendar installed from SharePoint 2010 to SharePoint 2013 and keep the resource reservation functionality, but you cannot activate it on a new SharePoint 2013 application through the default interface.

Microsoft officially explained these restrictions by the unpopularity of the resource reservation feature: http://technet.microsoft.com/en-us/library/ff607742(v=office.15).aspx#section1

First, I found a solution for SharePoint 2013 on-premise. It is possible to display the missing site templates including ‘Group Work Site’. Then you just need to create a site based on this template and you will get the calendar of resources.

Go to C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\TEMPLATE\1033\XML, open the WEBTEMP.XML file, find the element whose Title attribute is ‘Group Work Site’ and change its Hidden attribute from TRUE to FALSE.

SharePoint 2013 Online in Office 365

Perfect, now we can use a free SharePoint booking system based on the standard calendar. But what about SharePoint Online in Office 365? We do not have access to WEBTEMP.XML in its file system.

After some research I developed a sandbox solution that enables hidden ‘Group Work Lists’ feature and adds calendar with resource reservation and member scheduling features. Please, download it and follow the instructions to install:

  1. Go to the site collection settings.
  2. Open ‘Solutions’ area from ‘Web Designer Galleries’ section.
  3. Upload CalendarWithResources.wsp package and activate it.
  4. Now, navigate into the site where you wish to add the calendar with the enabled resource reservation feature.
  5. Open site settings -> site features.
  6. Activate ‘Calendar With Resources’ feature.

Great, now you have Group Calendar with an ability to book resources and schedule meetings.

This solution works for SharePoint 2013 on-premise as well, so you can use it instead of WEBTEMP.XML file modification.

Free download :

WSP File – http://1drv.ms/1f7ZSqO

 

When should I choose to create a mail app versus an add-in for Outlook?


Some of you may or may not be aware that alongside the legacy COM-based Office client object models, Office 2013 supports a new apps for Office developer platform. This blog post is intended to help new and existing Office developers understand the main differences between the COM-based object models and the apps for Office platform. In particular, this post focuses on Outlook, suggests why you should consider developing solutions as mail apps, and identifies those exceptional scenarios where add-ins may still be the more appropriate choice.

Contents:

An introduction to the apps for Office platform

Architectural differences between add-in model and apps for Office platform

Main features available to mail apps

Major objects for mail apps

Reasons to create mail apps instead of add-ins for Outlook

Reasons to choose add-ins

Conclusion

Further references

An introduction to the apps for Office platform

The apps for Office platform includes a JavaScript API for Office and a schema for apps for Office manifests. You can use this platform to extend web services and content into the context of rich and web clients of Office. An app for Office is a webpage that is developed using common web technologies, hosted inside an Office client application (such as Outlook) on-premises or in the cloud. Of the three types of apps for Office, the type that Outlook supports is called mail apps. While you use the legacy APIs—the object model, PIA, and MAPI—to automate Outlook at an application level, you can use the JavaScript API for Office in a mail app to interact at an item level with the content and properties of an email message, meeting request, or appointment. You can publish mail apps in the Office Store or in an internal Exchange catalog. End users and administrators can install mail apps for an Exchange 2013 mailbox, and use mail apps in the Outlook rich client as well as Outlook Web App. As a developer, you can choose to make your mail app available for end users on only the desktop, or also on the tablet or smart phone. You can find more information about the apps for Office platform by starting here: Overview of apps for Office.

Architectural differences between add-in model and apps for Office platform

Add-in model

The Office add-in model offers individual object models for most of the Office rich clients. Each object model is intended to automate the corresponding Office client, and allows an add-in to integrate closely with the behavior of that client. The same add-in can integrate with one or multiple Office applications, such as Outlook, Word, and Excel, by calling into each of the Outlook, Word, and Excel object models. Figure 1 describes a few examples of 1:1 relationships between an Office rich client and its object model.

Figure 1. The legacy Office development architecture is composed of individual client object models.

 

Apps for Office platform

The apps for Office platform includes an apps for Office schema. Using this schema, each app specifies a manifest that describes the permissions it requests, its requirements for its host applications (for example, a mail app requires the host to support the mailbox capability), its support for the default and any extra locales, display details for one or more form factors, and activation rules for a mail app to be available in the app bar.

In addition to the schema, the apps for Office platform includes the JavaScript API for Office. This API spans across all supporting Office clients and allows apps to move toward a single code base. Rather than automating or extending a particular Office client at the application level, the apps for Office platform allows apps to connect to services and extend them into the context of a document, message, or appointment item in a rich or web client. Figure 2 shows Office applications with their rich and web clients sharing a common app platform.

Figure 2. The apps for Office development architecture is composed of a common platform and individual object models.

 

One main difference of note is that the object models were designed to integrate tightly with the corresponding Office client applications. However, this tight integration has a side effect of requiring an add-in to run in the same process as the rich client. The reliability and performance of an add-in often affects the perceived performance of the rich client. Unlike client add-ins, an app for Office doesn’t integrate as tightly with the host application, does not share the same process as the rich client, and instead runs in its own isolated runtime environment. This environment offers a privacy and permission model that allows users and IT administrators to monitor their ecosystem of apps and enjoy enhanced security.

Main features available to mail apps

Contextual activation: Mail app activation is contextual, based on the app’s activation rules and current circumstances, including the item that is currently displayed in the Reading Pane or inspector. A mail app is activated and becomes available to end users when such circumstances satisfy the activation rules in the app manifest.

Matching known entities or regular expression: A mail app can specify certain entities (such as a phone number or address) or regular expressions in its activation rules. If a match for entities or regular expressions occurs in the item’s subject or body, the mail app can access the match for further processing.

Roaming settings: A mail app can save data that is specific to Outlook and the user’s Exchange mailbox for access in a subsequent Outlook session.

Accessing item properties: A mail app can access built-in properties of the current item, such as the sender, recipients, and subject of a message, or the location, start, end, organizer, and attendees of a meeting request.

Creating item-level custom properties: A mail app can save item-specific data in the user’s Exchange mailbox for access in a subsequent Outlook session.

Accessing user profile: A mail app can access the display name, email address, and time zone in the user’s profile.

Authentication by identity tokens: A mail app can authenticate a user by using a token that identifies the user’s email account on an Exchange Server.

Using Exchange Web Services: A mail app can perform more complex operations or get further data about an item through Exchange Web Services.

Permissions model and governance: Mail apps support a three-tier permissions model. This model provides the basis for privacy and security for end users of mail apps.

Major objects for mail apps

For mail apps, you can look at the JavaScript API for Office object model in three layers:

  1. In the first layer, there are a few objects shared by all three types of apps for Office: Office, Context, and AsyncResult.
  2. The second layer in the API that is applicable and specific to mail apps includes the Mailbox, Item, and UserProfile objects, which support accessing information about the user and the item currently selected in the user’s mailbox.
  3. The third layer describes the data-level support for mail apps:
    1. There are CustomProperties and RoamingSettings that support persisting properties set up by the mail app for the selected item and for the user’s mailbox, respectively.
    2. There are the supported item objects, Appointment and Message, that inherit from Item, and the MeetingRequest object that inherits from Message. These objects represent the types of Outlook items that support mail apps: calendar items of appointments and meetings, and message items such as email messages, meeting requests, responses, and cancellations.
    3. Then there are the item-level properties (such as Appointment.subject) as well as objects and properties that support certain known Entities objects (for example Contact, MeetingSuggestion, PhoneNumber, and TaskSuggestion).

Figure 3 shows the major objects: Mailbox, Item, UserProfile, Appointment, Message, Entities, and their members.

Figure 3. Major objects and their members used by mail apps in the JavaScript API for Office.

Figure 4 shows all of the objects and enumerations in the JavaScript API for Office that pertain to mail apps.

Figure 4. All objects for mail apps in the JavaScript API for Office.

Figure 5 is a thumbnail of a diagram with all the objects and members that mail apps use. Zoom into the diagram at http://go.microsoft.com/fwlink/?LinkId=317271.

Figure 5. All objects and members used by mail apps in the JavaScript API for Office.

The following are common reasons why mail apps are a better choice for developers than add-ins:

  • You can use existing knowledge of and the benefits of web technologies such as HTML, JavaScript, and CSS. For power users and new developers, XML, HTML, and JavaScript require less significant ramp-up time than COM-based APIs such as the Outlook object model.
  • You can use a simple web deployment model to update your mail app (including the web services that the app uses) on your web server without any complex installation on the Outlook client. In fact, any updates to the mail app, with the exception of the app manifest, do not require any updating on the Office client. You can update the code or user interface of the mail app conveniently just on the web server. This presents a significant advantage over the administrative overhead involved in updating add-ins.
  • You can use a common web development platform for mail apps that can roam across the Outlook rich client and Outlook Web App on the desktop, tablet, and smartphone. On the other hand, add-ins use the object model for the Outlook rich client and, hence, can run on only that rich client on a desktop form factor.
  • You can enjoy rapid turnaround of building and releasing apps via the Office Store.
  • Because of the three-tier permissions model, users and administrators can appreciate better security and privacy in mail apps than add-ins, which have full access to the content of each account in the user’s profile. This, in turn, encourages user consumption of apps.
  • Depending on your scenarios, there are features unique to mail apps that you can take advantage of and that are not supported by add-ins:
    • You can specify a mail app to activate only for certain contexts (for example, Outlook displays the app in the app bar only if the message class of the user-selected appointment is IPM.Appointment.Contoso, or if the body of an email contains a package tracking number or a customer identifier).
    • You can activate a mail app if the selected message contains some known entities, such as an address, contact, email address, meeting suggestion, or task suggestion.
    • You can take advantage of authentication by identity tokens and of Exchange Web Services.

Reasons to choose add-ins

The following features are unique to add-ins and may make them a more appropriate choice than mail apps in some circumstances:

  • You can use add-ins to extend or automate Outlook at an application-level, because the object model and PIA have extensive integration with Outlook features (such as all Outlook item types, user interface, sessions, and rules). At the item-level, add-ins can interact with an item in read or compose mode. With mail apps, you cannot automate Outlook at the application level, and you can extend Outlook’s functionality in the context of only the read-mode of the supported items (messages and appointments) in the user’s mailbox.
  • You can specify custom business logic for a new item type.
  • You can modify and add custom commands in the ribbon and Backstage view.
  • You can display a custom form page or form region.
  • You can detect events such as sending an item or modifying properties of an item.
  • You can use add-ins on Outlook 2013 and Exchange Server 2013, as well as earlier versions of Outlook and Exchange. On the other hand, mail apps work with Outlook and Exchange starting in Outlook 2013 and Exchange Server 2013, but not earlier versions.

Conclusion

When you are considering creating a solution for Outlook, first verify whether the supported major features and objects of the apps for Office platform meet your needs. Develop your solution as a mail app, if possible, to take advantage of the platform’s support across Outlook clients over the desktop, tablet, and smartphone form factors. Note that there are still some circumstances where add-ins are more appropriate, and you should prioritize the goals of your solution before making a decision.

Further references

Apps for Office and mail apps

How To : Use Powershell Scripts in Office 365 through the SharePoint CSOM

When we first started to work with Office 365, I remember being quite concerned at the lack of PowerShell cmdlets – basically all the commands we’re used to using do not exist there. Here’s a gratuitous graph to illustrate the point:

(Graph: approximate PowerShell cmdlet counts – SharePoint 2010, SharePoint 2013 and SharePoint Online)

So yes, nearly 800 PowerShell commands in SP2013 (up from around 530 in SP2010) down to a measly 30 in SharePoint Online. And those 30 mainly cover basic operations with sites, users and permissions – no scripting of, say, Managed Metadata, user profiles, search and so on. It’s true to say that some of these things are now available down at site-collection scope (needed, of course, when you don’t have a true “Central Admin” site), but there are still “tenant-level” settings that you want to script rather than change manually through the UI.

So what’s a poor developer/administrator to do?

The answer is to write PowerShell as you always did, but embed CSOM code in there. More examples later, but here’s a small illustration:

# get the site collection scoped Features collections (e.g. to activate one) – not showing how to obtain $clientContext here..
$siteFeatures = $clientContext.Site.Features
$clientContext.Load($siteFeatures)
$clientContext.ExecuteQuery()

So we’re using the .NET CSOM, but instead of C# we are using PowerShell’s ability to call any .NET object (indeed, nearly every script will use PowerShell’s New-Object command). All the things we came to love about PowerShell are back on the table:

  • Scripts can be easily amended, no need to recompile (or open Visual Studio)
  • We can debug with PowerGui or PowerShell ISE
  • We can leverage other things PowerShell is good at e.g. easily reading from XML files, using other PowerShell modules and other APIs (including .NET) etc.

Of course, we can only perform operations where the method exists in the .NET CSOM – that’s the boundary of what we can do.

Getting started

Step 1 – understand the landscape

The first thing to understand is that there are actually 3 different approaches for scripting against Office 365/SharePoint Online, depending on what you need to do. It might just be me, but I think that when you start it’s easy to get confused between them, or not fully appreciate that they all exist. The 3 approaches I’m thinking of are:

  • SharePoint Online cmdlets
  • MSOL cmdlets
  • PowerShell + CSOM

This post focuses on the last flavor. I also wrote a short companion post about the overall landscape and with some details/examples on the other flavors, at Using SharePoint Online and MSOL cmdlets in PowerShell with Office 365

Step 2 – prepare the machine from which you will run scripts against SharePoint Online

Option 1 – if you will NOT run scripts from a SP2013 box (e.g. a SP2013 VM):

You need to obtain the SharePoint DLLs which comprise the .NET CSOM, and copy them to a folder on your machine – your scripts will reference these DLLs.

  1. Go to any SharePoint 2013 server, and copy any DLL which starts with Microsoft.SharePoint.Client*.dll from the C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI folder.
  2. Store them in a folder on your machine e.g. C:\Lib – make a note of this location.


Option 2 – if you WILL run scripts from a SP2013 box (e.g. a SP2013 VM):

In this case, there is no need to copy the DLLs – your scripts will reference them in the original SharePoint install location (C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI).

The top of your script – referencing DLLs and authentication

Each .ps1 file which calls the SharePoint CSOM needs to deal with two things before you can use the API – loading the CSOM types, and authenticating/obtaining a ClientContext object. So, you’ll need this at the top of your script:

# replace these details (also consider using Get-Credential to enter password securely as script runs)..
$username = "SomeUser@SomeOrg.onmicrosoft.com"
$password = "SomePassword"
# site URL to connect to – placeholder, replace with yours..
$url = "https://SomeOrg.sharepoint.com/sites/SomeSite"
$securePassword = ConvertTo-SecureString $password -AsPlainText -Force
# the path here may need to change if you used e.g. C:\Lib..
Add-Type -Path "c:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"
Add-Type -Path "c:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"
# note that you might need some other references (depending on what your script does) for example:
Add-Type -Path "c:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Taxonomy.dll"
# connect/authenticate to SharePoint Online and get ClientContext object..
$clientContext = New-Object Microsoft.SharePoint.Client.ClientContext($url)
$credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($username, $securePassword)
$clientContext.Credentials = $credentials
if (!$clientContext.ServerObjectIsNull.Value)
{
    Write-Host "Connected to SharePoint Online site: '$url'" -ForegroundColor Green
}

In the scripts which follow, we’ll include this “top of script” stuff by dot-sourcing TopOfScript.ps1 in every script below – you could follow a similar approach (perhaps with a different name!) or simply paste that stuff into every script you create. If you enter a valid set of credentials and URL, running the script above should see you ready to rumble:


Script examples

Activating a Feature in SPO

Something you might want to do at some point is enable or disable a Feature using script. The script below, like the others that follow it, references my TopOfScript.ps1 script above:

. .\TopOfScript.ps1
[bool]$enable = $true
[bool]$force = $false
# using the Minimal Download Strategy Feature here..
$featureId = [GUID]("87294C72-F260-42f3-A41B-981A2FFCE37A")
# ..and working with the web-scoped Features – use $clientContext.Site.Features for site-scoped Features
$webFeatures = $clientContext.Web.Features
$clientContext.Load($webFeatures)
$clientContext.ExecuteQuery()
if ($enable)
{
    $webFeatures.Add($featureId, $force, [Microsoft.SharePoint.Client.FeatureDefinitionScope]::None)
}
else
{
    $webFeatures.Remove($featureId, $force)
}
try
{
    $clientContext.ExecuteQuery()
    if ($enable)
    {
        Write-Host "Feature '$featureId' successfully activated.."
    }
    else
    {
        Write-Host "Feature '$featureId' successfully deactivated.."
    }
}
catch
{
    Write-Error "An error occurred whilst activating/deactivating the Feature. Error detail: $($_)"
}


Enable side-loading (for app deployment)

Along very similar lines (because it also involves activating a Feature), is the idea of enabling “side-loading” on a site. By default, if you’re developing a SharePoint app it can only be F5 deployed from Visual Studio to a site created from the Developer Site template, but by enabling “side-loading” you can do it on (say) a team site too. Since the Feature isn’t visible (in the UI), you’ll need a script like this:

. .\TopOfScript.ps1
[bool]$enable = $true
[bool]$force = $false
# this is the side-loading Feature ID..
$featureId = [GUID]("AE3A1339-61F5-4f8f-81A7-ABD2DA956A7D")
# ..and this one is site-scoped, so using $clientContext.Site.Features..
$siteFeatures = $clientContext.Site.Features
$clientContext.Load($siteFeatures)
$clientContext.ExecuteQuery()
if ($enable)
{
    $siteFeatures.Add($featureId, $force, [Microsoft.SharePoint.Client.FeatureDefinitionScope]::None)
}
else
{
    $siteFeatures.Remove($featureId, $force)
}
try
{
    $clientContext.ExecuteQuery()
    if ($enable)
    {
        Write-Host "Feature '$featureId' successfully activated.."
    }
    else
    {
        Write-Host "Feature '$featureId' successfully deactivated.."
    }
}
catch
{
    Write-Error "An error occurred whilst activating/deactivating the Feature. Error detail: $($_)"
}


Iterating webs

Sometimes you might want to loop through all the webs in a site collection, or underneath a particular web:

. .\TopOfScript.ps1
$rootWeb = $clientContext.Web
$childWebs = $rootWeb.Webs
$clientContext.Load($rootWeb)
$clientContext.Load($childWebs)
$clientContext.ExecuteQuery()
function processWeb($web)
{
    $lists = $web.Lists
    $clientContext.Load($web)
    $clientContext.ExecuteQuery()
    Write-Host "Web URL is" $web.Url
}
foreach ($childWeb in $childWebs)
{
    processWeb($childWeb)
}


(Worth noting that you also see SharePoint-hosted app webs in the image above, since these are just subwebs, albeit ones which get accessed on the app domain URL rather than the actual host site’s web application URL.)

Iterating webs, then lists, and updating a property on each list

Or how about extending the sample above to not only iterate webs, but also the lists in each – the property I’m updating on each list is the EnableVersioning property, but you could easily use any other property or method in the same way:

. .\TopOfScript.ps1
$enableVersioning = $true
$rootWeb = $clientContext.Web
$childWebs = $rootWeb.Webs
$clientContext.Load($rootWeb)
$clientContext.Load($childWebs)
$clientContext.ExecuteQuery()
function processWeb($web)
{
    $lists = $web.Lists
    $clientContext.Load($web)
    $clientContext.Load($lists)
    $clientContext.ExecuteQuery()
    Write-Host "Processing web with URL " $web.Url
    foreach ($list in $web.Lists)
    {
        Write-Host "-- " $list.Title
        # leave the "Master Page Gallery" and "Site Pages" lists alone, since these have versioning enabled by default..
        if ($list.Title -ne "Master Page Gallery" -and $list.Title -ne "Site Pages")
        {
            Write-Host "---- Versioning enabled: " $list.EnableVersioning
            $list.EnableVersioning = $enableVersioning
            $list.Update()
            $clientContext.Load($list)
            $clientContext.ExecuteQuery()
            Write-Host "---- Versioning now enabled: " $list.EnableVersioning
        }
    }
}
foreach ($childWeb in $childWebs)
{
    processWeb($childWeb)
}


Import search schema XML

In SharePoint 2013 and Office 365, many aspects of search configuration (such as Managed Properties and Crawled Properties, Query Rules, Result Sources and Result Types) can be exported and imported between environments as an XML file. The sample below shows the import operation handled with PS + CSOM:

. .\TopOfScript.ps1
# need some extra types bringing in for this script..
Add-Type -Path "c:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Search.dll"
# TODO: replace this path with yours..
$pathToSearchSchemaXmlFile = "C:\COB\Cloud\PS_CSOM\XML\COB_TenantSearchConfiguration.xml"
# we can work with search config at the tenancy or site collection level:
#$configScope = "SPSiteSubscription"
$configScope = "SPSite"
$searchConfigurationPortability = New-Object Microsoft.SharePoint.Client.Search.Portability.SearchConfigurationPortability($clientContext)
$owner = New-Object Microsoft.SharePoint.Client.Search.Administration.SearchObjectOwner($clientContext, $configScope)
[xml]$searchConfigXml = Get-Content $pathToSearchSchemaXmlFile
$searchConfigurationPortability.ImportSearchConfiguration($owner, $searchConfigXml.OuterXml)
$clientContext.ExecuteQuery()
Write-Host "Search configuration imported" -ForegroundColor Green
(ImportSearchSchema.ps1)

PS CSOM import search schema
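Going the other way (exporting the current search configuration to XML) should follow the same pattern. I haven’t shown that above, so treat the snippet below as an unverified sketch: the ExportSearchConfiguration call and its ClientResult return value are my assumptions about the CSOM Portability API, and the output path is just a placeholder:

. .\TopOfScript.ps1
Add-Type -Path "c:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Search.dll"
$configScope = "SPSite"
$searchConfigurationPortability = New-Object Microsoft.SharePoint.Client.Search.Portability.SearchConfigurationPortability($clientContext)
$owner = New-Object Microsoft.SharePoint.Client.Search.Administration.SearchObjectOwner($clientContext, $configScope)
# assumption: ExportSearchConfiguration returns a ClientResult[string] holding the schema XML..
$exportedXml = $searchConfigurationPortability.ExportSearchConfiguration($owner)
$clientContext.ExecuteQuery()
$exportedXml.Value | Out-File "C:\COB\Cloud\PS_CSOM\XML\ExportedSearchConfiguration.xml"   # placeholder path
Write-Host "Search configuration exported" -ForegroundColor Green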

Summary

As you can hopefully see, there’s lots you can accomplish with the PowerShell and CSOM combination. Anything that can be done with the CSOM API can be wrapped into a script, and you can build up a library of useful PowerShell snippets just like the old days. There are some interesting things that you CANNOT do with CSOM (such as automating the process of uploading/deploying a sandboxed WSP to Office 365), but there ARE approaches for solving even these problems, and I’ll most likely cover this (and our experiences) in future posts.

A final idea on the PowerShell + CSOM front is the idea that you can have “hybrid” scripts which can deal with both SharePoint Online and on-premises SharePoint. For example, on my current project everything we build must be deployable to both SPO and on-premises, and our scripts take a “DeploymentTarget” parameter where the values can be “Online” or “OnPremises”. There are some differences (i.e. branching) in the scripts, but for many operations the same commands can be run.
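As a rough illustration of that idea, the top of such a script might branch on the target something like the sketch below. This is my own illustration rather than the project’s actual script: the parameter name comes from the paragraph above, the site URL is a placeholder, and the only genuinely “Online” piece is the SharePointOnlineCredentials object.

param([string]$DeploymentTarget = "Online")   # or "OnPremises"

Add-Type -Path "c:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"
Add-Type -Path "c:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"

$clientContext = New-Object Microsoft.SharePoint.Client.ClientContext("https://yourtenant.sharepoint.com/sites/somesite")   # placeholder URL
if ($DeploymentTarget -eq "Online")
{
    # SharePoint Online needs explicit claims-based credentials..
    $cred = Get-Credential
    $clientContext.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($cred.UserName, $cred.Password)
}
else
{
    # on-premises – the current Windows identity is used by default, so nothing to set here
}
$clientContext.Load($clientContext.Web)
$clientContext.ExecuteQuery()
Write-Host "Connected to" $clientContext.Web.Url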

How To : Use the Content Query Web Part for SharePoint 2013 Search

Meeting client requirements with SharePoint often involves aggregating items somehow – often we want to display things like “all the overdue tasks across all finance sites”, or “navigation links to all of the subsites of this area” or “related items (e.g. tagged with the same term)” and so on. In SharePoint 2010 there have been two main ways of accomplishing this:

SharePoint-2013-Service-Pack-1-225x93

  • Content Query web part
  • Custom solution built on SPSiteDataQuery (site collection-scoped), SPQuery (list-scoped) or search API

To a lesser extent, using the search web parts as part of a custom solution may also have been an option. Regardless, it was common to need custom code to meet such requirements. Maybe we needed to add paging to the results, or we needed to use some value obtained dynamically through code (e.g. from the current site/current page/current user/something else) – several Codeplex solutions arose from this gap, and lots of lines of code were written.

SharePoint 2013 presents the Content Search web part as a new option – its capabilities mean that simply using the web part (with some front-end work to meet look and feel requirements) will meet many needs, without the use of custom code. If you’re a developer, the following screenshot should give you a clue as to why code won’t be required too often (with one of my favorite options highlighted):

CSWP_BasicsTab_AdvancedMode_PropertyFilterValues

It’s incredibly powerful, and it’s a good idea to understand what it can do.

Understanding the deal with search-based solutions

As the name suggests, the Content Search web part is powered by SharePoint’s search function. As such, there are the following considerations:

  • The CSWP can be configured to “see” items anywhere in SharePoint (potential advantage)
    • In contrast, the CQWP and related SPSiteDataQuery can only search within the current site collection – the site collection “boundary” is a factor
  • Results shown are not guaranteed to be 100% up-to-date (potential disadvantage) 
    • Since a search crawl has to run before any content changes will be shown in search results (remember this can include titles, summaries, images and so on for pages/documents), if a user creates/edits an item it will not be shown immediately. This can be a critical point.
    • Furthermore, my understanding from a FAST engineer is that in SharePoint 2013 there is no longer any means of pushing a document directly into the search index – in previous FAST incarnations, including FAST for SharePoint 2010, there were options such as docpush.exe for “proactively” adding an item to the index, rather than waiting for the next search crawl.
    • That said, it should be possible to obtain much lower indexing latencies in SharePoint 2013 via the “Continuous Crawl” capability. In most deployments, my guess would be that changes would be reflected within a few minutes at most if this is enabled (where previously you may have had an incremental crawl scheduled every 15, 30 or 60 minutes for a SharePoint sites content source).

Summary – if the functionality you are creating needs fully up-to-date results (e.g. a user has created/edited something and it needs to be immediately reflected in the site) then you will probably need to stick with the original approaches (i.e. a query-based rather than search-based solution).

Terminology – new concepts in SharePoint 2013 search

So if we’re going to build solutions built on SP2013 search, we need to have a basic understanding of some concepts – we’ll run into these time and time again:

Concept – my quick definition:

Result Source – Like a search ‘scope’ in SP2007/SP2010, but on steroids. Rules are specified to say what the scope consists of – e.g. DOCUMENTS in my TEAM SITES area (constraining on content type and path in this example).

Created centrally, or at the web level. Result Sources can be used in just about any search-related functionality, including the Content Search web part.

Query Rule – Like a ‘best bet’ on steroids. Ability to show specially formatted results at the top of the results list (e.g. a Promoted Result) for highly-recommended content. In addition to a Promoted Result, we can also add a Result Block (an example could be a block of 5 image results within the main list of text links).

Another option is to Change the Ranked Results – i.e. put something at the top, or promote or demote something by 1-10 (previously known as a ‘boost’ in FAST).

LOTS of flexibility in matching the user’s query, including regular expressions and matching terms in the Managed Metadata store.

Display Templates – A Display Template is a JavaScript template (similar to jQuery templates) which controls formatting – in the case of the CSWP, this effectively replaces the use of XSL for look and feel. There is a separate template to pick for the overall control and for the formatting of an individual item. The .js files for the templates are stored in the ‘Content Web Parts’ subfolder of the Master Page Gallery.

Side note – in the context of a search results page (rather than the CSWP), a Display Template is associated with a Result Type (e.g. Word doc, wiki page, PowerPoint file etc.) and so we have granular control over how each is displayed (and when). Extremely cool.

So, lots of flexibility in the search infrastructure. Let’s see some of this in the context of the Content Search web part.

Configuring the Content Search web part

There are two main aspects to this:

  • Displaying the right items (Search Criteria)
  • Look and feel (Display Templates)

In terms of the search criteria, there is enormous flexibility in what the CSWP – and the underlying search capability – can do. For one thing, it’s possible to either directly configure the query entirely in the properties of this web part instance (e.g. show me all documents which meet criteria X), and/or start from a pre-existing Result Source to do some of the filtering. Combining the approaches will be fairly common – an example could be “search only on wiki pages” (an OOTB Result Source) but only show items tagged with X (this defined directly in the CSWP properties).

Interestingly, configuring a centralized Result Source and a Content Search web part on a page are very similar, even though it would seem some sort of “reusable scope” and a web part are very different things in SharePoint. The overlap comes because underneath both there is a search query which does the work of isolating the desired results – indeed, as we’ll see later the same “Query Builder” UI is used in both places (with a couple of minor differences). So, if you’ve learnt how to configure a CSWP you’ve essentially also learned how to create  a custom Result Source.

 

Configuring the web part

The first thing to understand is that the Content Search web part appears in different guises in the web part gallery. The ‘main’ web part is in the ‘Content Rollup’ category:

CBS_MainWebPartInAdder

But there are also many pre-configured versions available, each of which finds a specific type of content. This is great for end-users who don’t necessarily think in terms of needing a ‘Content Search’ web part:

CBS_WebPartsInAdder
And just to prove the point, the web parts above correspond to the following .webpart definition files in the Web Part Gallery:

CBS_WebParts

Once the web part has been added to the page, it can be configured via its tool pane. The main configuration item is the query to use, and this can be started by clicking the ‘Change query’ button:

CSWP_properties
This opens the “Build Your Query” dialog – it has tabs labeled BASICS, REFINERS, SORTING, SETTINGS and TEST. This dialog is known (unsurprisingly) as the Query Builder – what you might not realize is that it’s used in several places in SharePoint 2013:

  • Configuring a Content Search web part (obviously)
  • Creating a Result Source (specifically in the Query Transform section)
  • Configuring a Search Results web part

There are some differences – for example, when configuring a Search Results web part there is no SORTING tab because this will be handled in the Result Source or the query. I’m going to talk about things from the perspective of the Content Search web part, but will call out any differences for the other usages – so hopefully by learning the CSWP, you also get to learn 75% of the search infrastructure.

BASICS tab – Quick Mode

Although the first tab is labeled ‘BASICS’, I’d say it’s actually the most involved – this is where the query itself is configured, and there is a ‘Quick Mode’ and ‘Advanced Mode’. You’ll also notice – and let me just say I’d personally be willing to give the Product Manager for this feature A BIG HUG for this – that there’s a “live” results preview pane, permanently visible on the right-hand side of the Query Builder. This shows the first 10 results which would display from running the currently configured search against the current index, without the need to save the web part after each change:

CSWP_BasicsTab_QuickMode

Note that if you create your own query, then this preview pane is only able to show results when you are on the TEST tab. And we’ll talk about that towards the end.

Let’s now walk through the various configuration steps in here.

Select a query

In Quick Mode, the dropdown contains the Result Sources (see my definition above if you’ve forgotten already :)) which come out-of-the-box with SharePoint 2013 – one of these may provide a good foundation for what you need:

CSWP_BasicsTab_QuickMode_SelectQuery
As you select a Result Source from the dropdown, other options may become available lower down. So if I want to find items matching a specific content type, I get this:

RestrictByContentType
In fact, this option to restrict by content type appears for many of the pre-defined Result Sources, not just “Items matching a content type” – which makes sense, because it’s a common thing to include as a filter. Similarly, “Items matching a tag” and several other queries give this interface for selecting a tag to filter on:

RestrictByTag
And, happy days, if I specify the tag by typing one I get auto-complete to help me pick the term – this is a fully-fledged Managed Metadata input field. Consequently there’s also full validation of the terms you type-in (though this takes a few seconds to show), so if an author accidentally enters something which isn’t a known term, he/she should spot the mistake immediately:

TermValidation

Consider also that those middle options of using the navigation term associated with the current page is exactly what’s needed to build many types of ‘related items’ functionality – again, no code needed now.

Restrict results by app

In the next section, I can restrict the scope of the results to a particular location (e.g. the current site). This enables me to get something like the Content Query web part behavior of only searching within the current site collection if needed – because although we now have the power, it won’t always make sense to go across the entire farm 🙂

RestrictByApp

Add additional filters

In the next section I can supplement the query with any valid query text, e.g. a property filter. In this example, I’m adding a filter to only present items which were created by the current user:

AdditionalFilter

Sort results

When we scope our query to a pre-defined Result Source (as we are here in the CSWP ‘Quick Mode’), then sorting is usually pre-defined at that level. The CSWP does give us the opportunity to override sorting based on some popularity ranking models (around most viewed/most clicked) instead though – expect proper wording to appear in this dropdown in the RTM version, but you get the idea:

SortResults
So what happens if none of the options presented so far do what you want? An example could be wanting to use an existing Result Source (e.g. ‘wiki pages’) but sort on Last Modified in descending order. Obviously the dropdown above does not allow that. We could create a custom Result Source and implement the query/sorting there, but that only really makes sense if we expect it to be re-used in multiple places.

In these cases, we can click into Advanced Mode (still on the BASICS tab).

BASICS tab – Advanced Mode

In Advanced Mode you basically get to specify the full query text yourself. In my mind, this is like building a solution with the search API in SP2007/SP2010 – I saw many custom solutions (and built several myself) which used the FullTextSqlQuery or KeywordQuery classes to find the right items. SharePoint 2013 makes it much easier to have this full control whilst still piggybacking onto the out-of-the-box web parts – meaning less work and more productivity.

When switching to the Advanced Mode, a couple of things become available:

  • A SORTING tab (details later)
  • Controls to help you build the query (which you’d previously do essentially by hand in earlier versions), with ‘Keyword filter’ and ‘Property filter’ options. These can be combined as you like, and the resulting query text appears in the textbox at the bottom:

CSWP_BasicsTab_AdvancedMode

Avoid custom code by using tokens

There are many tokens which can be used when building a query in this way – often you might want to pass something into the query, such as a URL (querystring) parameter, the value in a particular field on the page, and so on. Being able to do this unlocks a huge range of possibilities for building solutions. This is where the first image in this article comes from – here’s a reminder:

CSWP_BasicsTab_AdvancedMode_PropertyFilterValues

In summary, when using the Advanced Mode of the query builder you should be able to target just about any content in your SharePoint environment.
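One way to sanity-check the query text you end up with, outside of the web part’s TEST tab, is to run it through the search CSOM from PowerShell. The sketch below reuses the TopOfScript.ps1 dot-sourcing and the Search client DLL from the earlier samples; the KQL (content type plus a path filter) and the site URL in it are placeholder values, not anything from a real environment:

. .\TopOfScript.ps1
Add-Type -Path "c:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Search.dll"
$keywordQuery = New-Object Microsoft.SharePoint.Client.Search.Query.KeywordQuery($clientContext)
# placeholder KQL – e.g. documents under a particular path
$keywordQuery.QueryText = 'ContentType:Document path:"https://intranet.contoso.com/sites/finance"'
$keywordQuery.RowLimit = 10
$searchExecutor = New-Object Microsoft.SharePoint.Client.Search.Query.SearchExecutor($clientContext)
$results = $searchExecutor.ExecuteQuery($keywordQuery)
$clientContext.ExecuteQuery()
# the first result table is the 'RelevantResults' set..
foreach ($row in $results.Value[0].ResultRows)
{
    Write-Host $row["Title"] "-" $row["Path"]
}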

SORTING tab (Advanced Mode only)

In SharePoint 2010 Enterprise Search, you could only sort by relevance/rank (the normal search engine approach) or date. FAST for SharePoint 2010 had more options (you could sort by a Managed Property). In SharePoint 2013, frankly the sort options alone are enough to blow your mind 🙂  If you don’t need anything specific around sorting then you can skip this bit, but if you do then here are your options:

First you can sort by way more things than just rank and date:

CSWP_SortTab
One thing to note there – I’m unclear as to what makes it into that ‘Sort by’ list and what does not. It’s not Managed Properties as far as I can tell, so although the list is long many options may not be hugely useful. Still, better than before.

Usefully, you can now do multi-level sorting (sort by this, then by that). The ‘Add sort level’ link in the image above adds another row, allowing me to do things like sorting by URL depth (so items higher up in the site hierarchy show at the top), and then by rank (that makes sense, because there’ll be lots of items at the same URL depth so I do need two levels of sorting):

CSWP_SortTab_Custom

Note that effectively what I’m doing here is building some sort of custom ranking model. This works great if I need something very specific on sorting, but also note SharePoint 2013 comes with several ranking models – the next section allows me to pick from these if I’ve left the ‘Sort by’ dropdown on ‘Rank’, unlike in the image above. This is because all these options are effectively different forms of rank – most are around People Search or popularity:

CSWP_SortTab_RankingModel

And for those occasions when the client is telling you that his/her strategic document really has to be on page 1 of the results (but not a Promoted Result/best bet), you have ‘Dynamic ordering’ – you can boost/demote results, including the option to promote to the top:

CSWP_SortTab_DynamicOrdering

REFINERS tab

In the context of search, refiners are usually the links on the search engine’s results page (typically in the left nav) which allow the user to further filter the results. So if I do a search for “meeting minutes” and get lots of results, it would be nice to be able to filter by, say:

  • Date range
  • SharePoint site (since minutes might be stored in individual project sites)
  • Author
  • ..and so on

However, in the context of the Content Search web part, refiners actually allow you to do this filtering as part of the initial query. The REFINERS tab is effectively a convenience to you, the person configuring the web part – what happens is that a search is performed whilst in edit mode, and all relevant refiners (e.g. managed properties) are presented as available refiners. These can be selected and moved over to the right-hand list:

CSWP_RefinersTab
The effect of this is that a further filter is added to my query. In the example above, this may be easier than using a Property Filter on the BASICS tab – since there I have little support, I just select the property and type the value:

CSWP_BasicsTab_PropertyFilter
In the REFINERS tab, SharePoint is doing the search for me (as it’s configured so far), and only coming back with values which have been found in the returned results.

SETTINGS tab

The SETTINGS tab controls some high-level options for running the search:

CSWP_SettingsTab

Query rules

Since these can be defined at the parent site or search service, it could be the case that your CSWP gets affected by one of these. As the radio button shows, this can be overridden, but consider that some types of Query Rules may not have an effect anyway – as a reminder (from the table at the beginning), a Query Rule can either:

  • Add a promoted result
  • Add a result block
  • Change the ranked results somehow (by modifying the query)

Out of these 3 actions, 1.5 of them could affect the results of a ‘default’ CSWP. This can be summarized:

Query Rule Action – will it affect CSWP results?

  • Add a promoted result – Not by default. When a search runs in SharePoint, multiple result sets are returned (e.g. ‘main results’, ‘best bet results’ and so on – in SP2013, the real names for these are ‘RelevantResults’, ‘SpecialTermResults’, ‘PersonalFavoriteResults’ and ‘RefinementResults’). Although a CSWP can be configured to show any table, the default is ‘RelevantResults’ – and a promoted result gets added to ‘SpecialTermResults’.
  • Add a result block – Yes, if the result block is configured to show ‘ranked within core results’ (the default), rather than ‘shown above core results’.
  • Change ranked results – Yes.

For completeness, here’s the place in the CSWP where you select which search result set to use (e.g. if you want to switch from the default of ‘RelevantResults’):

CSWP_ResultTableSelection

Options in the Results Table dropdown (shown to the left):

CSWP_ResultTableSelectionOptions

URL rewriting

This one is fairly simple – if results are being returned from a catalog which is using “friendly” URLs, then the CSWP can override this to use the original URLs. It may not always make sense to use rewritten URLs in aggregations outside of the catalog pages, especially if you’ve implemented anything funky there.

Loading behavior

This is useful – specify whether the CSWP web part instance should load in the main page load (default) or in an AJAX manner after the main page has finished. Considering that a CSWP could either be the centerpiece of your landing page or merely some page footer navigation, it’s nice to be able to prioritize in this way.

Priority

Similarly, we can actually specify High, Medium or Low priority for each CSWP instance we use – great for the different usages we will have, although as per the description, note this only has any effect if the search service is overloaded.

TEST tab

The TEST tab is hugely useful – it provides you the ability:

  • To see the underlying query text (in Keyword Query Language [KQL]) which has been generated (though it must be edited in other tabs)
  • To see the preview when you are defining a query yourself (the preview pane will be empty on other tabs in this scenario)

CSWP_TestTab_Less
Which is all great, but at first glance it’s easy to miss some extra functionality – if the ‘Show more’ link is clicked, other information becomes visible including details on any refiners and Query Rules which have been applied. So below I can see that a custom Query Rule I created has indeed been used, so there’s no guesswork on (for example) whether a certain item is actually being promoted or not:

CSWP_TestTab_More

Sidenote – listing items from ONE site/list/library with the Content Search web part

Worthy of a quick note – if all you need to do is roll-up content from one list/library, then you can do this with the CSWP – in the query, simply restrict the search using PATH:[URL to document library]. The Query Builder UI helps you do this by providing the ‘Restrict by app’ area:

CSWPrestricttositeorlibrary_thumb2

N.B. One potential gotcha here is that you need ‘HTTP’ if your sites are browsed on HTTPS but crawled on HTTP (as in my case).
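To make that concrete (the library URL below is made up), the restriction that ends up in the query text is just a path filter along these lines:

path:"http://intranet.contoso.com/sites/projects/Shared Documents"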

If you do want to filter by site/list/library, consider of course that the good ol’ Content Query web part will work just fine here, and you’ll get instant changes as content is changed. What you won’t have, is the Content Search Web Part’s ability to automatically use tokens in the query (e.g. value of current navigation category, value from current user’s profile etc.)

Summary

The Content Search web part is a great tool in the SharePoint consultant’s box of tricks. Configuration may prove quite simple for some scenarios, but there is also a huge amount of flexibility, and a certain degree of complexity comes with that. Many advanced scenarios which make use of SP2013 search capabilities (such as Result Sources, Query Rules, promoted results and so on) will be possible – knowing the details will help you identify whether the CSWP can be the answer to a particular problem or not.

COMING SOON – The “User Poll” Web Part for SharePoint 2010 & 2013

The User Poll Web Part provides your SharePoint environment with a set of web parts that allow your end users to create simple polls. It does this without the hassle of the standard SharePoint surveys, which are not intended for creating a simple one-question poll.

The User Poll Web Part is a poll web part for SharePoint and it allows site users to quickly create polls anywhere in the Site Collection. The poll Web Part is designed to provide a user friendly interface: Important settings and actions are available from within the Web Part.

There is no direct need to work with the SharePoint Web Part setting menu and poll data is managed from normal SharePoint lists. Administrators can manage and keep track of all created polls from a centralized list.

A standard SharePoint installation also comes with a polling mechanism as part of the Survey Lists, but these surveys are complicated and require quite some time to configure.

The User Poll Web Part allows users to set up a single-topic poll within a few minutes.

The roadmap for the project is provided below.

Basic functionality

  • Poll settings are configured directly from the web part display or SharePoint lists
  • Publish and unpublish functionality

 

Project road map:

  • Release production build of The User Poll Web Part 2013
  • Automated security management on the poll response and answer list
  • Result view only web part
  • Add multiple HTML5 chart options (currently only horizontal bar)
  • Documentation

Contact me at tomas.floyd@outlook.com!!

https://sharepointsamurai.wordpress.com/

Client-side PowerShell for SharePoint Online and Office 365

SharePoint PowerShell is a PowerShell API for SharePoint 2010, 2013 and Online. It is very useful for Office 365 and private clouds where you don’t have access to the physical server.


The API uses the Managed .NET Client-Side Object Model (CSOM) of SharePoint 2013. It’s a library of PowerShell scripts, and at its core it talks to the CSOM DLLs.

Examples :

Import-Module .\spps.psm1 

Initialize-SPPS -siteURL "https://example.sharepoint.com/" -online $true -username "sitecollectionadmin@example.onmicrosoft.com" -password "password"
Example
# Include SPPS
Import-Module .\spps.psm1 

# Setup SPPS
Initialize-SPPS -siteURL "https://example.sharepoint.com/" -online $true -username "sitecollectionadmin@example.onmicrosoft.com" -password "password"

# Activate Publishing Site Feature
Activate-Feature -featureId "f6924d36-2fa8-4f0b-b16d-06b7250180fa" -force $false -featureDefinitionScope "Site"

#Activate Publishing Web Feature
Activate-Feature -featureId "94c94ca6-b32f-4da9-a9e3-1f3d343d7ecb" -force $false -featureDefinitionScope "Web"

Features

  • Site Collection
    • Test connection
  • Site
    • Manage subsites
    • Manage permissions
  • Lists and Document Libraries
    • Create list and document library
    • Manage fields
    • Manage list and list item permissions
    • Upload files to document library (including folders)
    • Add items to a list with CSV
    • Add and remove list items
    • File check-in and check-out
  • Master pages
    • Set system master page
    • Set custom master page
  • Features
    • Activate features
  • Web Parts
    • Add Web Parts to page
  • Users and Groups
    • Set site permissions
    • Set list permissions
    • Set document permissions
    • Create SharePoint groups
    • Add users and groups to SharePoint groups
  • Solutions
    • Upload sandboxed solutions
    • Activate sandboxed solutions

    Contact me at tomas.floyd@outlook.com for this and more Azure,SharePoint & Office 365 Tools, Web Parts and Apps

SharePoint Development roles urgently need to be filled at an MS Gold Partner – contact me now for more information (sorry, no recruiters – I am filling private positions)

Senior SharePoint Developers needed urgently for MS Gold Partner in Sandton/Bryanston :

3 – 5 years of development experience.

2 years’ experience in SharePoint.

3 years experience in C#.

A minimum of 3 years experience in Visual Studio .NET 2005 – 2008.

A minimum of 3 years experience in ASP.NET , HTML web development.

A minimum of 3 years experience with Javascript.

A minimum of 3 years experience with Windows XP, Windows 2003 and Windows Vista.

A minimum of 3 years experience in relational database design and implementation with SQL Server
Advantageous (nice-to-have):

  • Windows SharePoint Server.
  • Microsoft Office SharePoint Server.
  • BizTalk
  • Web Analytics
  • Microsoft CRM
  • K2

Now available – A SharePoint XML Indexing Connector

Most organizations have several systems holding their data. Data from these systems must be indexable and made available for search on the common Internal Search portal.

While most of the different data silos are able to dump or export their full dataset as XML, SharePoint does not include an OOTB general purpose XML indexing connector.

The SharePoint Server Search Connector Framework is known to be overly complex, and documentation out there about this subject is very limited.

There are basically two types of custom search connectors for SharePoint 2010 that can be implemented: the .NET Assembly Connector and the Custom Connector. More details about the differences between them can be found here. Mainly, a Custom Connector is agnostic of external content types, whereas each .NET Assembly Connector is specific to one external content type, and whenever the external content type changes, the .NET Assembly Connector must be re-compiled and re-deployed. If the entity model of the external system is dynamic and large-scale, a Custom Connector should be considered over the .NET Assembly Connector.

Also, a Custom Connector provides administration user interface integration, but a .NET Assembly Connector does not.

The XML File Indexing Connector

The XML File Indexing Connector that is presented here is a custom search indexing connector that can be used to crawl and index XML files. In this series of posts I am going to first show you how to install, set up and configure the connector. In future posts I will go into more implementation details, where we’ll look into the code to see how the connector is implemented and how you can customize it to suit specific needs.

This post is divided into the following sections:

  • Installing and deploying the connector
  • Creating a new Content Source using the connector
  • Using the Start Address of the Content Source to configure the connector
  • Automatic and dynamic generation of Crawled Properties from XML elements
  • Full Crawl vs. Incremental Crawl
  • Optimizations and considerations when crawling large XML files
  • Future plans

Installing and deploying the connector

The package that can be downloaded at the bottom of this post includes the following components:

  1. model.xml: This is the BCS model file for the connector
  2. XmlFileConnector.dll: This is the DLL file of the connector
  3. The Folder XmlFileConnector: This includes the Visual Studio Solution of the connector

Follow these steps to install the connector:

  1. Install the XmlFileConnector.dll in the Global Assembly Cache on the SharePoint application server(s)

gacutil -i "XmlFileConnector.dll"

  2. Open the SharePoint 2010 Management Shell on the application server.
  3. At the command prompt, type the following command to get a reference to your FAST Content SSA.

$fastContentSSA = Get-SPEnterpriseSearchServiceApplication -Identity "FAST Content SSA"

  4. Add the following registry key to the application server:

[HKEY_LOCAL_MACHINE]\SOFTWARE\Microsoft\Office Server\14.0\Search\Setup\ProtocolHandlers\xmldoc

  5. Set the value of the registry key to "OSearch14.ConnectorProtocolHandler.1".

  6. Add the new Search Crawl Custom Connector:

New-SPEnterpriseSearchCrawlCustomConnector -SearchApplication $fastContentSSA -Protocol xmldoc -Name xmldoc -ModelFilePath "XmlFileConnectorModel.xml"

  7. Restart the SharePoint Server Search 14 service. At the command prompt, run:

net stop osearch14

net start osearch14

  8. Create a new Crawled Property Category for the XML File Connector. Open the FAST Search Server 2010 for SharePoint Management Shell and run the following command:

New-FASTSearchMetadataCategory -Name "Custom XML Connector" -Propset "BCC9619B-BFBD-4BD6-8E51-466F9241A27A"

 Note that the Propset GUID must be the one specified above, since this GUID is hardcoded in the Connector code as the Crawled Properties Category which will receive discovered Crawled Properties.

Creating a new Content Source using the XML File Connector

  1. Using the Central Administration UI, on the Search Administration Page of the FAST Content SSA, click Content Sources, then New Content Source.
  2. Type a name for the content source, and in Content Source Type, select Custom Repository.
  3. In Type of Repository, select xmldoc.
  4. In Start Addresses, type the URLs for the folders that contain the XML files you want to index. The URL should be inserted in the following format:

    xmldoc://hostname/folder_path/#x=:doc:id;;urielm=url;;titleelm=title#

    The following section describes the different parts of the Start Address.

    Using the Start Address of the Content Source to configure the connector

    The Start Address specified for the Content Source must be of the following format. The XML File Connector will read this Start Address and use it when crawling the XML content.

    xmldoc://hostname/folder_path/#x=:doc:id;;urielm=url;;titleelm=title#

    xmldoc

    xmldoc is the protocol corresponding to the registry key we added when installing the connector.

    //hostname/folder_path/

    //hostname/folder_path/ is the full path to the folder containing the XML files to crawl.

    Example: //demo2010a/c$/enwiki

    #x=:doc:id;;urielm=url;;titleelm=title#

    #x=:doc:id;;urielm=url;;titleelm=title# is the special part of the Start Address that is used as configuration values by the connector:

    x=:doc:id

    Defines which elements in the XML file to use as document and identifier elements. This configuration parameter is mandatory.

    For example, say we have an XML file as follows:

    <feed>
      <document>
        <id>Some id</id>
        <title></title>
        <url>some url</url>
        <field1>Content for field1</field1>
        <field2>Content for field2</field2>
      </document>
      <document>
        ...
      </document>
    </feed>

    Here the value for the x configuration parameter would be x=:document:id

    urielm=url

    urielm=url defines which element in the XML file to use as the URL. This will end up as the URL of the document used by the FS4SP processing pipeline and will go into the ”url” managed property. This configuration parameter can be left out. In this case, the default URL of the document will be as follows: xmldoc://id/[id value]

    titleelm=title

    titleelm=title defines which element in the XML file to use as the Title. This will end up as the Title of the document, and the value of this element will go into the title managed property. This configuration parameter can be left out. If the parameter is left out, then the title of the document will be set to ”notitle”.
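    Putting these pieces together for the earlier //demo2010a/c$/enwiki example, and assuming the element names from the sample XML above (document, id, url, title), a complete Start Address would look something like this:

    xmldoc://demo2010a/c$/enwiki/#x=:document:id;;urielm=url;;titleelm=title#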

    Automatic and dynamic generation of Crawled Properties from XML elements

    The XML File Connector uses advanced BCS techniques to automatically Discover crawled properties from the content of the XML files.

    All elements in the XML document will be created as crawled properties. This provides the ability to dynamically crawl any XML file, without the need to pre-define the properties of the entities in the BCS Model file, and re-deploy the model file for each change.

    This is defined in the BCS Model file on the XML Document entity. The TypeDescriptor element named DocumentProperties defines a list of dynamic property names and values. The property names in this list will automatically be discovered by the BCS framework, and corresponding crawled properties will automatically be created for each property.

    The following snippet from the BCS Model file shows how this is configured:

    <TypeDescriptor Name="DocumentProperties" TypeName="XmlFileConnector.DocumentProperty[], XmlFileConnector, Version=1.0.0.0, Culture=neutral, PublicKeyToken=109e5afacbc0fbe2" IsCollection="true">
      <!-- child TypeDescriptor elements (not preserved in this post) -->
    </TypeDescriptor>

    In addition to the ability to discover crawled properties automatically from the XML content, the XML File Connector also creates a default property with the name “XMLContent”. This property contains the raw XML of the document being processed. This enables the use of the XML content in a custom Pipeline extensibility stage for further processing.

    Example

    Say that we have the following XML file to index.

    [Sample XML file: a document entry titled “Wikipedia: Nobel Charitable Trust” (http://en.wikipedia.org/wiki/Nobel_Charitable_Trust), with a short description of the Nobel Charitable Trust and anchor entries for “Michael Nobel Energy Award” and “References” – the original markup is not reproduced in this post.]

    When running the connector the first time, we see the following Crawled Properties discovered in the Custom XML Connector Crawled Properties Category.

    Full Crawl vs. Incremental Crawl

    The BCS Search Connector Framework is implemented in such a way that keeps track of all crawled content in the Crawl Log Database. For each search Content Source, a log of all document ids that have been crawled is stored. This log is used when running subsequent crawls of the content source, be it either a full or an incremental crawl.

    When running an incremental crawl, the BCS framework compares the list of document ids it received from the connector against the list of ids stored in the crawl log database. If there are any document ids within the crawl log database that have not been received from the connector, the BCS framework assumes that these documents have been deleted, and will attempt to issue deletion operations to the search system. This will cause many inconsistencies, and will make it very difficult to keep both the actual dataset and the BCS crawl log in sync.

    So, when running either a Full Crawl or an Incremental Crawl of the Content Source, the full dataset of the XML files must be available for traversal. If there are any items missing in subsequent crawls, the SharePoint crawler will consider those as subject for deletion, and go ahead and delete them from the search index.

    One possible workaround to tackle this limitation, and to avoid (re-)generating the full dataset each time something minor changes, would be to split the XML content into files with different known update frequencies, where content that is known to have higher update rates is placed in separate input folders with separately configured Content Sources within the FAST Content SSA.

    Optimizations and considerations when crawling large XML files

    When the XML File Connector starts crawling content, it will load and parse the XML files it finds one at a time. So, for each XML file found in the input directory, the whole XML file is read into memory and cached for all subsequent operations by the crawler, until all items found in the XML file have been submitted to the indexing subsystem. At that point, the memory cache is cleared, and the next file is loaded and parsed, until all files have been processed.

    For the reason just described, it is recommended not to have large single XML files, but to split the content across multiple XML files, each consisting of a number of items that is reasonable and can be easily parsed and cached in memory.
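    If you already have one big file, a rough way to split it is sketched below in PowerShell. This is purely illustrative: it assumes the <feed>/<document> structure from the sample above, and the file paths and batch size are placeholders.

    $batchSize = 1000
    $source = [xml](Get-Content "C:\input\enwiki.xml")   # placeholder path
    $docs = @($source.feed.document)
    $batch = 0
    for ($i = 0; $i -lt $docs.Count; $i += $batchSize)
    {
        # copy the next chunk of <document> nodes into a new <feed> document..
        $chunk = New-Object System.Xml.XmlDocument
        $root = $chunk.AppendChild($chunk.CreateElement("feed"))
        $docs[$i..([Math]::Min($i + $batchSize, $docs.Count) - 1)] | ForEach-Object {
            $root.AppendChild($chunk.ImportNode($_, $true)) | Out-Null
        }
        $chunk.Save("C:\input\parts\enwiki_part$batch.xml")   # placeholder path
        $batch++
    }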

    Contact me at tomas.floyd to find out more about this Connector and other custom developed SharePoint and Office 365 Web Parts and Apps!!

    Getting Started with Apps for Office : The Javascript API for Office

    This section briefly describes the subset of the JavaScript API for Office you can call from content and task pane apps. See Understanding the JavaScript API for Office for an overview of the features of the entire API, and Apps for Office code samples for additional examples.

    Before reading this section, use the links below to explore API diagrams that show the members of the API supported in content and task pane apps and the Office host applications that support these app types.

    Explore by app type:

      • Office object model for content apps
      • Object model for task pane apps

    Explore by host application:

      • App object model for Excel
      • App object model for PowerPoint
      • App object model for Project
      • App object model for Word

    (A downloadable set of object model maps is available for each app type and host application.)

    You can categorize the primary objects and methods supported by content and task pane apps as follows:

    1. Common objects shared with other apps for Office

      These objects include Office, Context, and AsyncResult. The Office object is the root object of the JavaScript API for Office. The Context object represents the app’s runtime environment. Both Office and Context are the fundamental objects for any app for Office. The AsyncResult object represents the results of an asynchronous operation, such as the data returned to the getSelectedDataAsync method, which reads what a user has selected in a document.

    2. The Document object

      The majority of the API available to content and task pane apps is exposed through the methods, properties, and events of the Document object. Using this subset of the API, your content or task pane app can perform the tasks described later in this topic.

      A content or task pane app can use the Office.context.document property to access the Document object, and through it, can access the key members of the API for working with data in documents, such as the Bindings and CustomXmlParts objects, and the getSelectedDataAsync, setSelectedDataAsync, and getFileAsync methods. The Document object also provides the mode property for determining whether a document is read-only or in edit mode, the url property to get the URL of the current document, and access to the Settings object. The Document object also supports adding event handlers for the SelectionChanged event, so you can detect when a user changes his or her selection in the document.

      A content or task pane app can access the Document object only after the DOM and run-time environment have been loaded, typically in the event handler for the Office.initialize event. For information about the flow of events when an app is initialized, and how to check that the DOM and runtime are loaded successfully, see Loading the DOM and runtime environment.

    3. Objects for working with specific features

      To work with specific features of the API, your content or task pane app can work with the following objects and methods:

      • Use the methods of the Bindings object to create or get bindings, and then work with their data using the methods and properties of the Binding object.
      • Use the CustomXmlParts, CustomXmlPart and associated objects to create and manipulate custom XML parts in Word documents.
      • Use the File and Slice objects to create a copy of the entire document, break it into chunks or “slices”, and then read or transmit the data in those slices.
      • Use the Settings object to save custom data, such as user preferences, and app state.

    Important: Some of the API members described in this topic aren’t supported across all Office applications that can host content and task pane apps. To determine which members are supported, see any of the following resources:

    For a high-level summary of the JavaScript API for Office support available across Office host applications, see the API support matrix in the “Understanding the JavaScript API for Office” topic.

    The following sections highlight the fundamental concepts for creating content and task pane apps for Word, Excel, PowerPoint, and Project. For more details about a concept, see the references at the end of the concept, and also the Additional resources section.

    You can read or write to the user’s current selection in a document, spreadsheet, or presentation. Depending on the host application for your app, you can specify the type of data structure to read or write as a parameter in the getSelectedDataAsync and setSelectedDataAsync methods of the Document object. For example, you can specify any type of data (text, HTML, tabular data, or Office Open XML) for Word, text and tabular data for Excel, and text for PowerPoint and Project. You can also create event handlers to detect changes to the user’s selection. The following example gets data from the selection as text using the getSelectedDataAsync method.

    Office.context.document.getSelectedDataAsync(
        Office.CoercionType.Text, function (asyncResult) {
            if (asyncResult.status == Office.AsyncResultStatus.Failed) {
                write('Action failed. Error: ' + asyncResult.error.message);
            }
            else {
                write('Selected data: ' + asyncResult.value);
            }
        });
    
    // Function that writes to a div with id='message' on the page.
    function write(message){
        document.getElementById('message').innerText += message; 
    }

    For more details and examples, see Reading and writing data to the active selection in a document or spreadsheet.

    As described in the previous section, you can use the getSelectedDataAsync and setSelectedDataAsync methods to read or write to the user’s current selection in a document, spreadsheet, or presentation. However, if you would like to access the same region in a document across sessions of running your app without requiring the user to make a selection, you should first bind to that region. You can also subscribe to data and selection change events for that bound region.

    You can add a binding by using addFromNamedItemAsync, addFromPromptAsync, or addFromSelectionAsync methods of the Bindings object. These methods return an identifier that you can use to access data in the binding, or to subscribe to its data change or selection change events.

    The following is an example that adds a binding to the currently selected text in a document, by using the Bindings.addFromSelectionAsync method.

    Office.context.document.bindings.addFromSelectionAsync(
        Office.BindingType.Text, { id: 'myBinding' }, function (asyncResult) {
        if (asyncResult.status == Office.AsyncResultStatus.Failed) {
            write('Action failed. Error: ' + asyncResult.error.message);
        } else {
            write('Added new binding with type: ' +
                asyncResult.value.type + ' and id: ' + asyncResult.value.id);
        }
    });
    
    // Function that writes to a div with id='message' on the page.
    function write(message){
        document.getElementById('message').innerText += message; 
    }

    For more details and examples, see Binding to regions in a document or spreadsheet.

    If your task pane app runs in PowerPoint or Word, you can use the Document.getFileAsync, File.getSliceAsync, and File.closeAsync methods to get an entire presentation or document.

    When you call Document.getFileAsync, you get a copy of the document in a File object. The File object provides access to the document in “chunks” represented as Slice objects. When you call getFileAsync, you can specify the file type (text or compressed Open Office XML format), and size of the slices (up to 4MB). To access the contents of the File object, you then call File.getSliceAsync which returns the raw data in the Slice.data property. If you specified compressed format, you will get the file data as a byte array. If you are transmitting the file to a web service, you can transform the compressed raw data to a base64-encoded string before submission. Finally, when you are finished getting slices of the file, use the File.closeAsync method to close the document.

    For more details, see how to get the whole document from an app for PowerPoint or Word.

    Using the Open Office XML file format and content controls, you can add custom XML parts to a Word document and bind elements in the XML parts to content controls in that document. When you open the document, Word reads and automatically populates bound content controls with data from the custom XML parts. Users can also write data into the content controls, and when the user saves the document, the data in the controls will be saved to the bound XML parts. Task pane apps for Word, can use the Document.customXmlParts property, CustomXmlParts, CustomXmlPart, and CustomXmlNode objects to read and write data dynamically to the document.

    Custom XML parts may be associated with namespaces. To get data from custom XML parts in a namespace, use the CustomXmlParts.getByNamespaceAsync method.

    You can also use the CustomXmlParts.getByIdAsync method to access custom XML parts by their GUIDs. After getting a custom XML part, use the CustomXmlPart.getXmlAsync method to get the XML data.

    To add a new custom XML part to a document, use the Document.customXmlParts property to get the custom XML parts that are in the document, and call the CustomXmlParts.addAsync method.

    For detailed information about how to work with custom XML parts with a task pane app, see Creating Better Apps for Word with Office Open XML.

    Often you need to save custom data for your app, such as a user’s preferences or the app’s state, and access that data the next time the app is opened. You can use common web programming techniques to save that data, such as browser cookies or HTML 5 web storage. Alternatively, if your app runs in Excel, PowerPoint, or Word, you can use the methods of the Document.Settings object. Data created with the Settings object is stored in the spreadsheet, presentation, or document that the app was inserted into and saved with. This data is available to only the app that created it.

    To avoid roundtrips to the server where the document is stored, data created with the Settings object is managed in memory at runtime. Previously saved settings data is loaded into memory when the app is initialized, and changes to that data are only saved back to the document when you call the Settings.saveAsync method. Internally, the data is stored in a serialized JSON object as name/value pairs. You use the get, set, and remove methods of the Settings object, to read, write, and delete items from the in-memory copy of the data. The following line of code shows how to create a setting named themeColor and set its value to ‘green’.

    Office.context.document.settings.set('themeColor', 'green');

    Because settings data created or deleted with the set and remove methods is acting on an in-memory copy of the data, you must call saveAsync to persist changes to settings data into the document your app is working with.

    For more details about working with custom data using the methods of the Settings object, see Persisting app state and settings.

    If your task pane app runs in Project, your app can read data from some of the project fields, resource, and task fields in the active project. To do that, you use the methods and events of the ProjectDocument object which extends the Document object to provide additional Project-specific functionality.

    For examples of reading Project data, see How to: Create your first task pane app for Project 2013 by using a text editor

    Your app uses the Permissions element in its manifest to request permission to access the level of functionality it requires from the JavaScript API for Office. For example, if your app requires read/write access to the document, its manifest must specify ReadWriteDocument as the text value in its Permissions element. Because permissions exist to protect a user’s privacy and security, as a best practice you should request the minimum level of permissions it needs for its features. The following example shows how to request the ReadDocument permission in a task pane’s manifest.

    <?xml version="1.0" encoding="utf-8"?>
    …
      <Permissions>ReadDocument</Permissions>
    …

    Figure 1 shows the 5 levels of permissions that you can specify for a task pane app. For more information, see Requesting permissions for task pane apps.

    Figure 1. The 5-level permission model for task pane apps

    Levels of permissions for task pane apps

    Figure 2 shows the 4 levels of permissions available to a content app. For more information, see Requesting permissions for content apps.

    Figure 2. The 4-level permission model for content apps

    Levels of permissions for content apps

    Using OpenXML to build Office 365 Apps (OOXML)

    If you’re building apps for Office to run in Word, you might already know that the JavaScript API for Office (Office.js) offers several formats for reading and writing document content. These are called coercion types, and they include plain text, tables, HTML, and Office Open XML (OOXML).

    So what are your options when you need to add rich content to a document, such as images, formatted tables, charts, or even just formatted text?

    You can use HTML for inserting some types of rich content, such as pictures. Depending on your scenario, there can be drawbacks to HTML coercion, such as limitations in the formatting and positioning options available to your content.

    Because Office Open XML is the language in which Word documents (such as .docx and .dotx) are written, you can insert virtually any type of content that a user can add to a Word document, with virtually any type of formatting the user can apply. Determining the Office Open XML markup you need to get it done is easier than you might think.

    Note

    Office Open XML is also the language behind PowerPoint and Excel (and, as of Office 2013, Visio) documents. However, currently, you can coerce content as Office Open XML only in apps for Office created for Word. For more information about Office Open XML, including the complete language reference documentation, see Additional resources.

    To begin, take a look at some of the content types you can insert using OOXML coercion.

    Download the code sample Loading and Writing Office Open XML, which contains the Office Open XML markup and Office.js code required for inserting any of the following examples into Word.

    Note

    Throughout this article, the terms content types and rich content refer to the types of rich content you can insert into a Word document.

    Figure 1. Text with direct formatting.

    Text with direct formatting applied.

    You can use direct formatting to specify exactly what the text will look like regardless of existing formatting in the user’s document.

    Figure 2. Text formatted using a style.

    Text formatted with paragraph style.

    You can use a style to automatically coordinate the look of text you insert with the user’s document.

    Figure 3. A simple image.

    Image of a logo.

    You can use the same method for inserting any Office-supported image format.

    Figure 4. An image formatted using picture styles and effects.

    Formatted image in Word 2013.

    Adding high quality formatting and effects to your images requires much less markup than you might expect.

    Figure 5. A content control.

    Text within a bound content control.

    You can use content controls with your app to add content at a specified (bound) location rather than at the selection.

    Figure 6. A text box with WordArt formatting.

    Text formatted with WordArt text effects.

    Text effects are available in Word for text inside a text box (as shown here) or for regular body text.

    Figure 7. A shape.

    An Office 2013 drawing shape in Word 2013.

    You can insert built-in or custom drawing shapes, with or without text and formatting effects.

    Figure 8. A table with direct formatting.

    A formatted table in Word 2013.

    You can include text formatting, borders, shading, cell sizing, or any table formatting you need.

    Figure 9. A table formatted using a table style.

    A formatted table in Word 2013.

    You can use built-in or custom table styles just as easily as using a paragraph style for text.

    Figure 10. A SmartArt diagram.

    A dynamic SmartArt diagram in Word 2013.

    Office 2013 offers a wide array of SmartArt diagram layouts (and you can use Office Open XML to create your own).

    Figure 11. A chart.

    You can insert Excel charts as live charts in Word documents, which also means you can use them in your app for Word.

    As you can see by the preceding examples, you can use OOXML coercion to insert essentially any type of content that a user can insert into their own document.

    There are two simple ways to get the Office Open XML markup you need. Either add your rich content to an otherwise blank Word 2013 document and then save the file in Word XML Document format or use a test app with the getSelectedDataAsync method to grab the markup. Both approaches provide essentially the same result.
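    If you go the test-app route, the call is a single method on the document object. Here is a minimal sketch of that kind of test function (where the output goes, the browser console in this case, is entirely up to you):

    function getSelectionAsOoxml() {
        // Retrieve the current selection as a flattened Office Open XML package string.
        Office.context.document.getSelectedDataAsync(Office.CoercionType.Ooxml,
            function (result) {
                if (result.status === Office.AsyncResultStatus.Succeeded) {
                    // result.value holds the full package markup for the selection.
                    console.log(result.value);
                } else {
                    console.log('Error: ' + result.error.message);
                }
            });
    }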

    Note

    An Office Open XML document is actually a compressed package of files that represent the document contents. Saving the file in the Word XML Document format gives you the entire Office Open XML package flattened into one XML file, which is also what you get when using getSelectedDataAsync to retrieve the Office Open XML markup.

    If you save the file to an XML format from Word, note that there are two options under the Save as Type list in the Save As dialog box for .xml format files. Be sure to choose Word XML Document and not the Word 2003 option.

    Download the code sample named Get, Set, and Edit Office Open XML, which you can use as a tool to retrieve and test your markup.

    So is that all there is to it? Well, not quite. Yes, for many scenarios, you could use the full, flattened Office Open XML result you see with either of the preceding methods and it would work. The good news is that you probably don’t need most of that markup.

    If you’re one of the many app developers seeing Office Open XML markup for the first time, trying to make sense of the massive amount of markup you get for the simplest piece of content might seem overwhelming, but it doesn’t have to be.

    In this topic, we’ll use some common scenarios we’ve been hearing from the apps for Office developer community to show you techniques for simplifying Office Open XML for use in your app. We’ll explore the markup for some types of content shown earlier along with the information you need for minimizing the Office Open XML payload. We’ll also look at the code you need for inserting rich content into a document at the active selection and how to use Office Open XML with the bindings object to add or replace content at specified locations.

    When you use getSelectedDataAsync to retrieve the Office Open XML for a selection of content (or when you save the document in Word XML Document format), what you’re getting is not just the markup that describes your selected content; it’s an entire document with many options and settings that you almost certainly don’t need. In fact, if you use that method from a document that contains a task pane app, the markup you get even includes your task pane.

    Even a simple Word document package includes parts for document properties, styles, theme (formatting settings), web settings, fonts, and then some—in addition to parts for the actual content.

    For example, say that you want to insert just a paragraph of text with direct formatting, as shown earlier in Figure 1. When you grab the Office Open XML for the formatted text using getSelectedDataAsync, you see a large amount of markup. That markup includes a package element that represents an entire document, which contains several parts (commonly referred to as document parts or, in the Office Open XML, as package parts), as you see listed in Figure 13. Each part represents a separate file within the package.

    Tip

    You can edit Office Open XML markup in a text editor like Notepad. If you open it in Visual Studio 2012, you can use Edit > Advanced > Format Document (Ctrl+K, Ctrl+D) to format the package for easier editing. Then you can collapse or expand document parts or sections of them, as shown in Figure 12, to more easily review and edit the content of the Office Open XML package. Each document part begins with a pkg:part tag.

    Figure 12. Collapse and expand package parts for easier editing in Visual Studio 2012.

    Figure 13. The parts included in a basic Word Office Open XML document package.

    With all that markup, you might be surprised to discover that the only elements you actually need to insert the formatted text example are pieces of the .rels part and the document.xml part.

    Note

    The two lines of markup above the package tag (the XML declarations for version and Office program ID) are assumed when you use the OOXML coercion type, so you don’t need to include them. Keep them if you want to open your edited markup as a Word document to test it.

    Several of the other types of content shown at the start of this topic require additional parts as well (beyond those shown in Figure 13), and we’ll address those later in this topic. Meanwhile, since you’ll see most of the parts shown in Figure 13 in the markup for any Word document package, here’s a quick summary of what each of these parts is for and when you need it:

    • Inside the package tag, the first part is the .rels file, which defines relationships between the top-level parts of the package (these are typically the document properties, thumbnail (if any), and main document body). Some of the content in this part is always required in your markup because you need to define the relationship of the main document part (where your content resides) to the document package.

    • The document.xml.rels part defines relationships for additional parts required by the document.xml (main body) part, if any.

    Important

    The .rels files in your package (such as the top-level .rels, document.xml.rels, and others you may see for specific types of content) are an extremely important tool that you can use as a guide for helping you quickly edit down your Office Open XML package. To learn more about how to do this, see Creating your own markup: best practices later in this topic.

    • The document.xml part is the content in the main body of the document. You need elements of this part, of course, since that’s where your content appears. But, you don’t need everything you see in this part. We’ll look at that in more detail later.

    • Many parts are automatically ignored by the Set methods when inserting content into a document using OOXML coercion, so you might as well remove them. These include the theme1.xml file (the document’s formatting theme), the document properties parts (core, app, and thumbnail), and setting files (including settings, webSettings, and fontTable).

    • In the Figure 1 example, text formatting is directly applied (that is, each font and paragraph formatting setting applied individually). But, if you use a style (such as if you want your text to automatically take on the formatting of the Heading 1 style in the destination document) as shown earlier in Figure 2, then you would need part of the styles.xml part as well as a relationship definition for it. For more information, see the topic section Adding objects that use additional Office Open XML parts.

    Let’s take a look at the minimal Office Open XML markup required for the formatted text example shown in Figure 1 and the JavaScript required for inserting it at the active selection in the document.

    Simplified Office Open XML markup

    We’ve edited the Office Open XML example shown here, as described in the preceding section, to leave just required document parts and only required elements within each of those parts. We’ll walk through how to edit the markup yourself (and explain a bit more about the pieces that remain here) in the next section of the topic.

    Copy
    <pkg:package xmlns:pkg="http://schemas.microsoft.com/office/2006/xmlPackage">
      <pkg:part pkg:name="/_rels/.rels" pkg:contentType="application/vnd.openxmlformats-package.relationships+xml" pkg:padding="512">
        <pkg:xmlData>
          <Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
            <Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="word/document.xml"/>
          </Relationships>
        </pkg:xmlData>
      </pkg:part>
      <pkg:part pkg:name="/word/document.xml" pkg:contentType="application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml">
        <pkg:xmlData>
          <w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" >
            <w:body>
              <w:p>
                <w:pPr>
                  <w:spacing w:before="360" w:after="0" w:line="480" w:lineRule="auto"/>
                  <w:rPr>
                    <w:color w:val="70AD47" w:themeColor="accent6"/>
                    <w:sz w:val="28"/>
                  </w:rPr>
                </w:pPr>
                <w:r>
                  <w:rPr>
                    <w:color w:val="70AD47" w:themeColor="accent6"/>
                    <w:sz w:val="28"/>
                  </w:rPr>
                  <w:t>This text has formatting directly applied to achieve its font size, color, line spacing, and paragraph spacing.</w:t>
                </w:r>
              </w:p>
            </w:body>
          </w:document>
        </pkg:xmlData>
      </pkg:part>
    </pkg:package>
    
    Note

    If you add the markup shown here to an XML file along with the XML declaration tags for version and mso-application at the top of the file (shown in Figure 13), you can open it in Word as a Word document. Or, without those tags, you can still open it using File > Open in Word. You'll see Compatibility Mode on the title bar in Word 2013, because you removed the settings that tell Word this is a 2013 document. Since you're adding this markup to an existing Word 2013 document, that won't affect your content at all.

    JavaScript for using setSelectedDataAsync

    Once you save the preceding Office Open XML as an XML file that’s accessible from your solution, you can use the following function to set the formatted text content in the document using OOXML coercion.

    In this function, notice that all but the last line are used to get your saved markup for use in the setSelectedDataAsync method call at the end of the function. setSelectedDataAsync requires only that you specify the content to be inserted and the coercion type.

    Note

    Replace yourXMLfilename with the name and path of the XML file as you’ve saved it in your solution. If you’re not sure where to include XML files in your solution or how to reference them in your code, see the Loading and Writing Office Open XML code sample for examples of that and a working example of the markup and JavaScript shown here.

    Copy
    function writeContent() {
        var myOOXMLRequest = new XMLHttpRequest();
        var myXML;
        myOOXMLRequest.open('GET', 'yourXMLfilename', false);
        myOOXMLRequest.send();
        if (myOOXMLRequest.status === 200) {
            myXML = myOOXMLRequest.responseText;
        }
        Office.context.document.setSelectedDataAsync(myXML, { coercionType: 'ooxml' });
    }
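
    The preceding call fires and forgets. If you want to confirm that the insertion succeeded while you're trimming your markup (malformed markup or a missing relationship definition will cause the call to fail), you can pass a callback as a third argument. This is just an illustrative sketch under the same assumptions; the logging is not part of the original sample:

    function writeContentWithCheck(markup) {
        Office.context.document.setSelectedDataAsync(markup, { coercionType: 'ooxml' },
            function (result) {
                if (result.status === Office.AsyncResultStatus.Failed) {
                    // Typical causes: invalid markup, or a package part referenced
                    // without a matching relationship definition.
                    console.log('Insert failed: ' + result.error.message);
                }
            });
    }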
    

    Let’s take a closer look at the markup you need to insert the preceding formatted text example.

    For this example, start by simply deleting all document parts from the package other than .rels and document.xml. Then, we’ll edit those two required parts to simplify things further.

    Important

    Use the .rels parts as a map to quickly gauge what’s included in the package and determine what parts you can delete completely (that is, any parts not related to or referenced by your content). Remember that every document part must have a relationship defined in the package and those relationships appear in the .rels files. So you should see all of them listed in either .rels, document.xml.rels, or a content-specific .rels file.

    The following markup shows the required .rels part before editing. Since we're deleting the app and core document property parts, and the thumbnail part, we need to delete those relationships from .rels as well. Notice that this will leave only the relationship (with the relationship ID "rId1" in the following example) for document.xml.

    Copy
      <pkg:part pkg:name="/_rels/.rels" pkg:contentType="application/vnd.openxmlformats-package.relationships+xml" pkg:padding="512">
        <pkg:xmlData>
          <Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
            <Relationship Id="rId3" Type="http://schemas.openxmlformats.org/package/2006/relationships/metadata/core-properties" Target="docProps/core.xml"/>
            <Relationship Id="rId2" Type="http://schemas.openxmlformats.org/package/2006/relationships/metadata/thumbnail" Target="docProps/thumbnail.emf"/>
            <Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="word/document.xml"/>
            <Relationship Id="rId4" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/extended-properties" Target="docProps/app.xml"/>
          </Relationships>
        </pkg:xmlData>
      </pkg:part>
    
    Important

    Remove the relationships (that is, the <Relationship…> tag) for any parts that you completely remove from the package. Including a part without a corresponding relationship, or excluding a part and leaving its relationship in the package, will result in an error.

    The following markup shows the document.xml part—which includes our sample formatted text content—before editing.

    Copy
    <pkg:part pkg:name="/word/document.xml" pkg:contentType="application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml">
        <pkg:xmlData>
          <w:document mc:Ignorable="w14 w15 wp14" xmlns:wpc="http://schemas.microsoft.com/office/word/2010/wordprocessingCanvas" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:wp14="http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:w14="http://schemas.microsoft.com/office/word/2010/wordml" xmlns:w15="http://schemas.microsoft.com/office/word/2012/wordml" xmlns:wpg="http://schemas.microsoft.com/office/word/2010/wordprocessingGroup" xmlns:wpi="http://schemas.microsoft.com/office/word/2010/wordprocessingInk" xmlns:wne="http://schemas.microsoft.com/office/word/2006/wordml" xmlns:wps="http://schemas.microsoft.com/office/word/2010/wordprocessingShape">
            <w:body>
              <w:p>
                <w:pPr>
                  <w:spacing w:before="360" w:after="0" w:line="480" w:lineRule="auto"/>
                  <w:rPr>
                    <w:color w:val="70AD47" w:themeColor="accent6"/>
                    <w:sz w:val="28"/>
                  </w:rPr>
                </w:pPr>
                <w:r>
                  <w:rPr>
                    <w:color w:val="70AD47" w:themeColor="accent6"/>
                    <w:sz w:val="28"/>
                  </w:rPr>
                  <w:t>This text has formatting directly applied to achieve its font size, color, line spacing, and paragraph spacing.</w:t>
                </w:r>
                <w:bookmarkStart w:id="0" w:name="_GoBack"/>
                <w:bookmarkEnd w:id="0"/>
              </w:p>
              <w:p/>
              <w:sectPr>
                <w:pgSz w:w="12240" w:h="15840"/>
                <w:pgMar w:top="1440" w:right="1440" w:bottom="1440" w:left="1440" w:header="720" w:footer="720" w:gutter="0"/>
                <w:cols w:space="720"/>
              </w:sectPr>
            </w:body>
          </w:document>
        </pkg:xmlData>
      </pkg:part>
    

    Since document.xml is the primary document part where you place your content, let’s take a quick walk through that part. (Figure 14, which follows this list, provides a visual reference to show how some of the core content and formatting tags explained here relate to what you see in a Word document.)

    • The opening w:document tag includes several namespace (xmlns) listings. Many of those namespaces refer to specific types of content and you only need them if they’re relevant to your content.

      Notice that the prefix for the tags throughout a document part refers back to the namespaces. In this example, the only prefix used in the tags throughout the document.xml part is w:, so the only namespace that we need to leave in the opening w:document tag is xmlns:w.

    Tip

    If you’re editing your markup in Visual Studio 2012, after you delete namespaces in any part, look through all tags of that part. If you’ve removed a namespace that’s required for your markup, you’ll see a red squiggly underline on the relevant prefix for affected tags. Also note that, if you remove the xmlns:mc namespace, you must also remove the mc:Ignorable attribute that precedes the namespace listings.

    • Inside the opening body tag, you see a paragraph tag (w:p), which includes our sample content for this example.

    • The w:pPr tag includes properties for directly-applied paragraph formatting, such as space before or after the paragraph, paragraph alignment, or indents. (Direct formatting refers to attributes that you apply individually to content rather than as part of a style.) This tag also includes direct font formatting that’s applied to the entire paragraph, in a nested w:rPr (run properties) tag, which contains the font color and size set in our sample.

    Note

    You might notice that font sizes and some other formatting settings in Word Office Open XML markup look like they're double the actual size. That's because font sizes (the w:sz values in the preceding markup) are specified in half-points, while paragraph and line spacing, as well as some section formatting properties, are specified in twentieths of a point (twips).

    Depending on the types of content you work with in Office Open XML, you may see several additional units of measure, including English Metric Units (914,400 EMUs to an inch), which are used for some Office Art (drawingML) values, and values expressed as 100,000 times the actual value, which appear in both drawingML and PowerPoint markup. PowerPoint also expresses some values as 100 times the actual value, and Excel commonly uses actual values. (A small unit-conversion sketch follows this list.)

    • Within a paragraph, any content with like properties is included in a run (w:r), such as is the case with the sample text. Each time there’s a change in formatting or content type, a new run starts. (That is, if just one word in the sample text was bold, it would be separated into its own run.) In this example, the content includes just the one text run.

      Notice that, because the formatting included in this sample is font formatting (that is, formatting that can be applied to as little as one character), it also appears in the properties for the individual run.

    • Also notice the tags for the hidden “_GoBack” bookmark (w:bookmarkStart and w:bookmarkEnd), which appear in Word 2013 documents by default. You can always delete the start and end tags for the GoBack bookmark from your markup.

    • The last piece of the document body is the w:sectPr tag, or section properties. This tag includes settings such as margins and page orientation. The content you insert using setSelectedDataAsync will take on the active section properties in the destination document by default. So, unless your content includes a section break (in which case you’ll see more than one w:sectPr tag), you can delete this tag.
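
    As a quick reference for the units called out in the note earlier in this list, here is a small, illustrative conversion sketch; the helper names are invented for this example and are not part of any Office API:

    // Illustrative conversions for values commonly seen in Office Open XML markup.
    var pointsToHalfPoints = function (pt) { return pt * 2; };          // w:sz 28 = 14 pt font
    var pointsToTwips = function (pt) { return pt * 20; };              // w:spacing w:before="360" = 18 pt
    var inchesToTwips = function (inches) { return inches * 1440; };    // w:pgMar 1440 = 1 inch margin
    var inchesToEmus = function (inches) { return inches * 914400; };   // drawingML extents

    console.log(pointsToHalfPoints(14));                                // 28
    console.log(inchesToTwips(8.5) + ' x ' + inchesToTwips(11));        // 12240 x 15840 (the w:pgSz values shown earlier)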

    Figure 14. How common tags in document.xml relate to the content and layout of a Word document.

    Tip

    In markup you create, you might see another attribute in several tags that includes the characters w:rsid, which you don’t see in the examples used in this topic. These are revision identifiers. They’re used in Word for the Combine Documents feature and they’re on by default. You’ll never need them in markup you’re inserting with your app and turning them off makes for much cleaner markup. You can easily remove existing RSID tags or disable the feature (as described in the following procedure) so that they’re not added to your markup for new content.

    Be aware that if you use the co-authoring capabilities in Word (such as the ability to simultaneously edit documents with others), you should enable the feature again when finished generating the markup for your app.

    To turn off RSID attributes in Word for documents you create going forward, do the following:

    1. In Word 2013, choose File and then choose Options.

    2. In the Word Options dialog box, choose Trust Center and then choose Trust Center Settings.

    3. In the Trust Center dialog box, choose Privacy Options and then disable the setting Store Random Number to Improve Combine Accuracy.

    To remove RSID tags from an existing document, try the following shortcut with the document open in Word:

    1. With your insertion point in the main body of the document, press Ctrl+Home to go to the top of the document.

    2. On the keyboard, press Spacebar, Delete, Spacebar. Then, save the document.

    After removing the majority of the markup from this package, we’re left with the minimal markup that needs to be inserted for the sample, as shown in the preceding section.

    Several types of rich content require only the .rels and document.xml components shown in the preceding example, including content controls, Office drawing shapes and text boxes, and tables (unless a style is applied to the table). In fact, you can reuse the same edited package parts and swap out just the <body> content in document.xml for the markup of your content.

    To check out the Office Open XML markup for the examples of each of these content types shown earlier in Figures 5 through 8, explore the Loading and Writing Office Open XML code sample referenced in the Overview section.

    Before we move on, let’s take a look at differences to note for a couple of these content types and how to swap out the pieces you need.

    Understanding drawingML markup (Office graphics) in Word: What are fallbacks?

    If the markup for your shape or text box looks far more complex than you would expect, there is a reason for it. With the release of Office 2007, we saw the introduction of the Office Open XML Formats as well as the introduction of a new Office graphics engine that PowerPoint and Excel fully adopted. In the 2007 release, Word only incorporated part of that graphics engine—adopting the updated Excel charting engine, SmartArt graphics, and advanced picture tools. For shapes and text boxes, Word 2007 continued to use legacy drawing objects (VML). It was in the 2010 release that Word took the additional steps with the graphics engine to incorporate updated shapes and drawing tools.

    So, to support shapes and text boxes in Office Open XML Format Word documents when opened in Word 2007, shapes (including text boxes) require fallback VML markup.

    Typically, as you see for the shape and text box examples included in the Loading and Writing Office Open XML code sample, the fallback markup can be removed. Word 2013 automatically adds missing fallback markup to shapes when a document is saved. However, if you prefer to keep the fallback markup to ensure that you’re supporting all user scenarios, there’s no harm in retaining it.

    Note also that, if you have grouped drawing objects included in your content, you’ll see additional (and apparently repetitive) markup, but this must be retained. Portions of the markup for drawing shapes are duplicated when the object is included in a group.

    Important

    When working with text boxes and drawing shapes, be sure to check namespaces carefully before removing them from document.xml. (Or, if you’re reusing markup from another object type, be sure to add back any required namespaces you might have previously removed from document.xml.) A substantial portion of the namespaces included by default in document.xml are there for drawing object requirements.

    Note about graphic positioning

    In the code samples Loading and Writing Office Open XML and Get, Set, and Edit Office Open XML, the text box and shape are set up using different types of text wrapping and positioning settings. (Also be aware that the image examples in those code samples are set up using "in line with text" formatting, which positions a graphic object on the text baseline.)

    The shape in those code samples is positioned relative to the right and bottom page margins. Relative positioning lets you more easily coordinate with a user’s unknown document setup because it will adjust to the user’s margins and run less risk of looking awkward because of paper size, orientation, or margin settings. To retain relative positioning settings when you insert a graphic object, you must retain the paragraph mark (w:p) in which the positioning (known in Word as an anchor) is stored. If you insert the content into an existing paragraph mark rather than including your own, you may be able to retain the same initial visual, but many types of relative references that enable the positioning to automatically adjust to the user’s layout may be lost.

    Working with content controls

    Content controls are an important feature in Word 2013 that can greatly enhance the power of your app for Word in multiple ways, including giving you the ability to insert content at designated places in the document rather than only at the selection.

    In Word, find content controls on the Developer tab of the ribbon, as shown here in Figure 15.

    Figure 15. The Controls group on the Developer tab in Word.

    Types of content controls in Word include rich text, plain text, picture, building block gallery, check box, dropdown list, combo box, date picker, and repeating section.

    • Use the Properties command, shown in Figure 15, to edit the title of the control and to set preferences such as hiding the control container.

    • Enable Design Mode to edit placeholder content in the control.

    If your app works with a Word template, you can include controls in that template to enhance the behavior of the content. You can also use XML data binding in a Word document to bind content controls to data, such as document properties, for easy form completion or similar tasks. (Find controls that are already bound to built-in document properties in Word on the Insert tab, under Quick Parts.)

    When you use content controls with your app, you can also greatly expand the options for what your app can do using a different type of binding. You can bind to a content control from within the app and then write content to the binding rather than to the active selection.

    Note

    Don’t confuse XML data binding in Word with the ability to bind to a control via your app. These are completely separate features. However, you can include named content controls in the content you insert via your app using OOXML coercion and then use code in the app to bind to those controls.

    Also be aware that both XML data binding and Office.js can interact with custom XML parts in your app—so it is possible to integrate these powerful tools. To learn about working with custom XML parts in the Office JavaScript API, see the Additional resources section of this topic.
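
    For a sense of what that integration can look like, here is a minimal, hedged sketch of adding a custom XML part from a task pane app with Office.js; the sample XML string and its namespace are purely illustrative:

    function addCustomXmlPart() {
        // Hypothetical payload; replace with the XML your app actually needs to store.
        var xml = '<customer xmlns="http://contoso.com/sample"><name>Adventure Works</name></customer>';
        Office.context.document.customXmlParts.addAsync(xml, function (result) {
            if (result.status === Office.AsyncResultStatus.Succeeded) {
                // result.value is the new custom XML part; its id can be stored for later lookup.
                console.log('Added custom XML part: ' + result.value.id);
            }
        });
    }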

    Working with bindings in your Word app is covered in the next section of the topic. First, let’s take a look at an example of the Office Open XML required for inserting a rich text content control that you can bind to using your app.

    Important

    Rich text controls are the only type of content control you can use to bind to a content control from within your app.

    Copy
    <pkg:package xmlns:pkg="http://schemas.microsoft.com/office/2006/xmlPackage">
      <pkg:part pkg:name="/_rels/.rels" pkg:contentType="application/vnd.openxmlformats-package.relationships+xml" pkg:padding="512">
        <pkg:xmlData>
          <Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
            <Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="word/document.xml"/>
          </Relationships>
        </pkg:xmlData>
      </pkg:part>
      <pkg:part pkg:name="/word/document.xml" pkg:contentType="application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml">
        <pkg:xmlData>
          <w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:w15="http://schemas.microsoft.com/office/word/2012/wordml" >
            <w:body>
              <w:p/>
              <w:sdt>
                  <w:sdtPr>
                    <w:alias w:val="MyContentControlTitle"/>
                    <w:id w:val="1382295294"/>
                    <w15:appearance w15:val="hidden"/>
                    <w:showingPlcHdr/>
                  </w:sdtPr>
                  <w:sdtContent>
                    <w:p>
                      <w:r>
                      <w:t>[This text is inside a content control that has its container hidden. You can bind to a content control to add or interact with content at a specified location in the document.]</w:t>
                    </w:r>
                    </w:p>
                  </w:sdtContent>
                </w:sdt>
              </w:body>
          </w:document>
        </pkg:xmlData>
      </pkg:part>
     </pkg:package>
    

    As already mentioned, content controls—like formatted text—don’t require additional document parts, so only edited versions of the .rels and document.xml parts are included here.

    The w:sdt tag that you see within the document.xml body represents the content control. If you generate the Office Open XML markup for a content control, you’ll see that several attributes have been removed from this example, including the tag and document part properties. Only essential (and a couple of best practice) elements have been retained, including the following:

    • The alias is the title property from the Content Control Properties dialog box in Word. This is a required property (representing the name of the item) if you plan to bind to the control from within your app.

    • The unique id is a required property. If you bind to the control from within your app, the ID is the property the binding uses in the document to identify the applicable named content control.

    • The appearance attribute is used to hide the control container, for a cleaner look. This is a new feature in Word 2013, as you see by the use of the w15 namespace. Because this property is used, the w15 namespace is retained at the start of the document.xml part.

    • The showingPlcHdr attribute is an optional setting that sets the default content you include inside the control (text in this example) as placeholder content. So, if the user clicks or taps in the control area, the entire content is selected rather than behaving like editable content in which the user can make changes.

    • Although the empty paragraph mark (<w:p/>) that precedes the sdt tag is not required for adding a content control (and will add vertical space above the control in the Word document), it ensures that the control is placed in its own paragraph. This may be important, depending upon the type and formatting of content that will be added in the control.

    • If you intend to bind to the control, the default content for the control (what’s inside the sdtContent tag) must include at least one complete paragraph (as in this example), in order for your binding to accept multi-paragraph rich content.

    Note

    The document part attribute that was removed from this sample w:sdt tag may appear in a content control to reference a separate part in the package where placeholder content information can be stored (parts located in a glossary directory in the Office Open XML package). Although document part is the term used for XML parts (that is, files) within an Office Open XML package, the term document parts as used in the sdt property refers to the same term in Word that is used to describe some content types including building blocks and document property quick parts (for example, built-in XML data-bound controls). If you see parts under a glossary directory in your Office Open XML package, you may need to retain them if the content you’re inserting includes these features. For a typical content control that you intend to use to bind to from your app, they’re not required. Just remember that, if you do delete the glossary parts from the package, you must also remove the document part attribute from the w:sdt tag.

    The next section will discuss how to create and use bindings in your Word app.

    We’ve already looked at how to insert content at the active selection in a Word document. If you bind to a named content control that’s in the document, you can insert any of the same content types into that control.

    So when might you want to use this approach?

    • When you need to add or replace content at specified locations in a template, such as to populate portions of the document from a database

    • When you want the option to replace content that you’re inserting at the active selection, such as to provide design element options to the user

    • When you want the user to add data in the document that you can access for use with your app, such as to populate fields in the task pane based upon information the user adds in the document

    Download the code sample Add and Populate a Binding in Word, which provides a working example of how to insert and bind to a content control, and how to populate the binding.

    Add and bind to a named content control

    As you examine the JavaScript that follows, consider these requirements:

    • As previously mentioned, you must use a rich text content control in order to bind to the control from your Word app.

    • The content control must have a name (this is the Title field in the Content Control Properties dialog box, which corresponds to the Alias tag in the Office Open XML markup). This is how the code identifies where to place the binding.

    • You can have several named controls and bind to them as needed. Use a unique content control name, unique content control ID, and a unique binding ID.

    Copy
    function addAndBindControl() {
        // Try to bind to the named content control first, in case it already exists in the document.
        Office.context.document.bindings.addFromNamedItemAsync("MyContentControlTitle", "text", { id: 'myBinding' }, function (result) {
            if (result.status == "failed") {
                if (result.error.message == "The named item does not exist.") {
                    // The control isn't in the document yet: insert it at the selection, then bind to it.
                    var myOOXMLRequest = new XMLHttpRequest();
                    var myXML;
                    myOOXMLRequest.open('GET', '../../Snippets_BindAndPopulate/ContentControl.xml', false);
                    myOOXMLRequest.send();
                    if (myOOXMLRequest.status === 200) {
                        myXML = myOOXMLRequest.responseText;
                    }
                    Office.context.document.setSelectedDataAsync(myXML, { coercionType: 'ooxml' }, function (result) {
                        Office.context.document.bindings.addFromNamedItemAsync("MyContentControlTitle", "text", { id: 'myBinding' });
                    });
                }
            }
        });
    }
    

    The code shown here takes the following steps:

    • Attempts to bind to the named content control, using addFromNamedItemAsync.

      Take this step first if there is a possible scenario for your app where the named control could already exist in the document when the code executes. For example, you’ll want to do this if the app was inserted into and saved with a template that’s been designed to work with the app, where the control was placed in advance. You also need to do this if you need to bind to a control that was placed earlier by the app.

    • The callback in the first call to the addFromNamedItemAsync method checks the status of the result to see if the binding failed because the named item doesn’t exist in the document (that is, the content control named MyContentControlTitle in this example). If so, the code adds the control at the active selection point (using setSelectedDataAsync) and then binds to it.

    Note

    As mentioned earlier and shown in the preceding code, the name of the content control is used to determine where to create the binding. However, in the Office Open XML markup, the code adds the binding to the document using both the name and the ID attribute of the content control.

    After code execution, if you examine the markup of the document in which your app created bindings, you’ll see two parts to each binding. In the markup for the content control where a binding was added (in document.xml), you’ll see the attribute <w15:webExtensionLinked/>.

    In the document part named webExtensions1.xml, you'll see a list of the bindings you've created. Each is identified using the binding ID and the ID attribute of the applicable control, such as the following, where the appref attribute is the content control ID: <we:binding id="myBinding" type="text" appref="1382295294"/>.

    Important

    You must add the binding at the time you intend to act upon it. Don’t include the markup for the binding in the Office Open XML for inserting the content control because the process of inserting that markup will strip the binding.

    Populate a binding

    The code for writing content to a binding is similar to that for writing content to a selection.

    Copy
    function populateBinding(filename) {
        var myOOXMLRequest = new XMLHttpRequest();
        var myXML;
        myOOXMLRequest.open('GET', filename, false);
        myOOXMLRequest.send();
        if (myOOXMLRequest.status === 200) {
            myXML = myOOXMLRequest.responseText;
        }
        Office.select("bindings#myBinding").setDataAsync(myXML, { coercionType: 'ooxml' });
    }
    

    As with setSelectedDataAsync, you specify the content to be inserted and the coercion type. The only additional requirement for writing to a binding is to identify the binding by ID. Notice how the binding ID used in this code (bindings#myBinding) corresponds to the binding ID established (myBinding) when the binding was created in the previous function.

    Note

    The preceding code is all you need whether you are initially populating or replacing the content in a binding. When you insert a new piece of content at a bound location, the existing content in that binding is automatically replaced. Check out an example of this in the previously-referenced code sample Add and Populate a Binding in Word, which provides two separate content samples that you can use interchangeably to populate the same binding.
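
    If your app also needs to read back whatever content ends up at that bound location (for example, after the user edits it), getDataAsync on the same binding works much like setDataAsync in reverse. A minimal sketch, assuming the same binding ID used above:

    function readBinding() {
        Office.select("bindings#myBinding").getDataAsync({ coercionType: 'ooxml' }, function (result) {
            if (result.status === Office.AsyncResultStatus.Succeeded) {
                // result.value is the flattened Office Open XML package for the bound content.
                console.log(result.value);
            }
        });
    }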

    Many types of content require additional document parts in the Office Open XML package, meaning that they either reference information in another part or the content itself is stored in one or more additional parts and referenced in document.xml.

    For example, consider the following:

    • Content that uses styles for formatting (such as the styled text shown earlier in Figure 2 or the styled table shown in Figure 9) requires the styles.xml part.

    • Images (such as those shown in Figures 3 and 4) include the binary image data in one (and sometimes two) additional parts.

    • SmartArt diagrams (such as the one shown in Figure 10) require multiple additional parts to describe the layout and content.

    • Charts (such as the one shown in Figure 11) require multiple additional parts, including their own relationship (.rels) part.

    You can see edited examples of the markup for all of these content types in the previously-referenced code sample Loading and Writing Office Open XML. You can insert all of these content types using the same JavaScript code shown earlier (and provided in the referenced code samples) for inserting content at the active selection and writing content to a specified location using bindings.

    Before you explore the samples, let's take a look at a few tips for working with each of these content types.

    Important

    Remember, if you are retaining any additional parts referenced in document.xml, you will need to retain document.xml.rels and the relationship definitions for the applicable parts you’re keeping, such as styles.xml or an image file.

    Working with styles

    The same approach to editing the markup that we looked at for the preceding example with directly-formatted text applies when using paragraph styles or table styles to format your content. However, the markup for working with paragraph styles is considerably simpler, so that is the example described here.

    Editing the markup for content using paragraph styles

    The following markup represents the body content for the styled text example shown in Figure 2.

    Copy
    <w:body>
      <w:p>
        <w:pPr>
          <w:pStyle w:val="Heading1"/>
        </w:pPr>
        <w:r>
          <w:t>This text is formatted using the Heading 1 paragraph style.</w:t>
        </w:r>
      </w:p>
    </w:body>
    
    Note

    As you see, the markup for formatted text in document.xml is considerably simpler when you use a style, because the style contains all of the paragraph and font formatting that you otherwise need to reference individually. However, as explained earlier, you might want to use styles or direct formatting for different purposes: use direct formatting to specify the appearance of your text regardless of the formatting in the user’s document; use a paragraph style (particularly a built-in paragraph style name, such as Heading 1 shown here) to have the text formatting automatically coordinate with the user’s document.

    Use of a style is a good example of how important it is to read and understand the markup for the content you’re inserting, because it’s not explicit that another document part is referenced here. If you include the style definition in this markup and don’t include the styles.xml part, the style information in document.xml will be ignored regardless of whether or not that style is in use in the user’s document.

    However, if you take a look at the styles.xml part, you’ll see that only a small portion of this long piece of markup is required when editing markup for use in your app:

    • The styles.xml part includes several namespaces by default. If you are only retaining the required style information for your content, in most cases you only need to keep the xmlns:w namespace.

    • The w:docDefaults tag content that falls at the top of the styles part will be ignored when your markup is inserted via the app and can be removed.

    • The largest piece of markup in a styles.xml part is for the w:latentStyles tag that appears after docDefaults, which provides information (such as appearance attributes for the Styles pane and Styles gallery) for every available style. This information is also ignored when inserting content via your app and so it can be removed.

    • Following the latent styles information, you see a definition for each style in use in the document from which your markup was generated. This includes some default styles that are in use when you create a new document and may not be relevant to your content. You can delete the definitions for any styles that aren't used by your content.

    Note

    Each built-in heading style has an associated Char style that is a character style version of the same heading format. Unless you’ve applied the heading style as a character style, you can remove it. If the style is used as a character style, it appears in document.xml in a run properties tag (w:rPr) rather than a paragraph properties (w:pPr) tag. This should only be the case if you’ve applied the style to just part of a paragraph, but it can occur inadvertently if the style was incorrectly applied.

    • If you're using a built-in style for your content, you don't have to include a full definition. You must include only the style name, style ID, and at least one formatting attribute in order for the coerced OOXML to apply the style to your content upon insertion.

      However, it’s a best practice to include a complete style definition (even if it’s the default for built-in styles). If a style is already in use in the destination document, your content will take on the resident definition for the style, regardless of what you include in styles.xml. If the style isn’t yet in use in the destination document, your content will use the style definition you provide in the markup.

    So, for example, the only content we needed to retain from the styles.xml part for the sample text shown in Figure 2, which is formatted using Heading 1 style, is the following.

    Note

    A complete Word 2013 definition for the Heading 1 style has been retained in this example.

    Copy
    <pkg:part pkg:name="/word/styles.xml" pkg:contentType="application/vnd.openxmlformats-officedocument.wordprocessingml.styles+xml">
      <pkg:xmlData>
        <w:styles xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" >
          <w:style w:type="paragraph" w:styleId="Heading1">
            <w:name w:val="heading 1"/>
            <w:basedOn w:val="Normal"/>
            <w:next w:val="Normal"/>
            <w:link w:val="Heading1Char"/>
            <w:uiPriority w:val="9"/>
            <w:qFormat/>
            <w:pPr>
              <w:keepNext/>
              <w:keepLines/>
              <w:spacing w:before="240" w:after="0" w:line="259" w:lineRule="auto"/>
              <w:outlineLvl w:val="0"/>
            </w:pPr>
            <w:rPr>
              <w:rFonts w:asciiTheme="majorHAnsi" w:eastAsiaTheme="majorEastAsia" w:hAnsiTheme="majorHAnsi" w:cstheme="majorBidi"/>
              <w:color w:val="2E74B5" w:themeColor="accent1" w:themeShade="BF"/>
              <w:sz w:val="32"/>
              <w:szCs w:val="32"/>
            </w:rPr>
          </w:style>
        </w:styles>
      </pkg:xmlData>
    </pkg:part>
    

    Editing the markup for content using table styles

    When your content uses a table style, you need the same relative part of styles.xml as described for working with paragraph styles. That is, you only need to retain the information for the style you’re using in your content—and you must include the name, ID, and at least one formatting attribute—but are better off including a complete style definition to address all potential user scenarios.

    However, when you look at the markup both for your table in document.xml and for your table style definition in styles.xml, you'll see considerably more markup than when working with paragraph styles.

    • In document.xml, formatting is applied by cell even if it’s included in a style. Using a table style won’t reduce the volume of markup. The benefit of using table styles for the content is for easy updating and easily coordinating the look of multiple tables.

    • In styles.xml, you’ll see a substantial amount of markup for a single table style as well, because table styles include several types of possible formatting attributes for each of several table areas, such as the entire table, heading rows, odd and even banded rows and columns (separately), the first column, etc.

    Working with images

    The markup for an image includes a reference to at least one part that includes the binary data to describe your image. For a complex image, this can be hundreds of pages of markup and you can’t edit it. Since you don’t ever have to touch the binary part(s), you can simply collapse it if you’re using a structured editor such as Visual Studio 2012, so that you can still easily review and edit the rest of the package.

    If you check out the example markup for the simple image shown earlier in Figure 3, available in the previously-referenced code sample Loading and Writing Office Open XML, you’ll see that the markup for the image in document.xml includes size and position information as well as a relationship reference to the part that contains the binary image data. That reference is included in the <a:blip> tag, as follows:

    Copy
    <a:blip r:embed="rId4" cstate="print">
    

    Be aware that, because a relationship reference is explicitly used (r:embed="rId4") and that related part is required in order to render the image, you will get an error if you don't include the binary data in your Office Open XML package. This is different from styles.xml, explained previously, which won't throw an error if omitted, since its relationship is not explicitly referenced and the styles part provides attributes to the content (formatting) rather than being part of the content itself.

    Note

    When you review the markup, notice the additional namespaces used in the a:blip tag. You'll see in document.xml that the xmlns:a namespace (the main drawingML namespace) is dynamically placed at the beginning of the use of drawingML references rather than at the top of the document.xml part. However, the relationships namespace (r) must be retained where it appears at the start of document.xml. Check your picture markup for additional namespace requirements. Remember that you don't have to memorize which types of content require what namespaces—you can easily tell by reviewing the prefixes of the tags throughout document.xml.

    Understanding additional image parts and formatting

    When you use some Office picture formatting effects on your image—such as for the image shown in Figure 4, which uses adjusted brightness and contrast settings (in addition to picture styling)—a second binary data part for an HD format copy of the image data may be required. This additional HD format is required for formatting considered a layering effect, and the reference to it appears in document.xml similar to the following:

    Copy
    <a14:imgLayer r:embed="rId5">
    

    See the required markup for the formatted image shown in Figure 4 (which uses layering effects among others) in the Loading and Writing Office Open XML code sample.

    Working with SmartArt diagrams

    A SmartArt diagram has four associated parts, but only two are always required. You can examine an example of SmartArt markup in the Loading and Writing Office Open XML code sample. First, take a look at a brief description of each of the parts and why they are or are not required:

    Note

    If your content includes more than one diagram, they will be numbered consecutively, replacing the 1 in the file names listed here.

    • layout1.xml: This part is required. It includes the markup definition for the layout appearance and functionality.

    • data1.xml: This part is required. It includes the data in use in your instance of the diagram.

    • drawing1.xml: This part is not always required but if you apply custom formatting to elements in your instance of a diagram—such as directly formatting individual shapes—you might need to retain it.

    • colors1.xml: This part is not required. It includes color style information, but the colors of your diagram will coordinate by default with the colors of the active formatting theme in the destination document, based on the SmartArt color style you apply from the SmartArt Tools design tab in Word before saving out your Office Open XML markup.

    • quickStyles1.xml: This part is not required. Similar to the colors part, you can remove this as your diagram will take on the definition of the applied SmartArt style that’s available in the destination document (that is, it will automatically coordinate with the formatting theme in the destination document).

    Tip

    The SmartArt layout1.xml file is a good example of places you may be able to further trim your markup but might not be worth the extra time to do so (because it removes such a small amount of markup relative to the entire package). If you would like to get rid of every last line you can of markup, you can delete the <dgm:sampData…> tag and its contents. This sample data defines how the thumbnail preview for the diagram will appear in the SmartArt styles galleries. However, if it’s omitted, default sample data is used.

    Be aware that the markup for a SmartArt diagram in document.xml contains relationship ID references to the layout, data, colors, and quick styles parts. You can delete the references in document.xml to the colors and styles parts when you delete those parts and their relationship definitions (and it’s certainly a best practice to do so, since you’re deleting those relationships), but you won’t get an error if you leave them, since they aren’t required for your diagram to be inserted into a document. Find these references in document.xml in the dgm:relIds tag. Regardless of whether or not you take this step, retain the relationship ID references for the required layout and data parts.

    Working with charts

    Similar to SmartArt diagrams, charts contain several additional parts. However, the setup for charts is a bit different from SmartArt, in that a chart has its own relationship file. Following is a description of required and removable document parts for a chart:

    Note

    As with SmartArt diagrams, if your content includes more than one chart, they will be numbered consecutively, replacing the 1 in the file names listed here.

    • In document.xml.rels, you’ll see a reference to the required part that contains the data that describes the chart (chart1.xml).

    • You also see a separate relationship file for each chart in your Office Open XML package, such as chart1.xml.rels.

      There are three files referenced in chart1.xml.rels, but only one is required. These include the binary Excel workbook data (required) and the color and style parts (colors1.xml and styles1.xml) that you can remove.

    Charts that you can create and edit natively in Word 2013 are Excel 2013 charts, and their data is maintained on an Excel worksheet that’s embedded as binary data in your Office Open XML package. Like the binary data parts for images, this Excel binary data is required, but there’s nothing to edit in this part. So you can just collapse the part in the editor to avoid having to manually scroll through it all to examine the rest of your Office Open XML package.

    However, similar to SmartArt, you can delete the colors and styles parts. If you've used the chart styles and color styles available in Word 2013 to format your chart, the chart will take on the applicable formatting automatically when it is inserted into the destination document.

    See the edited markup for the example chart shown in Figure 11 in the Loading and Writing Office Open XML code sample.

    You’ve already seen how to identify and edit the content in your markup. If the task still seems difficult when you take a look at the massive Open XML package generated for your document, following is a quick summary of recommended steps to help you edit that package down quickly:

    Note

    Remember that you can use all .rels parts in the package as a map to quickly check for document parts that you can remove.

    1. Open the flattened XML file in Visual Studio 2012 and press Ctrl+K, Ctrl+D to format the file. Then use the collapse/expand buttons on the left to collapse the parts you know you need to remove. You might also want to collapse long parts you need, but know you won’t need to edit (such as the base64 binary data for an image file), making the markup faster and easier to visually scan.

    2. There are several parts of the document package that you can almost always remove when you are preparing Open XML markup for use in your app. You might want to start by removing these (and their associated relationship definitions), which will greatly reduce the package right away. These include the theme1, fontTable, settings, webSettings, thumbnail, both the core and app properties files, and any taskpane or webExtension parts.

    3. Remove any parts that don’t relate to your content, such as footnotes, headers, or footers that you don’t require. Again, remember to also delete their associated relationships.

    4. Review the document.xml.rels part to see if any files referenced in that part are required for your content, such as an image file, the styles part, or SmartArt diagram parts. Delete the relationships for any parts your content doesn’t require and confirm that you have also deleted the associated part. If your content doesn’t require any of the document parts referenced in document.xml.rels, you can delete that file also.

    5. If your content has an additional .rels part (such as chart#.xml.rels), review it to see if there are other parts referenced there that you can remove (such as quick styles for charts) and delete both the relationship from that file as well as the associated part.

    6. Edit document.xml to remove namespaces not referenced in the part, section properties if your content doesn’t include a section break, and any markup that’s not related to the content that you want to insert. If inserting shapes or text boxes, you might also want to remove extensive fallback markup.

    7. Edit any additional required parts where you know that you can remove substantial markup without affecting your content, such as the styles part.

    After you’ve taken the preceding seven steps, you’ve likely cut between about 90 and 100 percent of the markup you can remove, depending on your content. In most cases, this is likely to be as far as you want to trim.

    Regardless of whether you leave it here or choose to delve further into your content to find every last line of markup you can cut, remember that you can use the previously-referenced code sample Get, Set, and Edit Office Open XML as a scratch pad to quickly and easily test your edited markup.

    Tip: If you update an OOXML snippet in an existing solution while developing, clear temporary Internet files before you run the solution again to update the Open XML used by your code. Markup that’s included in your solution in XML files is cached on your computer.

    You can, of course, clear temporary Internet files from your default web browser. To access Internet options and delete these settings from inside Visual Studio 2012, on the Debug menu, choose Options and Settings. Then, under Environment, choose Web Browser and then choose Internet Explorer Options.

    In this topic, you’ve seen several examples of what you can do with Open XML in your apps for Office. We’ve looked at a wide range of rich content type examples that you can insert into documents by using the OOXML coercion type, together with the JavaScript methods for inserting that content at the selection or at a specified (bound) location.
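    To make the insertion side concrete, here is a minimal sketch (not taken verbatim from the referenced samples) that passes a trimmed Office Open XML string to the current selection using the documented setSelectedDataAsync method and the Ooxml coercion type; the ooxmlSnippet value is a placeholder for whatever markup you prepared using the steps above.

        // Insert prepared Office Open XML markup at the user's current selection.
        // "ooxmlSnippet" is a placeholder for the flattened, trimmed package built earlier.
        var ooxmlSnippet = "<pkg:package xmlns:pkg='http://schemas.microsoft.com/office/2006/xmlPackage'> ... </pkg:package>";

        Office.context.document.setSelectedDataAsync(ooxmlSnippet,
            { coercionType: Office.CoercionType.Ooxml },
            function (result) {
                if (result.status === Office.AsyncResultStatus.Failed) {
                    console.log("Insert failed: " + result.error.message);
                }
            });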

    So, what else do you need to know if you’re creating your app both for stand-alone use (that is, inserted from the Store or a proprietary server location) and for use in a pre-created template that’s designed to work with your app? The answer might be that you already know all you need.

    The markup for a given content type and methods for inserting it are the same whether your app is designed to stand-alone or work with a template. If you are using templates designed to work with your app, just be sure that your JavaScript includes callbacks that account for scenarios where referenced content might already exist in the document (such as demonstrated in the binding example shown in the section Add and bind to a named content control).
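    For the template scenario just described, a callback along the following lines is one way to check whether a named content control already exists before writing to it; the content control title "MyContentControl" and the fallback behavior are examples only, not the exact pattern from the referenced section.

        // Try to bind to a content control that a pre-created template may already contain.
        var ooxmlSnippet = "<pkg:package xmlns:pkg='http://schemas.microsoft.com/office/2006/xmlPackage'> ... </pkg:package>"; // placeholder markup

        Office.context.document.bindings.addFromNamedItemAsync(
            "MyContentControl",            // title of the content control in the template (example name)
            Office.BindingType.Text,
            { id: "myBinding" },
            function (result) {
                if (result.status === Office.AsyncResultStatus.Succeeded) {
                    // The control exists, so write the prepared markup into it.
                    result.value.setDataAsync(ooxmlSnippet, { coercionType: Office.CoercionType.Ooxml });
                } else {
                    // The control is missing (for example, the app is running stand-alone);
                    // fall back to inserting at the selection instead.
                    console.log(result.error.message);
                }
            });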

    When using templates with your app—whether the app will be resident in the template at the time that the user created the document or the app will be inserting a template—you might also want to incorporate other elements of the API to help you create a more robust, interactive experience. For example, you may want to include identifying data in a customXML part that you can use to determine the template type in order to provide template-specific options to the user. To learn more about how to work with customXML in your apps, see the additional resources that follow.
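    If you go the customXML route mentioned above, a lookup along these lines (with a made-up namespace) is one way to detect whether the document came from one of your templates; treat the namespace and part contents as assumptions for your own design.

        // Look for an identifying custom XML part; the namespace below is a made-up example.
        Office.context.document.customXmlParts.getByNamespaceAsync(
            "http://contoso.com/app/templateInfo",
            function (result) {
                if (result.status === Office.AsyncResultStatus.Succeeded && result.value.length > 0) {
                    // A matching part exists, so this document was created from one of our templates.
                    result.value[0].getXmlAsync(function (xmlResult) {
                        console.log("Template metadata: " + xmlResult.value);
                    });
                } else {
                    // No part found: treat the app as running stand-alone.
                }
            });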

    For more information, see the following resources:

    Changing Site Access Request Email in SharePoint 2013 (Office 365)

    The option to set the email address for any SharePoint site access requests has moved around in the last few versions, so I thought I’d post this for those searching through old posts looking for one about SharePoint 2013.

    This setting determines who will receive an email when a user requests access to a particular site—usually when the user tries to access the site and is denied. The tricky part is that the email address for this request is not related to the site owner permissions; it’s just a string.

    To find the setting, navigate to:

    Site Settings (gear icon on top right) > Site permissions > Access Request Settings (in the ribbon)

    First use the gear icon in the top-right corner to get to the Site Settings page. If you don’t see the Site Settings link, you probably don’t have sufficient rights to make this change.


    Once there, click on Site permissions.


    This will open the “Permissions: <site name>” page where you can access the Access Request option from the ribbon at the top of the screen. The option you’re changing is “Send all access requests to the following e-mail address.”


    Simply enter the email address you’d like to use for access requests and you’re done.

    Duet Enterprise and NetWeaver – Features and benefits for businesses of integrating SAP with SharePoint


    For years organisations have been scratching their heads trying to figure out how to provide collaboration capabilities on data held within SAP systems.

    Many have developed custom solutions and achieved mixed results. We have also seen the rise of Duet 1.x from Microsoft and SAP that provided 11 specific solutions that surfaced data residing in SAP via Microsoft Office. These were good solutions but limited due to their lack of extensibility.

    Hence the excitement over Duet Enterprise: the benefits that have been realised are exactly what everyone has been looking for. Infosys Technologies has just released a case study with Microsoft that discusses their Duet Enterprise project, and it is an interesting read, particularly the benefits realised:

    1. Minimal Development Effort
    2. Higher Adoption

    3. Enhanced Productivity

    How can these benefits be real?

    Well, to start with, Duet Enterprise provides all the relevant plumbing to blend both SAP and SharePoint “Platforms”. SAP and Microsoft position Duet Enterprise as the “Foundation” layer between the two platforms.

    DE Architecture

    Architecturally speaking, the Duet Enterprise add-ons must be installed on the SAP NetWeaver 7.02 servers and the SharePoint 2010 farm. The SAP NetWeaver 7.02 servers work as the gateway to the SAP backend systems and can actually support any version of SAP system. Duet Enterprise is built on SharePoint 2010 and makes use of Business Connectivity Services, which enables full Create Read Update Delete (CRUD) operations on data residing in external systems. The options for user experience are the same as for any SharePoint 2010 solution: mobile, Office, or a browser. There are no SAP or Duet Enterprise clients.

    Ok, but what does this do?

    This provides organisations with a jointly supported (Microsoft and SAP) technical approach and OOB tools to address the common challenges faced when integrating SAP systems, such as:

    Authentication: Using Claims-Based authentication to ensure SSO is available between SharePoint 2010 sites and SAP backend systems.

    Authorisation: Able to secure objects in SharePoint using SAP roles, supporting concepts such as a manager seeing more than an employee.

    Monitoring: Duet Enterprise SharePoint health rules are provided to proactively monitor your Duet Enterprise solutions. Also available is the Duet Enterprise Management Pack for System Center Operations Manager; a quick video here shows it in action.

    It is also worth taking a look at the Duet Enterprise Architecture for more information.

    These are just some of the behind the scenes aspects of Duet Enterprise that essentially provide a jump start in any SAP/SharePoint integration project. However, there is also another layer that exemplifies how data from SAP can be surfaced through SharePoint 2010 and Office 2010. (Office 2010 isn’t a pre-req but certainly a better experience!).

    Currently included are:

    • Duet Enterprise Workflow
    • Duet Enterprise Profile
    • Duet Enterprise Collaboration
    • Duet Enterprise Sites
    • Duet Enterprise Reporting

    All of these provide a way to jump start development of a solution that can be used OOB or easily extended to suit specific requirements. The result is that organisations are able to surface data typically locked away in backend SAP systems to a much wider audience, in an inexpensive way. I may blog in the future on what each of these is and how to extend them.

    Once deployed, IT departments have a “Foundation” in place to support future extensibility. Once users start seeing what is possible I fully expect the flood gates to open with requests for composite applications.

    What else do you get?

    1. Long-term product roadmap commitment from SAP and Microsoft
    2. Platform for innovation: broad ecosystem of ISV and service partners offering Business Pack Solutions
    3. Partner solutions/apps certified by SAP

    How To : Get data from Windows Azure Marketplace into your Office application

    This post walks through a published app for Office, along the way showing you everything you need to get started building your own app for Office that uses a data service from the Windows Azure Marketplace.

    Ever wondered how to get premium, curated data from Windows Azure Marketplace, into your Office applications, to create a rich and powerful experience for your users? If you have, you are in luck.

    Introducing the first ever app for Office that builds this integration with the Windows Azure Marketplace – US Crime Stats. This app enables users to insert crime statistics, provided by DATA.GOV, right into an Excel spreadsheet, without ever having to leave the Office client.

    One challenge faced by Excel users is finding the right set of data, and apps for Office provides a great opportunity to create rich, immersive experiences by connecting to premium data sources from the Windows Azure Marketplace.

    What is the Windows Azure Marketplace?

    The Windows Azure Marketplace (also called Windows Azure Marketplace DataMarket or just DataMarket) is a marketplace for datasets, data services and complete applications. Learn more about Windows Azure Marketplace.

    This blog article is organized into two sections:

    1. The U.S. Crime Stats Experience
    2. Writing your own Office Application that gets data from the Windows Azure Marketplace

    The US Crime Stats Experience

    You can find the app on the Office Store. Once you add the US Crime Stats app to your collection, you can go to Excel 2013, and add the US Crime Stats app to your spreadsheet.

    Figure 1. Open Excel 2013 spreadsheet

    blog_CrimeStats_fig01

    Once you choose US Crime Stats, the application is shown in the right pane. You can search for crime statistics based on City, State, and Year.

    Figure 2. US Crime Stats app is shown in the right task pane

    blog_CrimeStats_fig02

    Once you enter the city, state, and year, click ‘Insert Crime Data’ and the data will be inserted into your spreadsheet.

    Figure 3. Data is inserted into an Excel 2013 spreadsheet

    blog_CrimeStats_fig03

    What is going on under the hood?

    In short, when the ‘Insert Crime Data’ button is chosen, the application takes the input (city, state, and year) and makes a request to the DataMarket services endpoint for DATA.GOV in the form of an OData Call. When the response is received, it is then parsed, and inserted into the spreadsheet using the JavaScript API for Office.
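    As a rough sketch of those two steps (the service URL, query fields, and authentication details below are placeholders rather than the real DATA.GOV contract; only setSelectedDataAsync and the Matrix coercion type come from the documented JavaScript API for Office):

        // 1. Issue a hypothetical OData request to a DataMarket service (placeholder URL and fields).
        var serviceUrl = "https://api.datamarket.azure.com/data.gov/Crimes/CityCrime" +
                         "?$filter=City eq 'Seattle' and Year eq 2008&$format=json";

        var xhr = new XMLHttpRequest();
        xhr.open("GET", serviceUrl);
        // DataMarket uses Basic authentication with your account key (placeholder credentials here).
        xhr.setRequestHeader("Authorization", "Basic " + btoa("accountKey:<your-account-key>"));
        xhr.onload = function () {
            var results = JSON.parse(xhr.responseText).d.results;

            // 2. Shape the response as a matrix (array of arrays) and insert it at the selection.
            var rows = [["City", "Year", "Violent crime"]];
            results.forEach(function (r) {
                rows.push([r.City, r.Year, r.ViolentCrime]);
            });

            Office.context.document.setSelectedDataAsync(rows,
                { coercionType: Office.CoercionType.Matrix },
                function (asyncResult) {
                    if (asyncResult.status === Office.AsyncResultStatus.Failed) {
                        console.log(asyncResult.error.message);
                    }
                });
        };
        xhr.send();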

    Writing your own Office application that gets data from the Windows Azure Marketplace

    Prerequisites for writing Office applications that get data from Windows Azure Marketplace

    How to write Office applications using data from Windows Azure Marketplace

    The MSDN article, Create a Marketplace application, covers everything necessary for creating a Marketplace application, but below are the steps in order.

    1. Register with the Windows Azure Marketplace:
      • You need to register your application first on the Windows Azure Marketplace Application Registration page. Instructions on how to register your application for the Windows Azure Marketplace are found in the MSDN topic, Register your Marketplace Application.
    2. Authentication:
    3. Receiving Data from the Windows Azure Marketplace DataMarket service

    New “Filter My ListView” SharePoint Web Part and App now available for SP 2010 & 2013 On-premise and Office 365!!

    What is it?

    The “Filter My ListView” Web Part / App is a SharePoint web part that enables you to create custom filters to find information in a SharePoint list or document library.

    my listview

    Why do you need it?

    In working with SharePoint, and with large lists or document libraries containing 100K+ items, users frequently find that there is no usable tool for filtering data.

    SharePoint lets us create views, but their functionality doesn’t meet users’ requirements. The most common complaint is that a list view is static and users can’t modify it on the fly.

    On the other hand, the “Filter My List” web part can only filter data represented in the current view’s columns, and users can’t apply multiple filters or richer criteria (date ranges, filter operators, and so on) to the list.

    All this leads to the need for a custom solution that addresses these limitations.

    Usage

    The “Filter My ListView” Web Part / App is a simple-to-use SharePoint list view filter. It enables you to create a custom filter form composed of all list fields (not only the fields contained in the current list view).

    Supported field types

    • Simple text (jQuery UI is used for autocomplete)

    • Text with options (lets you select the filtering type)

    • Date

    • Date range

    • Boolean

    • Drop-down list representing the unique values of a field

    • User or Group

    • Taxonomy term picker

    • Multi-select CheckBoxList

    The “Filter My ListView” Web Part / App builds a filter form using different types of controls:

    • TextBox. “Contains” criteria filter
    • TextBox with autocomplete
    • TextBox with options. Allows user to choose filter criteria that can be one of these:
      • Equals
      • Not equals
      • Contains
      • Begins with
    • Date
    • Date Range
    • DropDownList
    • DropDownList with multiple selection
    • People picker
    • MetaData picker

    Relation between field type and supported filter types is represented in this matrix:

    Contact me now through my blog, https://sharepointsamurai.wordpress.com or at tomas.floyd@outlook.com for this and more SharePoint and Office 365 custom developed Web Parts and Apps

    Tip: Bypass WebProxy for BCS service application in Duet Enterprise landscape

    Setting up a fresh Duet Enterprise landscape, I was confronted with an issue trying to import BDC Models from the SAP Gateway system into SharePoint BCS:
    Application definition import failed. The following error occurred: Error loading url: “http://….”. This normally happens when url does not point to a valid discovery document, or XSD schema.
    Using Fiddler I detected that the cause of the problem is a “(407) Proxy Authentication Required” issue: “The ISA server requires authorization to fulfill the request. Access to proxy filter is denied.” Although I did set up a rule in Windows Credentials Manager for automatic authentication against the web proxy, this is not picked up in the context of the BCS service application as an autonomous running process. As it turns out, by default .NET web applications and services will attempt to use a proxy, even if they don’t need one.
    So how do you resolve this situation? Multiple approaches are possible:

    1. Explicitly set the proxy credentials for the BCS application process. It is not possible to set the proxy credentials directly in the web.config of 14hive\webservices\bdc. Instead you must use a 2-step delegation approach: refer in the web.config to a custom proxy module implementation, and build the custom proxy to explicitly set the proxy credentials:
      using System.Net;

      namespace ByPassProxyAuthentication
      {
          // Custom IWebProxy implementation that supplies explicit proxy credentials.
          public class ByPassProxy : IWebProxy
          {
              public ICredentials Credentials
              {
                  get { 
                      return new NetworkCredential(
                          "username", "password", "domain"); }
                  set { }
              }

              // IWebProxy also requires these members; GetProxy must return your proxy address.
              public Uri GetProxy(Uri destination)
              {
                  return new Uri("http://yourproxyserver:8080"); // placeholder proxy address
              }

              public bool IsBypassed(Uri host) { return false; }
          }
      }
      
      <!-- In web.config, point the default proxy at the custom module
           (the assembly and namespace names here follow the sample project). -->
      <system.net>
        <defaultProxy enabled="true" useDefaultCredentials="false">
          <module type="ByPassProxyAuthentication.ByPassProxy, ByPassProxyAuthentication" />
        </defaultProxy>
      </system.net>
    2. Disable usage of (default)proxy altogether for the BCS application process. This is a viable approach in case the consumed external systems are all within the internal company network infra.
      <system.net>  
        <defaultProxy  
          enabled="false"  
          useDefaultCredentials="false"/>  
        </system.net>
    3. Disable usage of (default)proxy for specific addresses for the BCS application process.
      <system.net>
          <defaultProxy>
              <bypasslist>
                  <add address="[a-z]+\.contoso\.com" />
                  <add address="192\.168\..*" />
                  <add address="Netbios name of server" />
              </bypasslist>
          </defaultProxy>
      </system.net>

      The first entry bypasses the proxy for all servers in the contoso.com domain; the second bypasses the proxy for all servers whose IP addresses begin with 192.168. The third entry bypasses the proxy for the server with the given NetBIOS name.

    4. Disable usage of proxy for specific address on system level. This is in fact the most simple approach, just disable proxy usage for certain url’s for all processes on system level. That is also the potential disadvantage, it can be that it is not allowed to disable proxy usage for all processes. You disable the proxy via IE \ Internet Options \ Connections \ LAN Settings \ Advanced \ Proxy Server \ Exception <Do not use proxy server for addresses beginning with>.

    One design to rule them all – Responsive design

    In the last few years, the use of mobile devices to surf the web has increased significantly.

    According to some researchers, by 2015, more mobile devices than desktop computers will be used to access the web. Mobile devices come in different sizes and capabilities. And since the desktop experience just isn’t good for mobile users, what are the options for improving the experience of mobile users on your website?

    Improving the user experience across devices

    Optimizing the user experience of a website across different devices is a complex process. Not only do you have to take into account the screen resolution of each device, but you also need to consider its capabilities (such as touch or pointer-based) and device size (1024×768 might be clearly readable for everyone on a 20-inch monitor but might result in a very bad experience on a 5-inch screen).

    When you are planning improvements to the user experience of your website on mobile devices, there is no silver bullet approach. You have to research who your users are, which devices they use, and what they are trying to achieve on your website.

    You also have to have clear goals for what purpose your website serves and how it should guide your visitors in the process of becoming your customers.

    You have several options for improving the user experience of a public-facing website. Which one you choose depends on the different factors that apply to your scenario.

    Mobile websites

    In the past, when the web technology wasn’t as sophisticated as it is nowadays, it was a common practice to provide mobile users with a separate mobile website to optimize their experience. Being hosted on a separated URL, such as http://m.contoso.com, the mobile site would have a user experience optimized for mobile devices.

    In some scenarios, organizations would even go a step further and optimize the copy on the mobile website. When a user navigated to the website using a mobile device, the main website would detect the use of a mobile device and automatically redirect the visitor to the mobile version.

    It’s not that hard to imagine that building and maintaining two different sites is not only costly but also time consuming. Every update has to be done separately. Even then, with the diversity of today’s mobile devices, the questions remain whether a single mobile website would suffice and whether you wouldn’t need more websites to reach the whole spectrum of your customers.

    Being able to reuse the content across the main and mobile websites simplifies the content management process. But the need to separately maintain the functionality of both websites makes it hard to justify this approach in most scenarios.

    Mobile apps

    One of the recent developments of the mobile market is the increased popularity of companion apps. By using the native capabilities of specific devices, you can build rich mobile apps to support different use cases. There is no better user experience on a mobile device than the native one offered by the device itself. For more information, see Build mobile apps for SharePoint 2013.

    But is it realistic to build separate apps for all the different scenarios and for all the different mobile devices used to navigate the website? Although mobile apps are of great value for supporting specific use cases, there is still the need to access the information on the website from a mobile device in a user-friendly way.

    Responsive web design

    Instead of building separate mobile sites for mobile devices, what if we could have one website that automatically adapts itself to the particular device?

    Responsive web design is a concept based on the ability to separate the design from the content on a website. Using the CSS media queries capability implemented in all modern browsers, and based on the screen dimensions of the specific device, you can load different style sheets to ensure that the website is presented in a user-friendly manner. And because CSS has its limitations, you can use JavaScript to further optimize the interface and interaction of a website on mobile devices. For more information, see Implementing your responsive designs on SharePoint 2013.
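    Since the paragraph above mentions complementing CSS with JavaScript, here is a tiny, framework-free sketch that uses the standard matchMedia API to react when the viewport crosses a breakpoint; the breakpoint value and class name are arbitrary examples, not part of any SharePoint API.

        // React to a breakpoint from script (generic example, no SharePoint dependency).
        var narrowQuery = window.matchMedia("(max-width: 768px)");

        function applyLayout(mq) {
            if (mq.matches) {
                document.body.classList.add("compact-navigation");    // for example, switch to a touch-friendly menu
            } else {
                document.body.classList.remove("compact-navigation");
            }
        }

        applyLayout(narrowQuery);              // run once on load
        narrowQuery.addListener(applyLayout);  // and again whenever the breakpoint is crossed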

    From the search engine optimization (SEO) perspective, responsive web design is the recommended way to optimize public-facing websites for mobile devices. After all, since the same HTML is sent to every device, it’s sufficient for an Internet search engine to index the content once, and it can be sure that the search results will apply the search query on every device.

    Implementing responsive web design on a public-facing website is relatively easy assuming you start planning for it from the beginning. The great advantage of responsive web design above other approaches is that you maintain your website once to support a variety of audiences, and the different experiences are future-proof as they depend on the devices’ dimensions rather than their identity.

    The following figures show how the sample Contoso Electronics website is displayed on different devices using responsive web design. Figure 1 shows the screen shot taken on a desktop device.

    Figure 1. The Contoso Electronics website displayed on a desktop device

    Figure 2 shows how the Contoso Electronics website looks like on different mobile devices.

    Figure 2. The Contoso Electronics website displayed on mobile devices

    SharePoint 2013 device channels

    One of the new capabilities of SharePoint 2013 is device channels. You can use device channels to optimize how a website is displayed on different devices. By defining different channels and associating different devices with them, you can use different master pages to optimize how the website is presented to the user.

    Figure 3 shows a sample configuration of device channels for a public-facing website built with SharePoint 2013.

    Figure 3. Device channels configured for a public-facing website built on SharePoint 2013

    Whereas responsive web design uses a device’s screen size to determine the presentation layer, device channels in SharePoint 2013 use the identity of the browser on the particular device to decide which presentation style to use.

    Depending on how many different devices your site visitors use, managing the different devices and experiences can become complex. By using device channels, you get more flexibility in controlling the markup of your website for the different devices. Another benefit of using device channels is that you can serve different content to different devices, whereas the same content is served when using responsive web design. With device channels, you can apply additional optimizations to your website, such as resizing images and videos server-side using the renditions capability, which further improves the performance and user experience of your website. For more information, see How to: Manage image renditions in SharePoint 2013.

    With all the different options at our disposal, which one should we use to get the best results?

    Improving the user experience of a public-facing website in SharePoint 2013

    First and foremost, it’s important to note that SharePoint 2013 supports all the methods mentioned above for improving user experience on mobile devices.

    Whether you’re looking at building a separate website for mobile users, supporting certain use cases with a mobile app, implementing responsive web design, or using device channels, it can all be implemented in your website on top of SharePoint 2013.

    Not only does SharePoint 2013 not stand in your way, but it also supports you in implementing some of those improvements.

    For example, using the cross-site publishing capability, you can easily publish the centrally managed content on both the main and the mobile websites. With the Search REST API, you can have your content published in your mobile app, and if you’re looking at optimizing the presentation of your website across different devices, SharePoint 2013 offers plenty of features to help you.

    With all these techniques at your disposal, it is up to you to decide which method, or combination of methods, is the best choice for what you’re trying to achieve. While you might be interested in supporting a particular complex process with a dedicated mobile app, it might still be of added value to ensure that everyone, regardless of their device, can access all the information on your website.

    In most scenarios, it’s easy to choose whether or not a particular optimization technique offers added value. A slightly more difficult choice, partly due to the similarity of both methods, is whether you should use responsive web design or device channels to optimize the presentation of your website for mobile devices.

    Responsive web design and device channels comparison

    Responsive web design and the SharePoint 2013 device channels capability are similar in how they let you optimize a single website to be displayed in a user-friendly way on different devices. Despite this similarity, there are a few important differences between both approaches. Table 1 presents a comparison of the different properties of both approaches.

    Device channels | Responsive web design
    Device management | Property management
    Different HTML for every channel | Same HTML for every device
    More management (support for new devices) | Future proof (device size)
    More flexibility | Limited by CSS support and capabilities
    Custom Vary-By User Agent response header required | Preferred by Internet search engines
    Table 1. Comparison of device channels and responsive web design

    Applying user experience

    First of all, there is a difference in how both approaches determine which user experience should be applied for the particular user. Responsive web design uses the size of the screen to determine how the content should be laid out in the browser’s window. Device channels, on the other hand, use the identity of the browser to load the suitable channel.

    While responsive web design can cause different experiences to be loaded depending on the size of the browser window, device channels will always load the same experience for the same device regardless of the browser window size. Using device channels can have great advantages, for example, from the troubleshooting point of view where the user and the helpdesk employee would see the same interface despite the possible differences in their screen resolutions or browser window sizes.

    Page markup

    Another difference between device channels and responsive web design is how the page contents are served. Responsive web design changes only the presentation layer of the website. Although you can hide some pieces of the page in the browser using CSS, they are still present in the website’s code and therefore loaded. When using device channels, you can use different master pages to ensure that only the relevant markup is served to users. Additionally, you can use the device channel panels to further control the content elements loaded on specific pages.

    Although device channels allow for better control of the rendered HTML and therefore optimized performance of the website, more effort is required to ensure that Internet search engines will properly deal with all the different versions of the website presented to different devices. You can achieve this by using the Vary-By User Agent response header, but it has to be done manually.

    Future-proofness

    Responsive web design uses the size of the browser window to distinguish between the different experiences. This is a robust approach, and the chances are low that a new device will appear on the market that has a poor user experience despite the configured breakpoints. One reason for that might be related to some specific capabilities of such devices, but again, chances of this are very rare.

    SharePoint 2013 device channels are based on the identity of the browser used to open the website. There are two challenges with this approach. First of all, in some situations it might be impossible to distinguish between the same browser installed on the same operating system but on two devices with distinct capabilities. Second, if a new device appears on the market, you would have to verify that this device is assigned to the right device channel on your website.

    Choosing the right approach for optimizing the user experience

    Although responsive web design and device channels are very similar, their capabilities differ and they have different impact when used for optimizing a website for mobile devices. Due to their similarities and their own strength, choosing between the two approaches is often difficult. Why not combine both approaches to get the best of what they offer?

    Combining responsive web design and device channels

    An interesting scenario worth considering is to combine responsive web design and SharePoint 2013 device channels to benefit from the strengths of both approaches.

    When combining responsive web design and device channels, you could use responsive web design to create the baseline cross-device experience. Depending on your design for the different breakpoints, using responsive web design could be good for the 80%, or maybe even 90%, of the optimizations. The remainder—whether they’re caused by how the web design changes between the breakpoints or by the capabilities of the different devices that should be supported—could be covered by device channels and device channel panels.

    By using responsive web design to build the baseline for the cross-browser experience, we benefit from its future-proofness and robustness. For the specific exceptions, we can benefit from the granular control that SharePoint device channels offer us.

    Great tool available for Responsive Web Pages in SharePoint!!

    Web designers are crucial for a successful SharePoint implementation. We all know that. With this in mind, I wanted to write an article for our SharePoint web designers out there. Not being an authority on the subject, I decided to ask someone who has been working in web design for some time. By asking my contacts, I got the email address of an expert in SharePoint branding and UX customization. Eric Overfield was the name on the contact card. I set up a conference call, and very soon we were chatting and discussing UX, branding, artists, engineers, and SharePoint.

    The conversation quickly turned to devices and how to make SharePoint work as well as possible in this new and changing set of displays. Eric’s answer was: responsive web design. Responsive web design allows us to look at a site like a fluid grid. The fluid, dynamic grid adapts itself to fit the information in display resolutions as different as those in a phone, a tablet, and a full desktop monitor. Keep in mind that the mix of display resolutions doubles if you consider landscape and portrait orientations available in all these devices.

    The author of the original post about responsive web design, Ethan Marcotte, provided a reference site to demo the concepts explained in his post. In this demo, you can observe how the elements in the page rearrange themselves to fit the current resolution as you resize your browser window. The demo left me wondering how a SharePoint website would react to different resolutions by using the fluid grid characteristic of responsive frameworks. Fortunately, Eric, along with some other people, developed Responsive SharePoint. Responsive SharePoint is a CodePlex project that you can use to try responsive frameworks on your SharePoint website.

    I followed the provided instructions to install the resources by using Design Manager on an out-of-the-box publishing site. In no time, I was looking at how the site dynamically reacted to different resolutions as I resized my browser window. I decided to test the project by using the following display resolutions:

    • 1200×1900 (desktop, portrait orientation)
    • 768×1366 (tablet, portrait orientation)
    • 480×800 (smartphone, portrait orientation)

    The results were amazing. Within 10 minutes, I had a SharePoint website that automatically adapts to display resolutions commonly used in devices. The following figure compares the website in commonly used display resolutions:

    Figure 1. Comparison of resolutions of the SharePoint website using a responsive framework

    How is this achieved?

    In this post, I can only explain that Responsive SharePoint uses media queries to match the width of the display in the device and then applies a set of styles to present the content in the available space. For this to work, you need a browser that supports media queries. The latest version of the major browsers support such functionality. The following code example shows how to declare media queries:

    @media (min-width: 769px) and (max-width: 979px) {
        /*
            Styles for display width 
            between 769 and 979 pixels
        */
    }
    
    @media (max-width: 768px) {
        /*
            Styles for display width 
            equal to 768 pixels and thinner
        */
    }
    
    @media (min-width: 1200px) {
        /*
            Styles for display width 
            equal to 1200 pixels and wider
        */
    }

    Of course there is much more to it. You can learn more by browsing the Responsive SharePoint CodePlex project.

    The new design and branding features in SharePoint 2013 make it easy to create and edit your web design, including responsive designs. You can even use the tools you are familiar with by mapping a network drive to the SharePoint 2013 Master Page Gallery. In my case, I used Microsoft Expression Web 4 to browse and edit the master pages and CSS files.

    Free integration guide -Microsoft Dynamics CRM Online and Office 365

    Combining the online services of Office 365 with Microsoft Dynamics CRM Online empowers your teams to work where and when they want with best-of-class cloud services.

    This guide is intended for Microsoft Dynamics CRM administrators and technical decision makers interested in exploring Office 365 services and how they integrate with Microsoft Dynamics CRM Online. Integration with Office 365 becomes increasingly relevant to

    Microsoft Dynamics CRM Online users as management of Microsoft Dynamics CRM Online shifts to the Microsoft online services environment.

    For a .pdf version of this document: Integration Guide: Microsoft Dynamics CRM Online and Office 365 please visit – http://download.microsoft.com/download/D/4/F/D4F5A3C3-E3CB-48C9-85DE-4ED0B7FFBD60/CRMO365Integration.pdf

    Some of what this paper covers:

    • Add an Office 365 trial subscription to Microsoft Dynamics CRM Online
    • Set up CRM Online to use Exchange Online
    • Set up CRM Online to use SharePoint Online
    • Set up CRM Online to use Lync Online
    • Set up CRM Online to use Yammer

    New Office 365 Tool available to help you re-design for the App Model

    Learn about a tool that analyzes your SharePoint full-trust code solutions and Office add-ins and macros to help you redesign them for the app model. Security is important to us—your code remains private while using the tool.

    The app model is a great tool that fully embraces the benefits of moving to the cloud, but migrating to the model can be a time-consuming task. SharePoint is a complex enterprise-level collaboration system, and custom solutions built on top of the SharePoint platform using full trust code don’t easily map to a cloud-based deployment. Similarly, Office client solutions – managed add-ins and VBA macros built on individual client object models – are widely deployed on desktops and need to be ported to work in the cloud. We understand that creating these solutions required a significant investment. We want to help you translate these solutions to cloud-friendly apps as painlessly as possible.

    Introducing the SharePoint and VBA Code Analyzer—a tool to help you understand how you can refactor your SharePoint and Office client solutions to Office 365. Working with Mobilize.net, one of our long-standing partners, we’ve created a web portal where you can upload your SharePoint and Office client solutions and get a complete analysis of the existing code. We’ll provide guidance and recommendations on the level of effort needed to move them to the cloud, so you can start refactoring your custom business solutions as soon as possible.

    “But, wait!” you think. “I can’t send my company’s code where external parties look at it!” No worries—we have put several security measures in place to prevent unauthorized access, and the code runs through a completely black-box process. The analysis is done with automated tools which only collect metadata about files, lines of code, ASP.NET application pages, web parts, libraries, workflows, and other platform-dependent objects. We then use this data to generate reports on how you can map your existing code to the new model.

    The tool is also hosted behind a digital certificate-enabled site, which ensures that everything that goes across the wire to our black box process is encrypted.

    Learn about the all new Office Web Widgets

    Client controls, such as the Office Web Widgets – Experimental, can greatly reduce the amount of time required to build apps and, at the same time, increase the quality of the apps. For this to be true, we have to be sure the widgets meet certain criteria:

    • Widgets must be designed to be used in any webpage, even if the page is not hosted on SharePoint.

    • Widgets work within the Office controls runtime. This lets us provide a common set of requirements and a consistent syntax for using the widgets.

    • Widgets that communicate back to SharePoint use the cross-domain library. The widgets don’t have a dependency on a particular server-side platform or technology; you can use them regardless of your choice of server technology.

    • Widgets must coexist with other elements in the page. Including a widget in a page should not modify other elements in it.

    • Widgets play nice with existing frameworks. We want to be sure you can still use the tools and technologies that you are used to.

    Figure 1. An app using Office Web Widgets – Experimental

             
    You can use the widgets by installing the Office Web Widgets – Experimental NuGet package from Visual Studio. For more information, see Managing NuGet Packages Using the Dialog. You can also browse the NuGet gallery page.

    Your feedback and comments helped us decide what widgets to provide. As you can see in Figure 1, the (1) People Picker and (2) Desktop List View widgets are ready for you to try and experiment with. Please keep the feedback coming at the Office Developer Platform UserVoice site.

    You can also see the widgets in action in the Office Web Widgets – Experimental Demo code sample.

    People Picker widget

    You can use the experimental People Picker widget in apps to help your users find and select people and groups in a tenant. Users can start typing in the text box and the widget retrieves the people whose name or e-mail address matches the text.

    Figure 2. People Picker widget solving a query

    You can declare the widget in the HTML markup or programmatically using JavaScript. In either case, you use a div element as a placeholder for the widget. You can also set properties and event handlers for the People Picker widget. The following table shows the available properties and events of the People Picker widget.

    Property/Event | Type | Description
    objectType | JSON object (list of strings) | Type of items the widget will resolve. Options: User, Group. Defaults to user only.
    allowMultipleSelections | Boolean | True/False. If False, the widget allows selecting only one item at a time. Default=False.
    rootGroupName | string | If provided, the widget limits the selection to items in this group. If not provided, the widget queries objects from the whole tenancy.
    selectedItems | JSON array | List of items selected. Each item returns an object representing a user or group.
    onAdded | Function | Event that fires when a new object is added to the selection. The handler function receives the object added.
    onRemoved | Function | Event that fires when an object is removed from the selection. The handler function receives the object removed.
    onChange | Function | Either adding or removing objects triggers this event. No parameters are passed to the handler function.
    validationErrors | Array | Array of possible validation errors: empty, unresolvedItem, tooManyItems.
    autoShowValidationMessage | Boolean | True=Show, False=Don’t show.
    hasErrors | Boolean | True=There are one or more validation errors, False=There are no validation errors.
    errors | Array | Array of possible validation errors: empty, unresolvedItem, tooManyItems.
    displayErrors | Boolean | True=Display the errors, False=Don’t display the errors.

    The CSS classes for the People Picker widget are defined in the Office.Controls.css style sheet. You can override the classes and style the widget for your app.

               

    For more information, see How to: Use the experimental People Picker widget in apps for SharePoint and Use the People Picker experimental widget in an app code sample.
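    As a rough illustration of the programmatic pattern described above, the sketch below instantiates the experimental People Picker against an empty div and wires up a couple of the members from the property table; the Office.Controls.PeoplePicker constructor name and option syntax are assumptions based on the experimental package, so treat the linked how-to and code sample as the authoritative reference.

        // Minimal sketch only: constructor name and option syntax are assumptions.
        var placeholder = document.getElementById("peoplePickerDiv"); // an empty <div> reserved in the page

        var picker = new Office.Controls.PeoplePicker(placeholder, {
            objectType: ["User", "Group"],     // resolve both users and groups (defaults to users only)
            allowMultipleSelections: true,     // default is single selection
            onAdded: function (item) {
                console.log("Added to selection: ", item);
            },
            onRemoved: function (item) {
                console.log("Removed from selection: ", item);
            }
        });

        // Later, picker.selectedItems holds the users and groups the user picked.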

                    

                                     

             

               

    Desktop List View widget

    Your users can benefit from the Desktop List View widget to display the data in a list just like the regular List View widget, but you can use it in apps that are not necessarily hosted in SharePoint.

               

    Figure 3. Desktop List View widget displaying the data in a list

               

    You can specify an existing view on the list; the widget renders the fields in the order in which they appear in that view.

               

                                                                                                                                   

    Note: At this moment, the Desktop List View widget only displays the data. It doesn’t offer editing capabilities.

               

    You can provide a placeholder for the widget using a div element. You can programmatically or declaratively use the widget.

               

    You also can set properties or event handlers for the Desktop List View widget. The following table shows the available properties and events in the Desktop List View widget.

               

                             

               

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         

                       

    Property/Event | Type | Description
    listUrl | URL | URL of the list view to draw items from. It can be a relative URL, in which case it is assumed to be located on the app web itself, or an absolute URL.
    viewName | string | Name of the view to show. This is the programmatic name of the view (not its display name).
    onItemSelected | Function | Event that fires when an item is selected on the list.
    onItemAdded | Function | Event that fires when a new item is added to the list.
    onItemRemoved | Function | Event that fires when an item is removed from the list.
    selectedItems | Array | List of selected items in JSON format.

    The widget requires the SharePoint website style sheet. You can reference the SharePoint style sheet directly or use the chrome widget. For more information about the style sheet, see How to: Use a SharePoint website’s style sheet in apps for SharePoint and How to: Use the client chrome control in apps for SharePoint.

               

    To see the List View widget in action, see the Use the Desktop List View experimental widget in an app code sample. Also see How to: Use the experimental Desktop List View widget in apps for SharePoint.
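    For orientation, here is a comparable sketch for the Desktop List View widget using the listUrl, viewName, and onItemSelected members from the table above; as with the People Picker example, the Office.Controls.ListView constructor name and the app web URL are assumptions, so rely on the linked code sample for the exact syntax.

        // Minimal sketch only: constructor name is an assumption; options come from the table above.
        var listPlaceholder = document.getElementById("listViewDiv"); // an empty <div> reserved for the widget
        var appWebUrl = "https://contoso-apps.sharepoint.com/sites/dev/MyApp"; // placeholder app web URL

        var listView = new Office.Controls.ListView(listPlaceholder, {
            listUrl: appWebUrl + "/Lists/Announcements",   // relative or absolute URL of the source list
            viewName: "AllItems",                          // programmatic view name, not its display name
            onItemSelected: function () {
                // selectedItems returns the current selection in JSON format.
                console.log(listView.selectedItems);
            }
        });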

    Widgets can help to speed up the development process and reduce the cost and time-to-market of your apps. Office Web Widgets – Experimental provides widgets that you can use in your non-production apps. Your feedback and comments are welcome on the Office Developer Platform UserVoice site.

                            

                                
     
     

    Create a new Search Service Application in SharePoint 2013 using PowerShell

    The search architecture in SharePoint 2013 has changed quite a bit when compared to SharePoint 2010. In fact the Search Service in SharePoint 2013 is completely overhauled. It is a combination of FAST Search and SharePoint Search components.


    As you can see, the query and crawl topologies are merged into a single topology, simply called “Search topology”. Provisioning of the search service application creates 4 databases:

    • SP2013_Enterprise_Search – This is a search administration database. It contains configuration and topology information
    • SP2013_Enterprise_Search_AnalyticsReportingStore – This database stores the result of usage analysis
    • SP2013_Enterprise_Search_CrawlStore – The crawl database contains detailed tracking and historical information about crawled items
    • SP2013_Enterprise_Search_LinksStore – Stores the information extracted by the content processing component and also stores click-through information

    # Create a new Search Service Application in SharePoint 2013

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Settings
    $IndexLocation = "C:\Data\Search15Index"   # Location must be empty, it will be deleted during the process!
    $SearchAppPoolName = "Search App Pool"
    $SearchAppPoolAccountName = "Contoso\administrator"
    $SearchServerName = (Get-ChildItem env:computername).value
    $SearchServiceName = "Search15"
    $SearchServiceProxyName = "Search15 Proxy"
    $DatabaseName = "Search15_AdminDB"

    Write-Host -ForegroundColor Yellow "Checking if Search Application Pool exists"
    $SPAppPool = Get-SPServiceApplicationPool -Identity $SearchAppPoolName -ErrorAction SilentlyContinue

    if (!$SPAppPool)
    {
        Write-Host -ForegroundColor Green "Creating Search Application Pool"
        $spAppPool = New-SPServiceApplicationPool -Name $SearchAppPoolName -Account $SearchAppPoolAccountName -Verbose
    }

    # Start the search service instances
    Write-Host "Start Search Service instances...."
    Start-SPEnterpriseSearchServiceInstance $SearchServerName -ErrorAction SilentlyContinue
    Start-SPEnterpriseSearchQueryAndSiteSettingsServiceInstance $SearchServerName -ErrorAction SilentlyContinue

    Write-Host -ForegroundColor Yellow "Checking if Search Service Application exists"
    $ServiceApplication = Get-SPEnterpriseSearchServiceApplication -Identity $SearchServiceName -ErrorAction SilentlyContinue

    if (!$ServiceApplication)
    {
        Write-Host -ForegroundColor Green "Creating Search Service Application"
        $ServiceApplication = New-SPEnterpriseSearchServiceApplication -Partitioned -Name $SearchServiceName -ApplicationPool $spAppPool.Name -DatabaseName $DatabaseName
    }

    Write-Host -ForegroundColor Yellow "Checking if Search Service Application Proxy exists"
    $Proxy = Get-SPEnterpriseSearchServiceApplicationProxy -Identity $SearchServiceProxyName -ErrorAction SilentlyContinue

    if (!$Proxy)
    {
        Write-Host -ForegroundColor Green "Creating Search Service Application Proxy"
        New-SPEnterpriseSearchServiceApplicationProxy -Partitioned -Name $SearchServiceProxyName -SearchApplication $ServiceApplication
    }

    Write-Host $ServiceApplication.ActiveTopology

    # Clone the default topology (which is empty), add the components to the clone, and then activate it
    Write-Host "Configuring Search Component Topology...."
    $clone = $ServiceApplication.ActiveTopology.Clone()
    $SSI = Get-SPEnterpriseSearchServiceInstance -Local
    New-SPEnterpriseSearchAdminComponent -SearchTopology $clone -SearchServiceInstance $SSI
    New-SPEnterpriseSearchContentProcessingComponent -SearchTopology $clone -SearchServiceInstance $SSI
    New-SPEnterpriseSearchAnalyticsProcessingComponent -SearchTopology $clone -SearchServiceInstance $SSI
    New-SPEnterpriseSearchCrawlComponent -SearchTopology $clone -SearchServiceInstance $SSI

    # Recreate the index location (must be empty)
    Remove-Item -Recurse -Force -LiteralPath $IndexLocation -ErrorAction SilentlyContinue
    mkdir -Path $IndexLocation -Force

    New-SPEnterpriseSearchIndexComponent -SearchTopology $clone -SearchServiceInstance $SSI -RootDirectory $IndexLocation
    New-SPEnterpriseSearchQueryProcessingComponent -SearchTopology $clone -SearchServiceInstance $SSI
    $clone.Activate()

    Write-Host "Your search service application $SearchServiceName is now ready"
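
    If you want to verify that the topology activated cleanly, a quick status check along these lines can help. This is only a sketch; it assumes $ServiceApplication from the script above is still in scope.

    # Show the active topology and the health of each search component.
    Get-SPEnterpriseSearchTopology -SearchApplication $ServiceApplication -Active
    Get-SPEnterpriseSearchStatus -SearchApplication $ServiceApplication -Text

    Each component should eventually report Active; immediately after activation some of them may still be initializing.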

    Update

    To configure failover server(s) for Search DBs, use the following PowerShell:

    Thanks to Marcel Jeanneau for sharing this!

    #Admin Database
    $ssa = Get-SPEnterpriseSearchServiceApplication "Search Service Application"
    Set-SPEnterpriseSearchServiceApplication -Identity $ssa -FailoverDatabaseServer <failoverserveralias\instance>

    #Crawl Database
    $CrawlDatabase0 = ([array]($ssa | Get-SPEnterpriseSearchCrawlDatabase))[0]
    Set-SPEnterpriseSearchCrawlDatabase -Identity $CrawlDatabase0 -SearchApplication $ssa -FailoverDatabaseServer <failoverserveralias\instance>

    #Links Database
    $LinksDatabase0 = ([array]($ssa | Get-SPEnterpriseSearchLinksDatabase))[0]
    Set-SPEnterpriseSearchLinksDatabase -Identity $LinksDatabase0 -SearchApplication $ssa -FailoverDatabaseServer <failoverserveralias\instance>

    #Analytics database
    $AnalyticsDB = Get-SPDatabase -Identity <analytics reporting database name or GUID>
    $AnalyticsDB.AddFailOverInstance("failover alias\instance")
    $AnalyticsDB.Update()
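
    To double-check which failover partner each search database ended up with, something like the following should do. It is a sketch only; it assumes the SPDatabase objects expose their failover partner through the FailoverServer property.

    # List each search database together with its configured failover partner.
    # Assumes the databases use the SP2013_Enterprise_Search* naming prefix from above,
    # and that SPDatabase exposes the partner via the FailoverServer property.
    Get-SPDatabase | Where-Object { $_.Name -like "SP2013_Enterprise_Search*" } |
        Select-Object Name, FailoverServer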

    You can always change the default content access account using the following command:

    $password = Read-Host -AsSecureString
    Set-SPEnterpriseSearchServiceApplication -Identity "SSA name" -DefaultContentAccessAccountName Contoso\account -DefaultContentAccessAccountPassword $password
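
    Read-Host -AsSecureString prompts for the password interactively; in an unattended script you could build the secure string yourself. The sketch below is illustrative only (the account name, password, and SSA name are placeholders), and hard-coding passwords in scripts is of course not recommended.

    # Non-interactive variant: build the secure string in the script itself.
    # "Contoso\crawlAccount", "P@ssw0rd" and "SSA name" are placeholder values.
    $password = ConvertTo-SecureString "P@ssw0rd" -AsPlainText -Force
    Set-SPEnterpriseSearchServiceApplication -Identity "SSA name" `
        -DefaultContentAccessAccountName "Contoso\crawlAccount" `
        -DefaultContentAccessAccountPassword $password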

    Look out for my PowerShell Web Part and my Google Analytics Web Part and App, which are under development and will be available for purchase soon!


    SharePoint Samurai