Tag Archives: Tools

New Azure To TFS Deploy Tool Available

Deploy Windows Azure project directly from TFS 2010 Build Server

 

Build Definition Template
DeployToAzure automates deployment of a Windows Azure project and makes it part of the TFS 2010 build process, without using PowerShell or the Azure Management CmdLets.

Solution includes:

  • A set of custom workflow actions wrapping Azure Management API operations such as GetDeployment, GetOperationStatus, NewDeployment, RemoveDeployment and SetDeploymentStatus;
  • Helper actions such as FindPackageAndConfigurationFiles, LoadCertificate and WaitForOperationToComplete;
  • Designer activity DeployToAzure implementing deployment logic;
  • Reusable build definition template.
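For orientation, here is a minimal C# sketch (not the actual activity source, and the names are illustrative) of the kind of call these workflow actions wrap: loading the management certificate by thumbprint, then querying the Windows Azure Service Management API for an operation's status over HTTPS with that certificate.

    using System;
    using System.IO;
    using System.Net;
    using System.Security.Cryptography.X509Certificates;

    static class AzureManagementSketch
    {
        // Roughly what a LoadCertificate helper does: find the management certificate
        // in the given store by thumbprint.
        public static X509Certificate2 LoadCertificate(StoreName name, StoreLocation location, string thumbprint)
        {
            var store = new X509Store(name, location);
            store.Open(OpenFlags.ReadOnly);
            try
            {
                var matches = store.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
                if (matches.Count == 0)
                    throw new InvalidOperationException("Management certificate not found.");
                return matches[0];
            }
            finally
            {
                store.Close();
            }
        }

        // Roughly what a GetOperationStatus action does: call the Service Management API
        // and return the response XML, which contains <Status>InProgress|Succeeded|Failed</Status>.
        public static string GetOperationStatus(string subscriptionId, string requestId, X509Certificate2 certificate)
        {
            var url = string.Format("https://management.core.windows.net/{0}/operations/{1}", subscriptionId, requestId);
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Headers.Add("x-ms-version", "2011-10-01"); // adjust the API version string as appropriate
            request.ClientCertificates.Add(certificate);
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }
    }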

How it works:

Create build definition

Open the New Build Definition dialog and select the Process tab. Click the New button in the Build process template section, choose the Select an existing XAML file option, and specify the path to DefaultTemplateWithDeploymentToAzure.xaml in your source control.


The Deployment to Azure section will appear in Build process parameters. Click the Refresh button if you don’t see it.


Now define the build properties. First, open the 1. Required / Items to build dialog, select your solution, and specify the configuration to build.

Open the Deployment to Azure section and provide the following parameters:

  • API Certificate store location – store location of your management certificate. Select LocalMachine if the certificate was created by the command above.
  • API Certificate Thumbprint – thumbprint of the management certificate.
  • API Certificate store – store where the management certificate is located. Select Root if the certificate was created by the command above.
  • Cloud Project – cloud project to be packaged and deployed. It will be built with the same configuration as the one specified for solution building.
  • Deployment label – label of the deployment. The label can contain the same set of macros as the Build Label.
  • Hosted Service Name – DNS prefix of the hosted service. You can find it on the Windows Azure portal.
  • Service configuration – service configuration to be used for deployment, for example Cloud. Leave this field empty to use the default configuration.
  • Slot – select Staging or Production.
  • Storage Service Name – DNS prefix of the storage service that will be used to upload the deployment package.
  • Subscription Id – Azure subscription ID.
  • Wait for roles to start – set to true if the build should wait for all instances to start.
  • Initialization Timeout – if the above is true, specify how long the build should wait before a timeout exception is thrown.
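To illustrate the last two parameters, the wait-and-timeout behaviour can be pictured as a simple polling loop. This is only a sketch of what a WaitForOperationToComplete-style helper might do, not the activity's real code, and the parameter names are hypothetical.

    using System;
    using System.Threading;

    static class DeploymentWait
    {
        // Poll an operation's status until it succeeds, fails, or the configured
        // Initialization Timeout elapses (all names here are illustrative).
        public static void WaitForOperationToComplete(Func<string> getStatus, TimeSpan initializationTimeout)
        {
            DateTime deadline = DateTime.UtcNow + initializationTimeout;
            while (true)
            {
                string status = getStatus(); // e.g. "InProgress", "Succeeded", "Failed"
                if (status == "Succeeded") return;
                if (status == "Failed")
                    throw new InvalidOperationException("Deployment operation failed.");
                if (DateTime.UtcNow > deadline)
                    throw new TimeoutException("Roles did not start within the initialization timeout.");
                Thread.Sleep(TimeSpan.FromSeconds(30)); // wait before polling again
            }
        }
    }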

Contact me at floyd_tomas@yahoo.com for this and other Azure, SharePoint, Office 365, TFS and Agile tools.



SharePoint Online: Software Boundaries, Limits and Planning Guide

This article describes some important limitations that you might need to know about for the different SharePoint Online plans in Office 365. For example, it provides information about the number of supported users, storage quotas, and file-size limits. This article covers a range of plans: SharePoint Online in Office 365 Small Business and in Office 365 Enterprise, plus standalone plans. The limits listed are for paid subscriptions. You might see different limits for trial plans and SharePoint Online preview sites.

Note    In Office 365 plans, software boundaries and limits for SharePoint Online are managed separately from mailbox storage limits. Mailbox storage limits are set up and managed by using Exchange Online. For more information about how Exchange manages mailbox limits, see Mailbox types and storage limits for Recipients.



 

SharePoint Online Feature availability

Need help determining which SharePoint solution best fits your organization’s needs?

The various Office 365 plans include different SharePoint Online offerings. These include:

  • SharePoint Online for Office 365 Small Business
  • SharePoint Online for Office 365 Midsize Business
  • SharePoint Online for Office 365 Enterprise, Education, and Government

You can choose the plan that best fits your organization’s needs. Each person who accesses the SharePoint Online service must be assigned to a subscription plan. SharePoint Online can be included in a Microsoft Office 365 plan, or it can be purchased as a standalone plan, such as SharePoint Enterprise Plan 1 or SharePoint Enterprise Plan 2.

Limits in SharePoint Online in Office 365 plans

In this section:

Limits for SharePoint Online for Office 365 Small Business

SharePoint Online Small Business and SharePoint Online Small Business Premium have common boundaries and limits. The following table describes those limits.

Feature Description
Storage per user (contributes to total storage base of tenant) 500 megabytes (MB) per subscribed user.
Site collection quota limit Up to 1 TB per site collection. (25 GB for a trial).

5,000 items in site libraries, including files and folders.

The minimum storage allocation per site collection is 100 MB.

Site collections (#) per tenant 1 site collection per tenant.
Subsites Up to 2,000 subsites per site collection
Total available tenant storage 10 GB + 500 MB per user.

For example, if you have 10 users, the base storage allocation is 15 GB (10 GB + 500 MB * 10 users).

You can purchase additional storage up to a maximum of 1 TB.

Personal site storage 1 TB per user, as soon as provisioned.

This amount is counted separately, and does not add to or subtract from the overall storage allocation for a tenant. Personal site storage applies to a user’s OneDrive for Business library and personal newsfeed. For more information, see Additional information about OneDrive for Business limits.

Public Website storage default 5 GB

A SharePoint admin can allocate up to 1 TB (the limit for a site collection).

File upload limit 2 GB per file.
File attachment size limit 250 MB
Sync limits 20,000 items in the OneDrive for Business library, including files and folders.

5,000 items in site libraries, including files and folders.

Number of users 1 – 25 users
Number of external user invitees There is no limit to the number of external users you can invite to your SharePoint Online site collections. For more information, see Manage external sharing for your SharePoint Online environment.

When reviewing the information in the previous table, remember that the base storage limits for Office 365 for Small Business (10 GB + 500 MB per subscribed user) will affect some of these values. For example, although SharePoint Online for Small Business imposes a limit of 1 TB per site collection, your particular tenant might not have enough storage available to contain a site collection of 1 TB.
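The base allocation formula is simple enough to check yourself; a throwaway C# snippet using the figures above would be:

    using System;

    class StorageAllocationExample
    {
        static void Main()
        {
            int users = 10;                          // subscribed users, as in the example above
            double baseStorageGb = 10 + 0.5 * users; // 10 GB base + 500 MB per user
            Console.WriteLine("Base allocation: {0} GB", baseStorageGb); // prints 15 GB
        }
    }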

 

 Important    It’s a good idea to monitor the Recycle Bin and empty it regularly. Content in the Recycle Bin is counted against the storage quota for a tenant. For example, if the Recycle Bin on a site contains 5 GB of content, that 5 GB is subtracted from the available storage.

 

Limits for SharePoint Online for Office 365 Midsize Business

The following table shows the software boundaries and limits for the SharePoint Online Midsize Business plan.

Feature Description
Storage per user (contributes to total storage base of tenant) 500 megabytes (MB) per subscribed user.
Storage base per tenant 10 GB + 500 MB per subscribed user.

For example, if you have 250 users, the base storage allocation is 135 GB (10 GB + 500 MB * 250 users).

You can purchase additional storage up to a maximum of 20 TB.

Additional storage at a cost per GB per month. To buy storage, see Change storage space for your subscription.

 Important    You can’t buy additional storage for a trial subscription.

Site collection quota limit Up to 1 TB per site collection. (25 GB for a trial).

5,000 items in site libraries, including files and folders.

SharePoint admins can set storage limits for site collections and sites. The minimum storage allocation per site collection is 100 MB.

Site collections (#) per tenant 20 site collections (other than personal sites).
Subsites Up to 2,000 subsites per site collection.
Personal site storage 1 TB per user, as soon as provisioned.

Personal site storage applies to a user’s OneDrive for Business library and personal newsfeed. This amount is counted separately, and does not add to or subtract from the overall storage allocation for a tenant. For more information about OneDrive for Business, see Additional information about OneDrive for Business limits later in this article.

Public Website storage default 5 GB

A SharePoint admin can allocate up to 1 TB (the limit for a site collection).

File upload limit 2 GB per file.
File attachment size limit 250 MB
Sync limits 20,000 items in the OneDrive for Business library, including files and folders.

5,000 items in site libraries, including files and folders.

Number of users 1 – 250 users
Number of external user invitees There is no limit to the number of external users you can invite to your SharePoint Online site collections. For more information, see Manage external sharing for your SharePoint Online environment.

When reviewing the information in the previous table, remember that the base storage limits for Office 365 for Midsize Business (10 GB + 500 MB per subscribed user) will affect some of these values. For example, although SharePoint Online for Midsize Business imposes a limit of 1 TB per site collection and a limit of 20 site collections, your particular tenant might not have enough storage available to contain 20 site collections of 1 TB each.

 Important    It’s a good idea to monitor the Recycle Bin and empty it regularly. Content in the Recycle Bin is counted against the storage quota for a tenant. For example, if the Recycle Bin on a site contains 25 GB of content, that 25 GB is subtracted from the available storage.

 

 

Limits for SharePoint Online for Office 365 Enterprise, Education, and Government

One or more Office 365 subscription plans can be included as part of your subscription. This is true for the following plan offerings:

  • Microsoft Office 365 Enterprise subscriptions (E1 – E4)
  • Microsoft Office 365 Government subscriptions (G1 – G4)
  • Microsoft Office 365 Education subscriptions (A2 – A4)
  • Microsoft Office 365 Kiosk subscriptions (K1-K2)
  • SharePoint Online stand-alone subscription plans (Plan 1 and Plan 2).

 

These plans have common boundaries and limits. The following table describes those limits.

 

 

Feature Office 365 Enterprise plans (including E1 – E4, A2-A4, G1-G4, and SharePoint Online Plan 1 and Plan 2) Office 365 Kiosk plans (Enterprise and Government K1 – K2)
Storage per user (contributes to total storage base of tenant) 500 megabytes (MB) per subscribed user. Zero (0).

Licensed Kiosk Workers do not add to the tenant storage base.

Additional storage (per GB per month); no minimum purchase To buy storage, see Change storage space for your subscription.

 Important    You can’t buy additional storage for a trial subscription.

To buy storage, see Change storage space for your subscription.

 Important    You can’t buy additional storage for a trial subscription.

Storage base per tenant 10 GB + 500 MB per subscribed user + additional storage purchased.

For example, if you have 10,000 users, the base storage allocation is approximately 5 TB (10 GB + 500 MB * 10,000 users).

You can purchase an unlimited amount of additional storage.

 Important    If you have a Government Community Cloud plan, you can purchase additional storage up to 25 TB.

10 GB + additional storage purchased.

You can purchase an unlimited amount of additional storage.

 Important    If you have a Government Community Cloud plan, you can purchase additional storage up to 25 TB.

Site collection storage limit Up to 1 TB per site collection. (25 GB for trial).

SharePoint admins can set storage limits for site collections and sites. The minimum storage allocation per site collection is 100 MB.

5,000 items in site libraries, including files and folders.

 Important    If you have a Government Community Cloud plan, the limit is 100 GB.

Up to 1 TB per site collection. (25 GB for a trial). SharePoint admins can set storage limits for site collections and sites. The minimum storage allocation per site collection is 100 MB.

 Important    If you have a Government Community Cloud plan, the limit is 100 GB.

Kiosk workers (plans K1-K2) cannot administer SharePoint site collections. You will need a license for at least one Enterprise plan user to manage Kiosk site collections.

Site collections (#) per tenant 500,000 site collections (other than personal sites). 500,000 site collections.
Subsites Up to 2,000 subsites per site collection Up to 2,000 subsites per site collection
Personal site storage 1 TB per user (100 GB for government plans), as soon as provisioned.

Personal site storage applies to a user’s OneDrive for Business library and personal newsfeed. This amount is counted separately, and does not add to or subtract from the overall storage allocation for a tenant.

For more information about OneDrive for Business, see Additional information about OneDrive for Business limits later in this article.

Not available.
Public Website storage default 5 GB

A SharePoint admin can allocate up to 1 TB (the limit for a site collection).

5 GB

A SharePoint admin can allocate up to 1 TB (the limit for a site collection).

Kiosk workers (plans K1-K2) cannot administer SharePoint site collections. You will need a license for at least one Enterprise plan user to manage Kiosk site collections.

File upload limit 2 GB per file. 2 GB per file.
File attachment size limit 250 MB 250 MB
Sync limits 20,000 items in the OneDrive for Business library, including files and folders.

5,000 items in site libraries, including files and folders.

20,000 items in the OneDrive for Business library, including files and folders.

5,000 items in site libraries, including files and folders.

Maximum number of users per tenant 1 – 500,000+

 Note    If you have more than 500,000 users, please contact your Microsoft representative to discuss detailed requirements.

1 – 500,000+

 Note    If you have more than 500,000 users, please contact your Microsoft representative to discuss detailed requirements.

Number of external user invitees There is no limit to the number of external users you can invite to your SharePoint Online site collections. For more information, see Manage external sharing for your SharePoint Online environment. There is no limit to the number of external users you can invite to your SharePoint Online site collections. For more information, see Manage external sharing for your SharePoint Online environment.

When reviewing the information in the previous table, remember that the base storage limits for Office 365 for Enterprises (10 GB + 500 MB per subscribed user) will affect some of these values. For example, although SharePoint Online for Enterprise plans imposes a limit of 1 TB per site collection and a limit of 500,000 site collections, your particular tenant might not have enough storage available to contain 500,000 site collections of 1 TB each.

 Important    It’s a good idea to monitor the Recycle Bin and empty it regularly. Content in the Recycle Bin is counted against the storage quota for a tenant. For example, if the Recycle Bin on a site contains 25 GB of content, that 25 GB is subtracted from the available storage.

 

 

Limits for site elements in SharePoint Online

There are also limits for site elements of a SharePoint Online site. Here are some examples:

  • List and Library limits    Different types of columns have different limitations. For example, you can have up to 276 columns in a list for columns that contain a single line of text.
  • Page limits    You can add up to 25 Web Parts to a single wiki or web page.
  • Security limits    Different security features have different limits. For example, a single user can belong to no more than 5,000 security groups.

 

The specific limits for the previous site elements are too numerous to list here, but you can learn more about them in the TechNet article Software Boundaries and Limits for SharePoint 2013. In this linked article, only the sections on List and Library Limits, Page Limits, and Security Limits apply to SharePoint Online.

 

Additional information about OneDrive for Business limits

Each user in SharePoint Online for Office 365 gets an individual storage allocation of 1 TB for personal site content (100 GB for government plans). Personal sites include the user’s OneDrive for Business library, a Recycle Bin, and personal newsfeed information.

All SharePoint Online in Office 365 plans include the same storage allocation for individual personal sites. This storage allocation is separate from the tenant allocation.

For more information about how users can manage their individual OneDrive for Business allocation, see OneDrive for Business library limits.

 

 

Additional Resources

 

For information about this: Go here:
Office 365 connectivity limits To learn more about Internet bandwidth, port and protocol considerations for Office 365 plans, see Office 365 Ports and Protocols.
SharePoint feature availability To learn more about SharePoint feature availability and the SharePoint Online service in Office 365, see SharePoint Online Service Descriptions.
SharePoint Online search limits To learn more about the search limits for SharePoint Online, see Search limits for SharePoint Online.
Mobile devices To learn more about opening a SharePoint Online site from a mobile device, see Use a mobile device to work with SharePoint Online sites.
File types To learn about file types that you can’t add to a list, see Types of files that cannot be added to a list or library.
Online URLs To learn about SharePoint Online addresses, see SharePoint Online URLs and IP Addresses.
Site languages To learn how to set language for your sites, see Change your language and region settings.
Planning and deploying SharePoint Online
Change storage space

 Important    You can’t buy additional storage for a trial subscription.

Understanding Business Intelligence

Though Business Intelligence is technology driven, it is more about business requirements and less about technology. The BI champion/sponsor in the organization defines the vision and mission.


The leader must have the ability to precisely define the business intelligence requirements of the organization: the format of the reports, the relationships between the different data elements, and the version of the data to be used. The leaders also need to specify how much history needs to be included and how often the data needs to be provided to the different stakeholders of the process.

BI is then driven by the business objective, which may not necessarily be to reduce cost or increase the top line/bottom line for an organization, but to use data analysis to bring efficiencies to the process or enhance the service experience for the customer.

The impetus for the BI initiative is the availability of data and technology for data analysis.  Ideally the organization should have at least two or three years of data in electronic format for meaningful analysis. The larger the volume of core data available in the systems, the more meaningful will be the output that can be expected from Business Intelligence.

However, it must be recognized that data may not always be structured, and data coming from multiple sources may not be in the same format. Data will have to be interpreted on the basis of logical assumptions, and data gaps will have to be identified upfront so that changes can be made to business processes at the point of data capture.

Management must be made aware that the resource and time commitment for BI is much higher than for typical IT projects, and there must be a high level of management commitment to making the BI project a success.

  1. Business leaders and IT analysts will have to allocate a substantial amount of time to mapping business requirements to IT system capabilities, both during the design and the implementation phase of the project. The leaders and IT developers will have to constantly interact for a proper representation of the business questions in terms of analytical outputs.

  2. IT professionals involved in the process will have to explain to the business leadership the nature and meaning of the data that is available in the systems. They will have to clarify how exceptions are to be handled and also the limitations of the systems. The implication is that IT resources will also have to make a time commitment to BI.

  3. Subject matter experts also have a significant role to play in Business Intelligence projects. As the ultimate users of the system, they are in the best position to test the outputs and validate the significance of the queries and the outputs derived from them. They have the experience and the ability to flag exceptions and help fix them. This implies that subject matter experts too need to contribute a significant amount of time for the success of the project. They may even need to join the project team on a full-time basis during the testing phase.

The Project plan must reflect the level of business personnel who will be involved and the organization must be willing to release these people to join the project team as and when a request for their services is made.

It also follows that the project team and all those involved in BI must be open to learning and acquiring the skills that are essential for the effective functioning of the BI system that is being put in place.

Having said all this, it is necessary to point out that Business Intelligence cannot be a time bound project. Nor can the teams be disbanded with the first successful run of a data query or queries.

Query design, testing, redesign and use of new toolsets are inevitable. In short, BI is an evolving system that cannot be pinned down and bounded by traditional project management definitions.

The BI requirements change as business requirements evolve and change. No single requirement definition can be characterized as a permanent, unchanging requirement.  The Business leaders, IT analysts and Subject matter experts will have to be constantly engaged in designing and developing queries; testing the outputs on field formations; obtaining feedback on the usefulness or otherwise of the outputs and exceptions that need to be handled.  It is an iterative process.

It requires the institution of an agile system development and process management approach. Highly skilled personnel must constantly and continuously work together to deliver on the objectives of BI, with few requirements defined upfront and more requirements designed and refined as they go.

Scope of work will have to be “time-boxed” to each cycle of the project and goals and objectives of each time box will have to be specified separately.  Cycles which cannot be completed within the time specified will have to be deferred and included as part of the future development cycles. As a result, traditional project management methodologies will fail.

Since change is the only constant in BI, the definition of a change management strategy is an imperative. The strategy must be built around the recognition that business requirements change, which changes BI requirements and queries, which in turn changes the toolsets used and the skillsets required by BI stakeholders.

It should be remembered that people are resistant to change. Consumers of BI reports are no exceptions. They need to be educated about the benefits of the exercise, and the producers of the data must be aligned to ensure data quality is never compromised, by placing appropriate controls in the systems.

Training needs must be studied and documented, and training organized whenever there is a change in any of the components that operationalize the Business Intelligence unit.

The organization must also recognize that reporting requirements and formats will change with every change in the BI requirement. All reporting formats cannot be axed at once and all reporting formats cannot change overnight. The change must be planned and initiated based on schedules that have been agreed upon by the different stakeholders.

Existing reports and tools should be retired gradually, and transition periods must be orchestrated carefully and thoughtfully. Training must be organized to transition all stakeholders to the new formats. Organization of workshops and change management seminars must be part and parcel of the Business Intelligence unit’s functioning.

Finally, it must be reiterated that BI is not a project. It is a program.  The solution must dovetail into the existing environment and reinforce the business processes that are in use.  Any data extraction exercise for BI must be done without disturbing the workflow in the organization or impacting the reliability of the information that is being gathered during business operations.

A Look At: Visual Studio CodeLens

A Visual Studio Full of Panels

Let’s say you’re looking at a code file, specifically a method.  Your Visual Studio environment may look like this:

image

I’m looking at the second Create method (the one that takes a Customer).  If I want to know where this method may be referenced, I can “Find All References”, either by selecting it from the context menu, or using Shift + F12. Now I have this:

image

Great! Now, if I decide to change this code, will it still work? Will my tests still work? In order to figure that out, I need to open my Test Explorer window.

image

Which gives me a slightly more cluttered VS environment:

image

(Now I can see my tests, but I still need to try and identify which tests actually exercise my method.)

Another great point of context to have is knowing if I’m looking at the latest version of my code.  I’d hate to make changes to an out-of-date version and grant myself a merge condition.  So next I need to see the history of the file.

image

Cluttering my environment even more (because I don’t want to take my eyes off my code, I need to snap it somewhere else), I get this:

image

Okay, time out.

Yes, this looks pretty cluttered, but I can organize my panels better, right?  I can move some panels to a second monitor if I want, right?  Right on both counts.  By doing so, I can get a multi-faceted view of the code I’m looking at.  However, what if I start looking at another method, or another file?  The “context” of those other panels doesn’t follow what I’m doing.  Therefore, if I open the EmployeesController.cs file, my “views” are out of sync!

image

That’s not fun.

A Visual Studio Full of Context

So all of the above illustrates two main benefits of something like CodeLens.  CodeLens inserts easy, powerful, at-a-glance context for the code you’re looking at.  If it’s not turned on, turn it on in Options:

image

While you’re there, look at all the information it’s going to give you!

Once you’ve enabled CodeLens, let’s reset to the top of our scenario and see what we have:

image

Notice an “overlay” line of text above each method.  That’s CodeLens in action. Each piece of information is called a CodeLens Indicator, and provides specific contextual information about the code you’re looking at.  Let’s look more closely.

image

References

image

References shows you exactly that – references to this method. Click on that indicator and you can see and do some terrific things:

image

It shows you the references to this method, where those references are, and even allows you to display those references on a Code Map:

image

Tests

image

As you can imagine, this shows you tests for this method.  This is extremely helpful in understanding the viability of a code change.  This indicator lets you view the tests for this method, interrogate them, as well as run them.

image

As an example, if I double-click the failing test, it will open the test for me.  In that file, CodeLens will inform me of the error:

image

Dramatic pause: This CodeLens indicator is tremendously valuable in a TDD (Test Driven Development) workflow. Imagine sitting with your test file and code file side by side, turning on “Run Tests After Build”, and using the CodeLens indicator to get immediate feedback about your progress.
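For example, a test like the hypothetical MSTest case below (the controller and model here are simplified stand-ins, not the code from the screenshots) is exactly what the Tests indicator would surface above the Create method, and what “Run Tests After Build” would re-run for you:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Simplified stand-ins for the controller and model being discussed (hypothetical).
    public class Customer { public string Name { get; set; } }
    public class CustomersController
    {
        public object Create(Customer customer) { return customer; }
    }

    [TestClass]
    public class CustomersControllerTests
    {
        [TestMethod]
        public void Create_WithValidCustomer_ReturnsResult()
        {
            var controller = new CustomersController();
            var result = controller.Create(new Customer { Name = "Contoso" });

            // CodeLens counts this method both as a reference to Create and as one of its tests.
            Assert.IsNotNull(result);
        }
    }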

Authors

image

This indicator gives you very similar information to the next one, but lists the authors of this method for at-a-glance context.  Note that the latest author is the one noted in the CodeLens overlay.  Clicking on this indicator provides several options, which I’ll explain in the next section.

image

Changes

image

The Changes indicator tells you information about the history of the file as it exists in TFS, specifically Changesets.  First, the overlay tells you how many recent changes there are to this method in the current working branch.  Second, if you click on the indicator you’ll see there are several valuable actions you can take right from that context:

image

What are we looking at?

  • Recent check-in history of the file, including Changeset ID, Branch, Changeset comments, Changeset author, and Date/time.
  • Status of my file compared to history (notice the blue “Local Version” tag telling me that my code is 1 version behind current).
  • Branch icons tell me where each change came from (current/parent/child/peer branch, farther branch, or merge from parent/child/unrelated (baseless)).

Right-clicking on a version of the file gives you additional options:

image

  • I can compare my working/local version against the selected version
  • I can open the full details of the Changeset
  • I can track the Changeset visually
  • I can get a specific version of the file
  • I can even email the author of that version of the file
  • (Not shown) If I’m using Lync, I can also collaborate with the author via IM, video, etc.

This makes it a heck of a lot easier to understand the churn or velocity of this code.

Incoming Changes

image

The Incoming Changes indicator was added in 2013 Update 2, and gives you a heads up about changes occurring in other branches by other developers.  Clicking on it gives you information like:

image

Selecting the Changeset gives you the same options as the Authors and Changes indicators.

This indicator has a strong moral for anyone who’s ever been burned by having to merge a bunch of stuff as part of a forward or reverse integration exercise:  If you see an incoming change, check in first!

Work Items (Bugs, Work Items, Code Reviews)

image

I’m lumping these last indicators together because they are effectively filtered views of the same larger content: work items.  Each of these indicators gives you information about work items linked to the code in TFS.

image

image

Knowing if/when there were code reviews performed, tasks or bugs linked, etc., provides fantastic insight about how the code came to be.  It answers the “how” and “why” of the code’s current incarnation.

 

A couple final notes:

  • The indicators are cached so they don’t put unnecessary load on your machine.  As such they are scheduled to refresh at specific intervals.  If you don’t want to wait, you can refresh the indicators yourself by right-clicking the indicators and choosing “Refresh CodeLens Team Indicators”

image

  • There is an additional CodeLens indicator in the Visual Studio Gallery – the Code Health Indicator. It gives method maintainability numbers so you can see how your changes are affecting the overall maintainability of your code.
  • You can dock the CodeLens indicators as well – just know that if they dock, they act like other panels and will be static.  This means you’ll have to refresh them manually (this probably applies most to the References indicator).
  • If you want to adjust the font colors and sizes (perhaps to save screen real estate), you can do so in Tools –> Options –> Fonts and Colors.  Choose “Show settings for” and set it to “CodeLens”.

HTML5 SharePoint Pic Web Part Released and Available !!

This is a Sandbox web part control to display a matrix of image thumbnails.

Use it to build a Metro-style UI or a picture gallery to show products, news, or a social team page that integrates with pictures, etc. All this, from any SharePoint picture library.

Supports: SharePoint 2010 & 2013 On-Premises Web Part, SharePoint Online Web Part

FEATURES OF THE WEB PART – ver. 1.0

     

PREVIEW EXAMPLE OF THE CONTROL





 

How To: Design the Physical Architecture to Support Collaborative Development and ALM of SharePoint Foundation 2010 Application

Introduction

This article explains the physical architecture that best fits collaborative development and ALM of a SharePoint Foundation 2010 application, which servers and tools are needed, and how they play key roles in ALM of SharePoint Foundation 2010. The purpose of this article is to provide an overall understanding of the various servers and farms connected to each other in SharePoint Foundation.

Background

A basic understanding of the different server operating systems & SharePoint Foundation 2010 is required.

Solution

Application Life-cycle Management (ALM) is the co-ordination of development life-cycle activities—including requirements, modeling, development, build, and testing. Recently, ALM has expanded beyond the application and the software development life cycle to also include business solution governance, infrastructure management, operations, and support.

You can use ALM to help align your organization in the context of a software solution in business, development, and operations. With an application development platform that supports ALM, you can provide integration between the various tools used and activities performed within each of these capabilities.

There are four main types of staging servers, plus a standalone developer’s environment, which play a key role in the ALM of a SharePoint 2010 application:

  1. Development SharePoint Farm
  2. Team Foundation Server
  3. Integration/Testing Farm
  4. Production Farm
    +
    Developer’s Workstation

The below figure is a physical architecture which depicts how each server is interconnected to support collaborative development and ALM for a SharePoint Foundation 2010 application:


Development SharePoint Farm

A SharePoint farm is fundamentally a collection of SharePoint role servers that provide for the base infrastructure required to house SharePoint sites. The farm level is the highest level of SharePoint architecture, providing a distinct operational boundary for a SharePoint environment. Each farm in an environment is a self-encompassing unit made up of one or more servers, such as web servers, service application servers, and SharePoint database servers.

A SharePoint development farm is needed because developers in an organization that makes heavy use of SharePoint often need environments to test new applications, web parts, solutions, and other SharePoint customizations. These developers often need a sandbox area where farm-level features and solutions can be tested.

I have considered a two-tier topology for the SharePoint Foundation 2010 farm. However, it will be entirely based on the needs of your application. If your application is a relatively small intranet application, then you can choose a single-tier topology, or if you are going to integrate another search server with Foundation, then you can choose a three-tier topology with an application server as the middle tier (remember that SharePoint Foundation 2010 doesn’t include enterprise search). It may make sense to deploy one or more development farms so that developers have the opportunity to run their tests and develop software for SharePoint independent of the existing production environment.

There are basically two types of servers included in two-tier development farm of SharePoint Foundation 2010:

  1. Web server
  2. Content database server

In the above figure, there are three front-end web servers and one SharePoint content database server. However, you can choose a single front-end web server connected to the content database server, based on your application needs and the architecture of the production environment. All web servers share the same content database. This is called a two-tier deployment farm, where the SharePoint server components and the content database are installed on separate servers. As mentioned before, you can choose a one-tier, two-tier or three-tier deployment topology based on your application architecture and the topology of the production architecture.

Each web server has SharePoint Foundation 2010 and the SharePoint extension for TFS 2010 installed on it. It needs the SharePoint extension for TFS 2010 to connect with Team Foundation Server for source control, build management & project management.

Advantages of the Development SharePoint Farm:

  1. Single place where SharePoint Admin can integrate all the final artifacts from multiple developers.
  2. Developers can sync with the latest SharePoint site on their standalone developer workstations.
  3. Admin can easily approve artifacts and migrate to integration server.
  4. It is a unit testing environment for developers where they can test dependent functionality or farm level features.

Team Foundation Server

Team Foundation Server plays a key role in ALM, providing source control, build management and work item tracking. You can have TFS installed on the same server as the content database, but if you are going to use TFS build management, then it is advisable to have a separate Team Foundation Server because it uses the CPU intensively when processing builds.

As per the above figure, there is a separate Team Foundation Server which is connected to the SharePoint farm as well as the standalone development workstations, so that it can provide source control for customized content as well as developers’ artifacts and resources.

Advantages of TFS
  1. Source control for SharePoint artifacts and customization
  2. Build management for SharePoint
  3. Work item and bug tracking tool for SharePoint
  4. Admin console for all management activity
  5. Easy integration with SharePoint foundation server and VS 2010
  6. Easy check-in & check-out
  7. Web based console to manage ALM activity

Developer’s Workstation

As per the above figure, the developers’ environment includes two developer workstations. In practice, you can have as many workstations as your development team size requires.

Each developer workstation should have a Windows 7 or Windows Vista operating system with a standalone SharePoint Foundation server and a local content database, so that one developer’s work doesn’t affect another’s and each developer can debug artifacts locally.

Each developer workstation will have the following installed:

  1. Windows 7 or Windows Vista 64-bit OS
  2. Standalone SharePoint Foundation Server 2010
  3. SharePoint Designer 2010
  4. Visual Studio 2010 (connected to TFS)

Each developer workstation should be connected to Team Foundation Server 2010, so that when a developer completes an artifact, it can be checked in to TFS and other developers can take the latest code from TFS if needed. This way, parallel development can happen without affecting other developers’ work.

Integration/Testing Farm

Any production SharePoint environment should have a test environment in which new SharePoint web parts, solutions, service packs, patches, and add-ons can be tested. It is critical to deploy test farms, because many SharePoint add-ons could potentially disrupt or corrupt the formatting or structure of a production environment, and trying to test these new solutions on site collections or different web applications is not enough because the solutions often install directly on the SharePoint servers themselves. If there is an issue, the issue will be reflected in the entire farm.

Integration or testing server farm should be similar to the existing environments, with the same add-ons and solutions installed and should ideally include restores of production site collections to make it as similar as possible to the existing production environment. All changes and new products or solutions installed into an environment should subsequently be tested first in this environment.

The integration/testing servers will have the final SharePoint sites and site collections as per the business requirements. QA will test all the business functionality here. The customer can also do their user acceptance test (UAT) before going live on the production server.

After the user acceptance test has passed, all the sites & site collections will be deployed on the production server.

Advantages of the integration/testing server:

  1. Clean environments and same physical architecture as production
  2. QA can test all dependent business functionality at one place
  3. Customer can participate in UAT
  4. Easy deployment/migration from integration testing server to production server

Production Farm

The final stage is rolling your farm into a production environment. At this stage, you will have incorporated the necessary solution and infrastructure adjustments that were identified during the user acceptance test stage. These servers are generally on the customer’s premises. The development and testing teams do not have control over them.

There are various 3rd party tools available in the market for SharePoint data protection, administration, migration, compliance and integration.


Summary

So, in this way, you can design a physical architecture where the Development SharePoint Farm and developers’ workstations are integrated with TFS 2010. TFS and the content database are connected to the testing server or testing farm, where all the artifacts and content will be integrated for QA and UAT. Finally, after UAT, everything will be deployed on the production farm.

You can use VMs (virtual machines) for all the servers and workstations for an effective infrastructure, because if a server crashes for some reason, you can quickly create a new VM with the needed OS from images.

Note: In the above figure, the integration/testing farm and production farm are shown as single servers just for clarity, but in reality they will be as large as the development farm, with multiple front-end web servers and a content database server. All the server OSes are Windows Server 2008 R2 SP2 64-bit. Please visit here for more information on hardware & software requirements for SharePoint Foundation 2010.

How Microsoft’s Research Team is making Testing and the use of Pex & Moles Fun and Interesting

Try it out on the web

Go to www.pex4fun.com, and click Learn to start tutorials.

Main Publication to cite

Nikolai Tillmann, Jonathan De Halleux, Tao Xie, Sumit Gulwani, and Judith Bishop, Teaching and Learning Programming and Software Engineering via Interactive Gaming, in Proc. 35th International Conference on Software Engineering (ICSE 2013), Software Engineering Education (SEE), May 2013

 

Massive Open Online Courses (MOOCs) have recently gained high popularity among various universities and even in global societies. A critical factor for their success in teaching and learning effectiveness is assignment grading. Traditional ways of assignment grading are not scalable and do not give timely or interactive feedback to students.

 

To address these issues, we present an interactive-gaming-based teaching and learning platform called Pex4Fun. Pex4Fun is a browser-based teaching and learning environment targeting teachers and students for introductory to advanced programming or software engineering courses. At the core of the platform is an automated grading engine based on symbolic execution.

 

In Pex4Fun, teachers can create virtual classrooms, customize existing courses, and publish new learning material including learning games. Pex4Fun was released to the public in June 2010 and since then the number of attempts made by users to solve games has reached over one million.

 

Our work on Pex4Fun illustrates that a sophisticated software engineering technique – automated test generation – can be successfully used to underpin automatic grading in an online programming system that can scale to hundreds of thousands of users.

 

 

Code Hunt is an educational coding game.

Play, win levels, earn points!

Analyze with the capture code button

Code Hunt is a game! The player, the code hunter, has to discover missing code fragments. The player wins points for each level won with extra bonus for elegant solutions.

Code in Java or C#

Discover a code fragment

Play in Java or C#… or in both! Code Hunt allows you to play in those two curly-brace languages. Code Hunt provides a rich editing experience with syntax coloring, squiggles, search and keyboard shortcuts.

Learn algorithms

Discover a code fragment

As players progress through the sectors, they learn about arithmetic operators, conditional statements, loops, strings, search algorithms and more. Code Hunt is a great tool to build or sharpen your algorithm skills. Starting from simple problems, Code Hunt provides fun for the most skilled coders.

Graded for correctness and quality

Modify the code to match the code fragment

At the core of the game experience is an automated grading engine based on dynamic symbolic execution. The grading engine automatically analyzes the user code and the secret code to generate the result table.
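To make that concrete, a Code Hunt level is essentially a hidden “secret” implementation that the player’s code must match. A hypothetical pair in C# might look like this (real levels and signatures vary):

    // The secret implementation, known only to the game engine (hypothetical level).
    public class Secret
    {
        public static int Puzzle(int x) { return x * x + 1; }
    }

    // The player's current attempt. The grading engine generates test inputs with dynamic
    // symbolic execution, runs both methods, and reports any input where the results differ.
    public class Player
    {
        public static int Puzzle(int x) { return x * x; } // differs from the secret code, e.g. at x = 0
    }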

MOOCs with Office Mix

Add Code Hunt to your presentations

Code Hunt can be included in any PowerPoint presentation and published as an Office Mix Online Lesson. Use this PowerPoint template to create Code Hunt-themed presentations.

Web no installs, it just works

It just works

Code Hunt runs in most modern browsers including Internet Explorer 10, 11 and recent versions of Chrome and Firefox. Yup, it works on iPad.

Extras play your own levels

Play your own levels

Extra Zones with new sectors and levels can be created and reused. Read the designer usage manual to create your own zone.

Compete so you think you can code

Compete

Code Hunt can be used to run small-scale or large-scale, private or public, coding competitions. Each competition gets its own set of sectors and levels and its own leaderboard to determine the outcome. Please contact codehunt@microsoft.com for more information about running your own competition using Code Hunt.

Credits the team

Capture the working code fragment

Code Hunt was developed by the Research in Software Engineering (RiSE) group and Connections group at Microsoft Research. Go to our Microsoft Research page to find a list of publications around Code Hunt.

A Look At : Visual Studio 2013 Update 3 CTP2


New technology improvements in Visual Studio 2013 Update 3 CTP 2

 

Technology improvements

The following technology improvements were made in this release.

CodeLens

  • CodeLens jobs that are running on the Team Foundation Server job agent have been optimized for performance specifically while processing branching and merging changesets.

Debugger

  • If you have more than one monitor, Visual Studio will remember which monitor a Windows Store application was last run on.
  • You can debug x86 applications that are built by .NET Native.
  • When you analyze managed memory dump files, you can go to Definition and Find All References of the selected type.
  • You can debug dump files from .NET Native applications by using the Visual Studio debugger.

General

  • The Application Insights Tools for Visual Studio are now included in Visual Studio 2013 Update 3 CTP2. This initial integration as part of CTP2 includes some software updates and performance improvements.

IntelliTrace

  • You can skip straight to the details of performance events that are exported from Application Insights to IntelliTrace.

Profiler

  • The Performance and Diagnostics hub can open profiling sessions (.diagsession files) that were exported from the F12 tools in the latest developer preview of Internet Explorer 11.
  • Windows Presentation Foundation (WPF) and Win32 applications are supported by the new Memory Usage Tool in the Performance and Diagnostics Hub. For more information about how to use the tool to troubleshoot issues in native and managed memory, go to the following blog post:
    Diagnosing memory issues with the new Memory Usage Tool in Visual Studio

Release Management

  • You can use Windows PowerShell or the Windows PowerShell Desired State Configuration (DSC) feature to deploy and manage configuration data. Additionally, you can deploy to the following environments without having to set up Microsoft Deployment Agent:
    • Microsoft Azure environments
    • On-premises environments (standard environments)

Testing Tools

  • You can add custom fields and custom work flows for test plans and test suites.
  • You can use Manage Test Suites permission for granting access to test suites.
  • You can track changes to test plans and test suites by using work item history.

For more information about these features, see the following Visual Studio Developer Tools blog article:

Test Plan and Test Suite Customization with TFS2013 Update3

Visual Studio IDE

  • CodeLens authors and changes indicators are now available for Git repositories.
  • In Code Map, links are styled by using colors, and they display in the improved Legend.
  • Debugger Map automatically zooms to the call stack entry of interest and preserves user’s zoom preferences.
  • You can drag binaries from the Windows file explorer to a code map, and then start exploring binaries by using Code Map.

Known issues

Testing Tools

  • When you try to upgrade an existing TFS server that has Test management data to Visual Studio 2013 Team Foundation Server Update 3 CTP2 in JPN or CHS, the upgrade of Test Case Management service does not work.

Visual Studio IDE

  • In Visual Studio 2013 Ultimate Update 3 CTP2 localized (non en-us) drops, when trying to request a Code Map, or a Dependency Graph for the solution, the directed graph is not produced.

 

For more information on Visual Studio 2013 and other upgrades, visit http://support.microsoft.com/kb/2933779/en-us

Creating Your Own Document Management System With SharePoint and Dynamics AX


With the R2 release of Dynamics AX 2012, a new feature was quietly snuck into the product that allows you to store document attachments from Dynamics AX within SharePoint rather than within an archive location, or within the database. This opens up a whole slew of possibilities when it comes to document management within SharePoint.

In this example we will show how you can create a document management structure within SharePoint that you can use in conjunction with the Dynamics AX attachments feature, and also we will show a few tweaks that you can make that may make managing your documents within SharePoint just a little easier.

Creating a new Document Management Site

The first step that we are going to work through is the creation of a new Document Management site where we will put all of our Dynamics AX document attachments. We are just creating a site to separate out the documents from other items that you may already have stored within SharePoint.

How to do it…

To create a new Document Management Site in SharePoint, follow these steps:

  1. Open up the SharePoint Workspace that you want to use to house your Document Management site and from the Site Actions menu, select the New Site option.
  2. When the Site Templates are displayed, select the Blank Site template. Give your site a name, and also a sub-site name (probably the same as your site name). When you are done, click on the Create button to start the site creation process.

How it works…

When the site is created, you should be taken to a new blank site which you will be able to use as a document repository.

Creating Document Libraries for the Business Areas

The next step in the process is to create document libraries to store all of your documents away in. You could create one big library, or a number of smaller ones, broken out into groups based on business area or function. In this example we will do the latter because it will give us more flexibility with the indexing of the documents, and also make it easier to find particular documents.

How to do it…

To create document libraries for the business areas, follow these steps:

  1. From within your new Document Management site, select the New Document Library option from the Site Actions menu.
  2. When the Document Library Creation dialog box appears, give your library a Name, Description, and also set the Document Template to None. In this example we are creating a library for all of the AP Documents.
    When you are done, click on the Create button to start the document library creation process.

How it works…

After it finishes you should have a new library for you to use. You can repeat the process for all of the other business areas that you want to manage documents for – in our example we just used the standard business areas from the Dynamics AX navigation menu.

Creating Dynamics AX Document Types that Link to the Document Libraries

Once you have created your document libraries, you can connect them to Dynamics AX with the new SharePoint option so that the users are able to attach documents from the client and then store them within SharePoint for everyone to access.

How to do it…

To create a file attachment type that links to SharePoint, follow these steps:

  1. From the Organization administration area page, select the Document types menu item from the Document management folder of the Setup group.
  2. When the Document types maintenance form is displayed, click on the New button in the menu bar to create a new entry.
  3. We will start by creating a link for all of our generic Accounts Payable documents by giving our new Document type a Name and Description. In the Group select File from the dropdown options and select SharePoint for the Location option.
  4. Now return to your document libraries within SharePoint and copy the URL for the document library.
  5. Paste the URL into the Archive directory field.

    Note: Remove all of the extra parameters so that you are just referencing the base folder location (see the example after these steps). Also, if you click on the folder browser to the right of the Archive directory field, you can test the link to SharePoint.
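For example (the server, site, and library names here are placeholders, not values from this walkthrough), you would paste a base URL such as http://sharepoint/docmgmt/APDocuments rather than the full browser URL http://sharepoint/docmgmt/APDocuments/Forms/AllItems.aspx with its query-string parameters.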

How it works…

Now, if you attach a document, then you will see the option for your new document type.

It will allow you to attach any file that you have on your desktop.

And rather than showing you the thumbnail image, it will show you a reference link to your SharePoint document library.

After attaching the document, if you look within SharePoint, you will see your document is saved away for you.

You can repeat this process for all of your other document libraries that were created in the previous step.

Adding Columns to your Document Libraries for Better Indexing

One of the reasons why you want to start using SharePoint is so that you can take advantage of the indexing functionality to code and classify your documents. Now that you have people storing the documents away, it’s time to add some indexes to your document libraries.

How to do it…

To create new indexes for your document libraries, follow these steps:

  1. Open up your document libraries within SharePoint and select the Library ribbon bar. Then click on the Create Column button within the Manage Views group.
  2. When the Create Column form opens, set the Column Name to be the field that you want to index, select the type of the column, and also set the columns Description.
  3. Note: Sometimes it’s a good idea not to have spaces in the column name; later on, when we add filters, it becomes a little easier to manage this way.
  4. After you have finished defining the column, click on the OK button to add the column to your library.
  5. When you return back to your document library, there will be a new column on the form.
  6. Repeat the process for all of the columns that you want to use as index fields for the library.

    Note: All of the columns do not have to be used during the indexing process, so it’s OK to have variations of columns, like InvoiceNum, CreditNoteNum, etc.

How it works…

To edit the columns, select the options menu for the document, and choose the Edit Properties option.

This will allow you to update the fields that Dynamics AX did not populate initially.

Now when you look at the document within SharePoint, you will see the additional metadata that is associated with the document.

Embedding Document Libraries into Dynamics AX Forms

Now that we are able to index documents a little more effectively within SharePoint, we can go the extra step, and link SharePoint to our forms within Dynamics AX so that we are able to access them without even leaving the application. Doing this just requires a little bit of coding, but is well worth the effort.

Getting Started…

You can manipulate the information that is displayed by SharePoint, and also how it is displayed through the URL that you use.

If you filter any of the views, then you will notice that it uses two qualifiers – FilterFieldX and FilterValueX – to restrict the viewed records.

Also, if you add an IsDlg=1 qualifier, then all of the navigation areas are hidden, giving you a clean list of filtered documents.

This is the perfect type of view to embed within Dynamics AX.
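
As a purely hypothetical example (the site, library and column names are placeholders), a filtered, chrome-free view of an Accounts Payable library might be addressed as:

http://sharepoint/sites/AccountsPayable/APDocuments/Forms/AllItems.aspx?FilterField1=VendAccountNum&FilterValue1=US-1001&IsDlg=1

Only documents whose VendAccountNum column equals US-1001 are shown, and the IsDlg=1 qualifier strips away the SharePoint navigation.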

The other half of this step is to choose a form to add your document libraries to. In this case we will update the Vendors form.

How to do it…

To embed your SharePoint document libraries within Dynamics AX forms, follow these steps:

  1. Start the process by opening up AOT, and create a new project for this tweak.
  2. From the Forms area in the AOT, drag over the form that you want to add the documents to – in this case it’s the VendTable form.
  3. Expand out the form within the project, and navigate to the group that you want to add your document library view into.
  4. Right-mouse-click on the parent tab, and select the TabPane option from the New Control sub-menu.
  5. Reorder your tabs (ALT+UpArrow/DownArrow) so that they are in the sequence that you want and then give your new Tab Control a Name and Caption in the Properties section.
  6. Right-mouse-click on the new tab that you added for the document library and select the ActiveX control from the New Control sub menu.
  7. When the list of ActiveX controls is displayed, select the Microsoft Web Browser control, and click the OK button to add it.
  8. Rename your ActiveX control, and set the Width to Column width and the Height to Column height.
  9. Now we need to have Dynamics AX update the URL that is navigated to when the form is opened. To do this, right-mouse-click on the parent Methods group for the form, and select the activate method from the Override methods submenu.
  10. Update the activate method by building the URL that will define the specific document index that you want to show. You can now add conditional filters that pick up the record values and filter based on the current record – in this case, the vendor number.
  11. Once you have finished the update, save the project.

How it works…

Now when you open up the Vendor form, there will be a Documents tab that shows all of the documents that are associated with the current record.

If you select a record that does not have attached documents, then you will not see anything there.

How cool is that.

Creating Custom Views for the Document Libraries

Now that all of the heavy lifting has been done, you can start tweaking the SharePoint libraries and the way that the information is displayed. Based on the form that you are in, you may want to show only particular information. You can do that by creating new custom views.

How to do it…

To create a custom view for your document libraries, follow these steps:

  1. Open up your document libraries within SharePoint and select the Library ribbon bar. Then click on the Library Settings button within the Settings group.
  2. When the document library settings maintenance form is displayed, scroll down to the bottom of the page, and there will be a section for Views that will show you all of the different ways that the form could be displayed. In this case there is only one, but we can fix that by clicking on the Create View link.
  3. Select the Standard View option from the format templates that are displayed.
  4. Assign your new view a Name and select the fields that you want to be displayed in the view.
  5. Once you have made the changes, click on the OK button to save your new view.

How it works…

When you return to the document library you will be able to see the new format of the view.

You can then change the view within the URL of your project to make it the default view for the form.

Now when you see all of the documents within your Dynamics AX form you will see just the information that you need.

Using the SharePoint Designer to Edit Document Libraries

Although you can do everything that we have shown so far within SharePoint, you can also take advantage of the SharePoint Designer application to update your SharePoint document libraries. You don’t even have to search for the install kit, because it is embedded within your SharePoint site, just waiting to be downloaded and installed.

How to do it…

To access the SharePoint Designer to manipulate your SharePoint site, follow these steps:

  1. To use the SharePoint Designer to update your SharePoint site, just select the Edit in SharePoint Designer option from the Site Actions menu.

    Note: If you don’t have SharePoint Designer installed then it will ask you to install it, and download the kit directly from your SharePoint installation.

How it works…

When SharePoint Designer opens up, it will be connected to your current SharePoint site, showing you all of the libraries, etc. that you have been creating.

If you select the Lists and Libraries option from the navigation pad, you will be able to see all of the document libraries that you created in the previous steps.

Drilling into the libraries you will be able to also see all of the views etc. that you configured within SharePoint.

Creating New Content Types to Manage Document Types

When we set up our document libraries we deliberately created them so that all of the documents for a particular area are within the same library. This allows us to save multiple types of documents away within the library like Invoices, Credit Notes, Vendor Certificates etc. The way that we can identify the type of document is through the creation of Content Types.

How to do it…

To create custom Content Types to help make classification easier, follow these steps:

  1. Open up SharePoint Designer (although you can also do this within SharePoint itself), select Content Types from the navigation menu, and click on the Content Type menu button within the New group of the Content Types ribbon bar.
  2. When the Content Type creation form is displayed, give your Content Type a Name, and Description, select a parent content type, and also a group that you want to show the Content Type in.

    Note: For the first content type that you create, you may want to create a new Content Type Group so that it isn’t intermingled with all of the other content types.

  3. When you have finished creating your Content Type click on the OK button to add it to SharePoint.
  4. When you return to your SharePoint Designer workspace you will be able to see your new Content Type.
  5. Repeat the process for all of the other types of documents that you want to file away within SharePoint.
  6. Now we need to enable Content Types within our document libraries, and then assign them. To do this, open up your Document Library within SharePoint Designer, and within the Settings group, check the Allow management of content types check box.
  7. Then click on the Add button to the right of the Content Types group to open up the Content Type Picker. Find the new Content Types that you just created, and click on the OK button to assign them to your Document Library.
  8. Now the Content Type will show up as a valid option for the document library.
  9. Repeat the process for all of the other content types that you created.

How it works…

Now when you edit the properties for your documents, there will be a new indexing option for your documents that allows you to define the type of document that you are looking at.

Specifying Document Columns by Content Types to Simplify Indexing

There is an additional benefit that you get from using Content Types within SharePoint, which is the ability to specify what columns are applicable to different Content Types at the time of indexing. For example, you probably don’t want to specify an Invoice Number when indexing a Vendor Insurance Certificate, but you definitely would want to when indexing an Invoice or a Credit Note.

How to do it…

To modify your Content Types within your Document Libraries to only require certain columns to be indexed, follow these steps:

  1. From within your SharePoint Document Library (or from within SharePoint Designer) click on the Library Settings button within the Settings group of the Library ribbon bar.
  2. Within the Library Settings you will be able to see all of your Content Types that have been assigned. Select any of them to edit their options.
  3. When you first create the Content Types, they will have no columns assigned to them. Click on the Add from existing site or list columns link to assign the valid columns to your Content Type.
  4. When the Add columns to Content Type form is displayed, you will be able to see all of the available columns within the Document Library.
  5. Just select the ones that you want to use for the indexing, and then click the Add button. Once you have selected all the ones that you want to use, click on the OK button to save your changes.

How it works…

Now you will have indexing by Content Type.

Showing the Content Type in the Document View

Now that we are classifying documents by Content Type we might as well show it in the views so that we are able to differentiate different document types.

How to do it…

To add the Content Type field to our Document View, follow these steps:

  1. From within your SharePoint Document Library (or from within SharePoint Designer) click on the Modify View button within the Manage Views group of the Library ribbon bar.
  2. Now that the Content Type is enabled on our Document Library, it will show up on the list of valid columns. To add it to our view, just check the Display checkbox, and possibly change the order of the field so that it is first in the table.
  3. When you’re done, click on the OK button to update the view.

How it works…

Now the Content Type is shown in the document library view.

It will also show up when we browse to the documents within Dynamics AX.

How neat is that.

Grouping Records in the Document View by Key Columns

One final tweak that we will show within SharePoint is the ability to group columns within our document library views so that common information is shown together. These groupings can be different by view, and just make it a little easier to find information if we don’t initially filter the data.

How to do it…

To group records within your document library view, follow these steps:

  1. From within your SharePoint Document Library (or from within SharePoint Designer) click on the Modify View button within the Manage Views group of the Library ribbon bar.
  2. Scroll down your view definition until you see the Group By configuration options. Here you will be able to change the Group By fields.

How it works…

Now when you look at your documents, they will be classified by key fields.

Summary

In this walkthrough we have shown how you can:

  • Create a simple document management repository within SharePoint
  • Link the document attachments function within Dynamics AX to SharePoint to make the acquisition of the documents easier
  • Index your documents more effectively by defining custom columns
  • Embed SharePoint document libraries back into Dynamics AX forms
  • Tweak your views within SharePoint to make it easier to find and view documents

This is really just a starting point, and once you have mastered the basics, you can start investigating:

  • Assigning workflows to documents for approvals and updates
  • Enabling version control for your documents
  • Acquiring documents into SharePoint through scanning technologies
  • Linking the index column fields to Dynamics AX for validation of key information
  • And much more.

SharePoint is a great document management tool and can usually handle all of your document indexing needs – especially now that it is connected with Dynamics AX natively.

Tool to analyse and then upgrade your old SharePoint VBA Web Parts to Apps!!


Welcome to Microsoft VBA and SharePoint Code Analyzer

Now you can still make use of that old VBA code you have!!

This is an online tool where you can upload your files and generate reports collecting detailed statistics about your VBA and SharePoint source code, providing useful information about migrating VBA and SharePoint applications.

To analyze your files, follow these four simple steps:

How To : A library to create .mht files (available on request)

There are a number of ways to do this, including hosting Word or Excel on the Web Server and dealing with COM Interop issues, or purchasing third-party MIME encoding libraries, some of which sell for $250.00 or more. But there is no native .NET solution. So, being the curious soul that I am, I decided to investigate a bit and see what I could come up with. Internet Explorer offers a File / Save As option to save a web page as “Web Archive, single file (*.mht)”.


What this does is create an RFC-compliant Multipart MIME Message. Resources such as images are serialized to their Base64 inline encoding representations and each resource is demarcated with the standard multipart MIME header breaks. Internet Explorer, Word, Excel and most newsreader programs all understand this format. The format, if saved with the file extension “.eml”, will come up as a web page inside Outlook Express; if saved with “.mht”, it will come up in Internet Explorer when the file is double-clicked out of Windows Explorer, and — what many do not know — if saved with a “*.doc” extension, it will load in MS Word, each with all the images intact, and in the case of the EML and MHT formats, with all of the hyperlinks fully functioning. The primary advantage of the format is, of course, that all the resources can be consolidated into a single file, making distribution and archiving much easier — including database storage in an NVarchar or NText type field.
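
To make the structure concrete, here is a heavily truncated, purely illustrative sketch of what the inside of an .mht file looks like (the boundary string, URLs and content are made up):

From: <Saved by Windows Internet Explorer 8>
Subject: Example Page
MIME-Version: 1.0
Content-Type: multipart/related; boundary="----=_NextPart_000_0000"; type="text/html"

------=_NextPart_000_0000
Content-Type: text/html; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
Content-Location: http://www.example.com/page.htm

<html>... page markup ...</html>

------=_NextPart_000_0000
Content-Type: image/jpeg
Content-Transfer-Encoding: base64
Content-Location: http://www.example.com/logo.jpg

/9j/4AAQSkZJRgABAQEASABIAAD... (Base64 image data)
------=_NextPart_000_0000--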

 

System.Web.Mail, which .NET provides as a convenient wrapper around the CDO for Windows COM library, offers only a subset of the functionality exposed by the CDO library, and multipart MIME encoding is not a part of that functionality. However, through the wonders of COM Interop, we can create our own COM reference to CDO in the Visual Studio IDE, allowing it to generate a Runtime Callable Wrapper, and help ourselves to the entire rich set of functionality of CDO as we see fit.

 

One method in the CDO library that immediately came to my notice was the CreateMHTMLBody method. That’s MHTMLBody, meaning “Multipurpose Internet Mail Extension HTML (MHTML) Body”. Well! When I saw that, my eyes lit up like the LEDs on a 32-way Unisys box! This is a method on the CDO Message class; the method accepts a URI to the requested resource, along with some enumerations, and creates a multipart MIME-encoded email message out of the requested URI responses — including images, css and script — in one fell swoop.

 

“Ah”, you say, “How convenient”! Yes, and not only that, but we also get a free “multipart COM Interop Baggage” reference to the ADODB.Stream object – and by simply calling the GetStream method on the Message Class, and then using the Stream’s SaveToFile method, we can grab any resource including images, javascript, css and everything else (except video) and save it to a single MHT Web Archive file just as if we chose the “Save As” option out of Internet Explorer.

 

If we choose not to save the file, but instead want to get back the stream contents, no problem. We just call Stream.ReadText(Stream.Size) and it returns a string containing the entire MHT-encoded content. At that point we can do whatever we want with it – set a content header and Response.Write the content to the browser, for instance — or whatever.

 

For example, when we get back our “MHT” string, we can write the following code:

Response.ContentType = "application/msword";
Response.AddHeader("Content-Disposition", "attachment;filename=NAME.doc");
Response.Write(myDataString);

 

— and the browser will dutifully offer to save the file as a Word Document. It will still be Multipart MIME encoded, but the .doc extension on the filename allows Word to load it, and Word is smart enough to be able to parse and render the file very nicely. “Ah”, you are saying, “this is nice, and so is the price!”. Yup!

And, if you are serving this MIME-encoded file from out of your database, for example, and you would like it to be able to be displayed in the browser, just change the “NAME.doc” to “NAME.MHT”, and don’t set a content-type header. Internet Explorer will prompt the user to either save or open the file. If they choose “open”, it will be saved to the IE Temporary files and open up in the browser just as if they had loaded it from their local file system.

 

So, to answer a couple of questions that came up recently, yes — you can use this method to MHTML-encode any web page – even one that is dynamically generated, as with a report — provided it has a URL, and save the MIME-encoded content as a string in either an NVarchar or NText column in your database. You can then bring this string back out and send it to the browser, images, css, javascript and all.

Now here is the code for a small, very basic “Converter” class I’ve written to take advantage of the two scenarios specified above. Bear in mind, there is much more available in CDO, but I leave this wondrous trail of ecstatic discovery to your whims of fancy:

using System;
using System.Web;
using CDO;
using ADODB;
using System.Text;

namespace PAB.Web.Utils
{
    public class MIMEConverter
    {
        // Private ctor as our methods are all static here.
        private MIMEConverter()
        {
        }

        // Saves the page at the given URL to an .mht file on the server's file system.
        public static bool SaveWebPageToMHTFile(string url, string filePath)
        {
            bool result = false;
            CDO.Message msg = new CDO.MessageClass();
            ADODB.Stream stm = null;
            try
            {
                // Build the multipart MIME body from the URL, pulling in images, css and script.
                msg.MimeFormatted = true;
                msg.CreateMHTMLBody(url, CDO.CdoMHTMLFlags.cdoSuppressNone, "", "");
                stm = msg.GetStream();
                stm.SaveToFile(filePath, ADODB.SaveOptionsEnum.adSaveCreateOverWrite);
                msg = null;
                stm.Close();
                result = true;
            }
            catch
            {
                throw;
            }
            finally
            {
                // Cleanup here.
            }
            return result;
        }

        // Returns the MHT-encoded content of the page at the given URL as a string.
        public static string ConvertWebPageToMHTString(string url)
        {
            string data = String.Empty;
            CDO.Message msg = new CDO.MessageClass();
            ADODB.Stream stm = null;
            try
            {
                msg.MimeFormatted = true;
                msg.CreateMHTMLBody(url, CDO.CdoMHTMLFlags.cdoSuppressNone, "", "");
                stm = msg.GetStream();
                // Read the entire stream back as text instead of saving it to disk.
                data = stm.ReadText(stm.Size);
            }
            catch
            {
                throw;
            }
            finally
            {
                // Cleanup here.
            }
            return data;
        }
    }
}

 

NOTE: When using this type of COM Interop from an ASP.NET web page, it is important to remember that you must set the AspCompat="true" directive in the Page declaration or you will be very disappointed at the results! This forces the ASP.NET page to run in the STA threading model, which permits “classic ASP” style COM calls. There is, of course, a significant performance penalty incurred, but realistically, this type of operation would only be performed upon user request and not on every page request.
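
For instance, a page declaration with the attribute set might look like this (the Language, CodeBehind and Inherits values are just placeholders for whatever your page already uses):

<%@ Page Language="C#" AspCompat="true" CodeBehind="WebForm1.aspx.cs" Inherits="ConvertToMHT.WebForm1" %>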

The downloadable zip file below contains the entire class library and a web solution that will exercise both methods when you fill in a valid URI with protocol, and a valid file path and filename for saving on the server. Unzip this to a folder that you have named “ConvertToMHT” and then mark the folder as an IIS Application so that your request such as “http://localhost/ConvertToMHT/WebForm1.aspx” will function correctly. You can then load the Solution file and it should work “out of the box”. And, don’t forget – if you have an ASP.NET web application that wants to write a file to the file system on the server, it must be running under an identity that has been granted this permission.

How To : Make use of Application Insights (with a great VS extension available freely)

You can now add Application Insights telemetry right from Visual Studio to new or existing projects in 2 clicks or less.

This release of Application Insights Tools for Visual Studio is in PREVIEW; see Known Issues below.

Get Started

To get started with a new project, simply create a Web, Windows Store 8.1, or Windows Phone 8.0 project. In the New Project dialog, make sure that Add Application Insights to Project is checked.

To get started with an existing project, right-click on a Web, Windows Store 8.1, or Windows Phone 8.0 project in Solution Explorer and choose Add Application Insights Telemetry to Project.

That’s it! Then run your Web application locally (or deploy your application), and after 10-15 minutes, telemetry data will automatically start appearing in the Application Insights Portal in the Usage tab.

Additional project types are supported with partial automation (see Known Issues below).

Q & A

Q: I don’t see the Add Application Insights Telemetry to Project command.

A: The type of app you’re creating is not supported yet. Instead, add your application at the Application Insights portal.

Q: Oops. I closed the Application Insights browser. How do I get it back?

A: In Solution Explorer, in the context menu of the project, choose Open Application Insights Portal…

Q: What other telemetry can I log from my app?

A: You can log events and metrics. Take a look at Improve your application from live usage data

Known Issues

This release of Application Insights Tools for Visual Studio is in PREVIEW. There are a number of known issues, including:

  • If you are upgrading from version 0.6.56.3 of the Application Insights Telemetry SDK for Services using the Application Insights Tools for Visual Studio, you will need to manually copy any custom settings from Monitoring.CollectionPlan.config to ApplicationInsights.config file.
  • For Windows Store 8.1 C++ projects instrumented with Application Insights, updates for the “Application Insights Telemetry SDK for Windows Store Apps” from nuget.org do not show up in the “Manage NuGet Packages” dialog. You can install an updated nuget package from the “Online” section of the “Manage NuGet Packages” dialog. Or, you can execute this command in the Package Manager Console: update-package Microsoft.ApplicationInsights.Telemetry.WindowsStore

DRY Architecture, Layered Architecture, Domain Driven Design and a Framework to build great Single-Page Web Applications – Boilerplate Part 1

DRY – Don’t Repeat Yourself! – is one of the main principles of a good developer while developing software. We try to implement it everywhere, from simple methods to classes and modules. What about developing a new web-based application? We, as software developers, have similar needs when developing enterprise web applications.

Enterprise web applications need login pages, user/role management infrastructure, user/application setting management, localization and so on. Also, high-quality, large-scale software implements best practices such as Layered Architecture, Domain Driven Design (DDD) and Dependency Injection (DI). We also use tools for Object-Relational Mapping (ORM), Database Migrations, Logging… etc. When it comes to the User Interface (UI), it’s not much different.

Starting a new enterprise web application is hard work. Since all applications need some common tasks, we’re repeating ourselves. Many companies develop their own Application Frameworks or Libraries for such common tasks so that they do not re-develop the same things. Others copy parts of existing applications and prepare a starting point for their new application. The first approach is pretty good if your company is big enough and has time to develop such a framework.

As a software architect, I also developed such a framework in my company. But there is one point that bothers me: many companies repeat the same tasks. What if we could share more and repeat less? What if the DRY principle were implemented universally instead of per project or per company? It sounds utopian, but I think there may be a starting point for that!

What is ASP.NET Boilerplate?

http://www.aspnetboilerplate.com/

ASP.NET Boilerplate [1] is a starting point for new, modern web applications using best practices and the most popular tools. It aims to be a solid model, a general-purpose application framework and a project template. What does it do?

  • Server side
    • Based on latest ASP.NET MVC and Web API.
    • Implements Domain Driven Design (Entities, Repositories, Domain Services, Application Services, DTOs, Unit of Work… and so on)
    • Implements Layered Architecture (Domain, Application, Presentation and Infrastructure Layers).
    • Provides an infrastructure to develop reusable and composable modules for large projects.
    • Uses most popular frameworks/libraries as (probably) you’re already using.
    • Provides an infrastructure and make it easy to use Dependency Injection (uses Castle Windsor as DI container).
    • Provides a strict model and base classes to use Object-Relational Mapping easily (uses NHibernate, can work with many DBMSs).
    • Implements database migrations (uses FluentMigrator).
    • Includes a simple and flexible localization system.
    • Includes an EventBus for server-side global domain events.
    • Manages exception handling and validation.
    • Creates dynamic Web API layer for application services.
    • Provides base and helper classes to implement some common tasks.
    • Uses convention over configuration principle.
  • Client side
    • Provides two project templates: one for Single-Page Applications using Durandaljs, the other for a Multi-Page Application. Both templates use Twitter Bootstrap.
    • Most used libraries are included by default: Knockout.js, Require.js, jQuery and some useful plug-ins.
    • Creates dynamic javascript proxies to call application services (using dynamic Web API layer) easily.
    • Includes unique APIs for some common tasks: showing alerts & notifications, blocking the UI, making AJAX requests.

Besides this common infrastructure, the “Core Module” is being developed. It will provide a role- and permission-based authorization system (implementing the ASP.NET Identity Framework), a settings system and so on.

What ASP.NET Boilerplate is not?

ASP.NET Boilerplate provides an application development model with best practices. It has base classes, interfaces and tools that make it easy to build maintainable, large-scale applications. But…

  • It’s not one of those RAD (Rapid Application Development) tools that provide infrastructure for building applications without coding. Instead, it provides an infrastructure to code using best practices.
  • It’s not a code generation tool. While it has several features that build dynamic code at run time, it does not generate code.
  • It’s not an all-in-one framework. Instead, it uses well-known tools/libraries for specific tasks (like NHibernate for O/RM, Log4Net for logging, Castle Windsor as the DI container).

Getting started

In this article, I’ll show how to develop a Single-Page, responsive web application using ASP.NET Boilerplate (I’ll call it ABP from now on). This sample application is named “Simple Task System” and it consists of two pages: one for the list of tasks, the other for adding new tasks. A Task can be related to a person and can be marked as completed. The application is localized in two languages. A screenshot of the Task List page is shown below:

A screenshot of 'Simple Task System'

Creating empty web application from template

ABP provides two templates to start a new project (even though you can manually create your project and get the ABP packages from NuGet, the template way is much easier). Go to www.aspnetboilerplate.com/Templates to create your application from one of two templates (one for SPA (Single-Page Application) projects, one for MPA (classic, Multi-Page Application) projects):

Creating template from ABP web site

I named my project SimpleTaskSystem and created a SPA project. The project was downloaded as a zip file. When I open the zip file, I see that a solution is ready, containing assemblies (projects) for each layer of Domain Driven Design:

Project files

The created project targets .NET Framework 4.5.1, and I advise opening it with Visual Studio 2013. The only prerequisite to be able to run the project is to create a database. The SPA template assumes that you’re using SQL Server 2008 or later, but you can easily change it to another DBMS.

See the connection string in the web.config file of the web project:

<add name="MainDb" connectionString="Server=localhost; Database=SimpleTaskSystemDb; Trusted_Connection=True;" />
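
The template uses a trusted connection by default. If, for example, you wanted to use SQL Server authentication instead (the server name, user and password below are placeholders), the entry might look something like this:

<add name="MainDb" connectionString="Server=.\SQLEXPRESS; Database=SimpleTaskSystemDb; User Id=stsUser; Password=yourStrongPassword;" />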

You can change the connection string here. I don’t change the database name, so I’m creating an empty database, named SimpleTaskSystemDb, in SQL Server:

Empty database

That’s it, your project is ready to run! Open it in VS2013 and press F5:

First run

The template consists of two pages: one for the Home page, the other for the About page. It’s localized in English and Turkish. And it’s a Single-Page Application! Try to navigate between the pages; you’ll see that only the content changes, the navigation menu stays fixed, and all scripts and styles are loaded only once. It’s also responsive – try changing the size of the browser.

Now, I’ll show how to change the application into a Simple Task System application, layer by layer, in the coming Part 2.

Great Agile Development Tool – Agile Planner

Great Agile Development project – http://agileplanner.codeplex.com/

Project Description

This project is to develop an iteration planning tool for agile project management.

What’s new?

Release: 1.0.0, runs in Visual Studio integrated mode. See How to use Agile Planner below for details.

What is Agile Planner?

This tool is for agile project teams who currently use sticky notes on the wall. With this tool, stories, the backlog and iterations are managed in a graphic designer, saved as files within Visual Studio projects, and can be exported to images, reports, etc.

Main features are

  • Stories can be dragged and dropped between the backlog and iterations
  • An iteration’s capacity is calculated automatically based on the stories within it
  • Stories are rendered based on a customizable status or priority color schema
  • The diagram can be exported to a PNG image for printing, documentation and sharing

Here are examples.

Stories rendered based on status
agile-stories-on-status-s.png

Stories rendered based on priority
agile-stories-on-priority-s.png

 

How to use Agile Planner

1. Install
To install Agile Planner,

  • Download the runtime binary zip file from the latest releases
  • Extract all files from the runtime binary zip file
  • Run the windows installer AgilePlanner.msi (requires elevated command prompt under Vista and administrator on XP/2003).

2. Get Started
To start using Agile Planner in a Visual Studio 2008 project,

  • Start Visual Studio 2008, create new project or load existing project
  • Right click the project name and select menu “Add | New Item …”

add-new-item.PNG

  • Select AgilePlanner
  • Dismiss the security warning if it shows up

You will be presented with a design environment like the one below.

designer.PNG

  1. Graphical designer for iterations and stories
  2. Toolbox for iterations and stories
  3. Treeview Explorer for iterations and stories
  4. Property window for iterations and stories

3. Plan your project’s iterations

  • Create iterations by dragging iteration tool from toolbox to the graphical designer
  • Create Stories by dragging story tool from toolbox to the backlog and iterations
  • Edit stories’ properties such as name, capacity, priority and status in the property window

add-stories.PNG

Notice: the capacity of an iteration is updated automatically after stories are dragged between iterations and after the stories’ capacity property is updated, so that you can balance the workload between iterations.

4. Render the diagram
The stories can be colored based on either their status or priority. To switch between these two options, right-click the diagram and select the “Color on Status” or “Color on Priority” menu. The color schemas are customizable as a property of the project.

render-in-priority.PNG

5. Export
The rendered diagram can be exported to a PNG file by right-clicking on the diagram and selecting the “Export to image” menu.

exported.png

How to use it?

See how to use

How To : Use SharePoint Dashboards & MSRS Reports for your Agile Development Life Cycle

The Problem We Solve

Agile BI is not a term many would associate with MSRS Reports and SharePoint Dashboards. While many organizations first turn to the Microsoft BI stack because of its familiarity, stitching together Microsoft’s patchwork of SharePoint, SQL Server, SSAS, MSRS, and Office creates administrative headaches and requires considerable time spent integrating and writing custom code.

This Showcase outlines the ease of accomplishing three of the most fundamental BI tasks with LogiXML technology as compared to MSRS and SharePoint:

  • Building a dashboard with multiple data sources
  • Creating interactive reports that reduce the load on IT by providing users self-service
  • Integrating disparate data sources

Read below to learn how an agile BI methodology can make your life much easier when it comes to dashboards and reports. Don’t feel like reading?

Building a Dashboard with LogiXML vs. MSRS + SharePoint

Microsoft’s only solution for dashboards is to either write your own code from scratch, manipulate SharePoint to serve a purpose for which it wasn’t initially designed, or look to third party apps. Below are some of the limitations to Microsoft’s approach to dashboards:

  • Limited Pre-Built Elements: Microsoft components come with only limited libraries of pre-built elements. In addition to actual development work, you will need to come up with an idea of how everything will work together. This necessitates becoming familiar with best practices in dashboards and reporting.
  • Sophisticated Development Expertise Required: While Microsoft components provide basic capabilities, anything more sophisticated is development resource-intensive and requires you to take on design, execution, and delivery. Any complex report visualizations and logic, such as interactive filters, must be written in code by the developer.
  • Limited Charts and Visualizations: Microsoft has a smaller sub-set of charts and visualization tools. If you want access to the complete library of .NET-capable charts, you’ll still need to OEM another charting solution at additional expense.
  • Lack of Integrated Workflow: Microsoft does not include workflow features sets out of the box in their BI offering.

LogiXML technology is centered on Logi Studio: an elemental, agile BI design environment which lets you simply choose from hundreds of powerful and configurable pre-built elements. Logi’s pre-built elements equip developers with tools to speed development, as well as the processes and logic required to build and manage BI projects. Below is a screen shot of the Logi Studio while building new dashboards.

agile-bi.jpg

Start a free LogiXML trial now.

Logi developers can easily create static or user-customizable dashboards using the Dashboard element. A dashboard is a collection of panels containing Logi reports, which in turn contain table, charts, images, etc. At runtime, the user can customize the dashboard by rearranging these panels on the browser page, by showing or hiding them, and even by changing their contents using adjustable reporting criteria. The data displayed within the panels can be configured, as in any Logi report, to link to other reports, providing drill-down functionality.

 

logi2.jpg

The dashboard displayed above has tabs and user customization enabled. The Dashboard element provides customization features, such as drag-and-drop panel positioning, support for built-in parameters the user can access to adjust the panel’s data contents, and a panel selection list that determines which panels will be displayed. AJAX techniques are utilized for web server interactions, allowing selective updates of portions of the dashboard. Dashboard customizations can be saved on an individual-user basis to create a highly personalized view of the data.

The Dashboard Wizard

The ‘Create a Dashboard’ wizard assists developers in creating dashboards by populating the report definition with the necessary dashboard-related elements. You can easily point to any data source by selecting from a variety of DataLayer types, including SQL, StoredProcedures, Web Services, Files, and more. A simple to use drag and drop SQL Query builder is also integrated, to offer a guided approach to constructing queries when connecting to your database.

logi3.jpg

Using the Dashboard Element

The Dashboard element is used to create the top level structure for all of your interactive panels within the final output. Under your dashboards, you can optionally add any number of Dashboard Panels, Panel Parameters for dynamic filtering, and even automatic refresh features with AJAX-based refresh timers.

logi4.jpg

Changing Appearance Using Themes and Style Sheets

The appearance of a dashboard can be changed easily by assigning a theme to your report. In addition, or as an alternative, you can change dashboard appearance using style. The Dashboard element has its own Cascading Style Sheet (CSS) file containing predefined classes that affect the display colors, font sizes, button labels, and spacing seen when the dashboard is displayed. You can override these classes by adding classes with the same name to your own style sheet file.

See us build a BI app with 3 data sources in under 10 minutes.

Ad Hoc Reporting Creation with LogiXML: Analysis Grid

The Analysis Grid is a managed reporting feature giving end users virtual ad hoc capability. It is an easy to use tool that allows business users to analyze and manipulate data and outputs in multiple and powerful ways.

logi5.jpg

Start a free LogiXML trial now.

Create an Analysis Grid by using the “Create Analysis Grid” wizard, or by simply adding the AnalysisGrid element into your definition file. Like the dashboard, data for the Analysis Grid can be accessed from any of the data options, including SQL databases, web sources, or files. You also have the option to launch the interactive query builder wizard for easy, drag-drop, SQL query creation.

The Analysis Grid is composed of three main parts: the data grid itself, i.e. a table of data to be analyzed; various action buttons at the top, allowing the user to perform actions such as create new columns with custom calculations, sort columns, add charts, and perform aggregations; and the ability to export the grid to Excel, CSV, or PDF format.

The Analysis Grid makes it easy to perform what-if analyses through features like filtering. The Grid also makes data-presentation impactful through visualization features including data driven color formatting, inline gauges, and custom formula creation.

Ad Hoc Reporting Creation with Microsoft

While simple ad hoc capabilities, such as enabling the selection of parameters like date ranges, can be accomplished quickly and easily with Microsoft, more sophisticated ad hoc analysis is challenging due to the following shortcomings.

Platform Integration Problems

The Microsoft BI strategy is not unified and is strongly tied to SQL Server. To obtain analysis capabilities, you must build cubes through Analysis Services, which is a separate product with its own, different security architecture. Next, you will need to build reports that talk to SQL Server, also using separate products.

Dashboards require a SharePoint portal which is, again, a separate product with separate requirements and licensing. If you don’t use this, you must completely code your dashboards from scratch. Unfortunately, Microsoft Reporting Services doesn’t play well with Analysis Services or SharePoint since these were built on different technologies.

SharePoint itself offers an out of the box portal and dashboard solution but unfortunately with a number of significant shortcomings. SharePoint was designed as a document management and collaboration tool as opposed to an interactive BI dashboard solution. Therefore, in order to have a dashboard solution optimized for BI, reporting, and interactivity you are faced with two options:

  • Build it yourself using .NET and a combination of third party components
  • Buy a separate third party product

Many IT professionals find these to be rather unappealing options, since they require evaluating a new product or components, and/or a lot of work to build and make sure it integrates with the rest of the Microsoft stack.

Additionally, while SQL Server and other products support different types of security architectures, Analysis Services only has support for using integrated Windows NT security models to access cubes and therefore creates integration challenges.

Moreover, for client/ad hoc tools, you need Report Writer, a desktop product, or Excel – another desktop application. In addition to requiring separate licenses, these products don’t even talk to one another in the same ways, as they were built by different companies and subsequently acquired by Microsoft.

Each product requires a separate and often disconnected development environment with different design and administration features. Therefore to manage Microsoft BI, you must have all of these development environments available and know how to use them all.

Integration of Various Data Sources: LogiXML vs. Microsoft

LogiXML is data neutral, allowing you to easily connect to all of your organization’s data spread across multiple applications and databases. You can connect with any data source or data model and even combine data sources such as current data accessed through a web service with past data in spreadsheets.

Integration of Various Data Sources with Microsoft

Working with Microsoft components for BI means you will be faced with the challenge of limited support for non-Microsoft based databases and outside data sources. The Microsoft BI stack is centered on SQL Server databases and therefore the data source is optimized to work with SQL Server. Unfortunately, if you need outside content it can be very difficult to integrate.

Finally, Microsoft BI tools are designed with the total Microsoft experience in mind and are therefore optimized for Internet Explorer. While other browsers and devices might be useable, the experience isn’t optimized and may potentially lack in features or visualize differently.

 

Free & Licensed Windows 8, Azure, Office 365, SharePoint On-Premise and Online Tools, Web Parts, Apps available.
For more detail visit https://sharepointsamurai.wordpress.com or contact me at tomas.floyd@outlook.com

Introduction to the Unified Logging Service and Creating a Javascript Logging System

Microsoft SharePoint Foundation exposes a rich logging mechanism known as the Unified Logging Service (ULS) that enables developers to write useful information helping them to identify and troubleshoot issues during the application lifecycle. The ULS writes SharePoint Foundation events to the SharePoint Trace Log, and stores them in the file system, typically inside the SharePoint root folder in files named \14\LOGS\SERVER-YYYYMMDD-HHMM.log.

ULS exposes a rich managed object model enabling developers to specify their own configurations such as categories and severity while writing exceptions or trace message to the ULS logs. You can find more details on the managed API in the article Writing to the Trace Log from Custom Code.

With the evolution of a rich client object model in SharePoint 2010 that enables developers to build complex client applications, it is very important to write useful information that is not visible in the user interface but is recorded on the server so it can be monitored by administrators and developers.

To address these scenarios for applications running in thin-client browsers, SharePoint Foundation provides a web service named SharePoint Diagnostics (diagnostics.asmx). This web service enables a client application to submit diagnostic reports directly to the ULS logs.

This article focuses on how you can leverage the SharePoint Diagnostics web service to write trace messages from a custom JavaScript application into the ULS logs.

The following points are discussed:

  • Overview of the SendClientScriptErrorReport web method
  • Creating a simple JavaScript application to log trace messages by using SharePoint Diagnostics web service
  • Setting up the required configurations for enabling logging via the Diagnostics web service
  • Using the application
  • Using the ULS logging script with sandboxed solutions

The Diagnostics web service exposes a single method named SendClientScriptErrorReport that enables client applications to report errors to the ULS service. The following list summarizes the parameters required by the SendClientScriptErrorReport method, together with an example value for each.

  • Message – A string containing the message to display to the client. Example: The value of the displaypage property is null or undefined; not a function object.
  • File – The URL file name associated with the current error. Example: customscript.js
  • Line – A string containing the line of code from which the error is being generated. Example: 9
  • Client – A string containing the client name that is experiencing the error. Example: <client><browser name='Internet Explorer' version='9.0'></browser><language>en-us</language></client>
  • Stack – A string containing the call-stack information from the generated error. Example: <stack><function depth='0' signature='myFunction()'>function myFunction() { displaypage(); }</function></stack>
  • Team – A string containing a team or product name. Example: Custom SharePoint Application
  • originalFile – The physical file name associated with the current error. Example: customscript.js

In the list above, notice that the example values for Client and Stack depict XML fragments, not single lines of text. This information is stated in the protocol specification documented in 3.1.4.1.2.1 SendClientScriptErrorReport. Even though the protocol specification for these parameters requires a valid XML fragment, the web-service call to this method still succeeds even if the values supplied for these parameters do not follow this schema, although creating the client and stack in this way adds more information to the trace.

The parameter list above shows that, unlike the managed API, the SendClientScriptErrorReport web method does not provide any option to specify the category or severity of the message being logged. Also, looking at the method name and description, it appears that the exception logged should specify the severity level as Error. However, any message logged through the SharePoint Diagnostics web service is always displayed under the category Unified Logging Service and has a trace log severity level set to Verbose.

Later in this article, you will see the steps required to view the traces written through the SharePoint Diagnostics web service.

In this section, you create a JavaScript application that uses the Diagnostics web service to report errors to the ULS. The application contains a JavaScript file named ULSLogScript.js that contains the necessary functions to communicate and log traces to the Diagnostics web service. These functions are then called directly from any consumer script.

Note
This is a relatively simple application with just one file, so you are not creating a formal SharePoint solution; instead, you save the files directly to the Layouts directory in the SharePoint hive structure.

To create a JavaScript library containing the ULS logging logic

  1. Start Microsoft Visual Studio 2010.
  2. From the File menu, create a new JScript file and save it in the following path: <SharePoint Installation Folder>\14\TEMPLATE\LAYOUTS\LoggingSample\ULSLogScript.js.

    For example, C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\LAYOUTS\LoggingSample\ULSLogScript.js.

    Note
    You need to create a new directory named LoggingSample in the Layouts folder.
  3. Because you are using the JQuery library in the application, download the jquery-1.6.4.min.js file from the JQuery portal and add it to the LoggingSample folder created previously.
  4. Type or paste the following code into the ULSLogScript.js file.
    // Creates a custom ulslog object 
    // with the required properties.
    function ulsObject() {
        this.message = null;
        this.file = null;
        this.line = null;
        this.client = null;
        this.stack = null;
        this.team = null;
        this.originalFile = null;
    }
    

    The ulsObject function returns a new instance of a custom object with properties mapped to the parameters required by the SendClientScriptErrorReport method. This object is used throughout the script for performing various operations.

  5. Define the methods that populate the property values specified in the ulsObject method. Begin by defining the function that retrieves the client property. Following the ulsObject method, type or paste the following code.
    // Detecting the browser to create the client information
    // in the required format.
    function getClientInfo() {
        var browserName = '';
    
        if (jQuery.browser.msie)
            browserName = "Internet Explorer";
        else if (jQuery.browser.mozilla)
            browserName = "Firefox";
        else if (jQuery.browser.safari)
            browserName = "Safari";
        else if (jQuery.browser.opera)
            browserName = "Opera";
        else
            browserName = "Unknown";
    
        var browserVersion = jQuery.browser.version;
        var browserLanguage = navigator.language;
        if (browserLanguage == undefined) {
            browserLanguage = navigator.userLanguage;
        }
    
        var client = "<client><browser name='{0}' version='{1}'></browser><language>{2}</language></client>";
        client = String.format(client, browserName, browserVersion, browserLanguage);
     
        return client;
    }
    
    // Utility function to assist string formatting.
    String.format = function () {
        var s = arguments[0];
        for (var i = 0; i < arguments.length - 1; i++) {
            var reg = new RegExp("\\{" + i + "\\}", "gm");
            s = s.replace(reg, arguments[i + 1]);
        }
    
        return s;
    }
    

    The getClientInfo function uses the JQuery library to detect the current browser properties, such as the name and version, and then creates a XML fragment (as discussed previously) describing the browser details where the application is currently running. Additionally, a utility function named String.format assists string formatting through the code.

  6. Next, you need a function to create the call stack for the exception raised in the script. Add the following functions to the ULSLogScript.js code.
    // Creates the callstack in the required format 
    // using the caller function definition.
    function getCallStack(functionDef, depth) {
        if (functionDef != null) {
            var signature = '';
            functionDef = functionDef.toString();
            signature = functionDef.substring(0, functionDef.indexOf("{"));
            if (signature.indexOf("function") == 0) {
                signature = signature.substring(8);
            }
    
            if (depth == 0) {
                var stack = "<stack><function depth='0' signature='{0}'>{1}</function></stack>";
                stack = String.format(stack, signature, functionDef);
            }
            else {
                var stack = "<stack><function depth='1' signature='{0}'></function></stack>";
                stack = String.format(stack, signature);
            }
    
            return stack;
        }
    
        return "";
    }
    

    The getCallStack function receives the function definition where the exception occurred and a depth as a parameter. The depth parameter is used by the function to decide if only the caller function signature is required or the complete function definition is to be included. Based on the caller function definition, the getCallStack function extracts the required information such as the signature, body, and creates an XML fragment as described in the protocol specification.

  7. Next, create a function that creates a SOAP packet in the format expected by the Diagnostics web service SendClientScriptErrorReport method. Type or paste the following functions in the ULSLogScript.js file.
    // Creates the SOAP packet required by SendClientScriptErrorReport
    // web method.
    function generateErrorPacket(ulsObj) {
        var soapPacket = "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
                            "<soap:Envelope xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" " +
                                           "xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" "+
                                           "xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                              "<soap:Body>" +
                                "<SendClientScriptErrorReport " +
                                  "xmlns=\"http://schemas.microsoft.com/sharepoint/diagnostics/\">" +
                                  "<message>{0}</message>" +
                                  "<file>{1}</file>" +
                                  "<line>{2}</line>" +
                                  "<stack>{3}</stack>" +
                                  "<client>{4}</client>" +
                                  "<team>{5}</team>" +
                                  "<originalFile>{6}</originalFile>" +
                                "</SendClientScriptErrorReport>" +
                              "</soap:Body>" +
                            "</soap:Envelope>";
     
        soapPacket = String.format(soapPacket, encodeXmlString(ulsObj.message), encodeXmlString(ulsObj.file), 
                     ulsObj.line, encodeXmlString(ulsObj.stack), encodeXmlString(ulsObj.client), 
                     encodeXmlString(ulsObj.team), encodeXmlString(ulsObj.originalFile));
     
        return soapPacket;
    }
    
    // Utility function to encode special characters in XML.
    function encodeXmlString(txt) {
        txt = String(txt);
        txt = jQuery.trim(txt);
        txt = txt.replace(/&/g, "&amp;");
        txt = txt.replace(/</g, "&lt;");
        txt = txt.replace(/>/g, "&gt;");
        txt = txt.replace(/'/g, "&apos;");
        txt = txt.replace(/"/g, "&quot;");
     
        return txt;
    }
    

    The generateErrorPacket function receives an instance of the ulsObj object and returns the SOAP packet for the SendClientScriptErrorReport function as a string in the expected format. Because the values for the some parameters are expected as an XML fragment, the encodeXmlString function is used to encode the special characters.

  8. When the SOAP packet has been defined, you need a function to issue an asynchronous request to the Diagnostics web service. Add the code below to the ULSLogScript.js file.
    // Function to form the Diagnostics service URL.
    function getWebSvcUrl() {
        var serverurl = location.href;
        if (serverurl.indexOf("?") != -1) {
            serverurl = serverurl.replace(location.search, '');
        }
     
        var index = serverurl.lastIndexOf("/");
        serverurl = serverurl.substring(0, index); // Strip the page name, keeping the site URL.
        serverurl = serverurl.concat('/_vti_bin/diagnostics.asmx');
     
        return serverurl;
    }
    
    // Method to post the SOAP packet to the Diagnostic web service.
    function postMessageToULSSvc(soapPacket) {
        $(document).ready(function () {
            $.ajax({
                url: getWebSvcUrl(),
                type: "POST",
                dataType: "xml",
                data: soapPacket, //soap packet.
                contentType: "text/xml; charset=\"utf-8\"",
                success: handleResponse, // Invoke when the web service call is successful.
                error: handleError// Invoke when the web service call fails.
            });
        });
    }
    
    // Invoked when the web service call succeeds.
    function handleResponse(data, textStatus, jqXHR) {
        // Custom code...
        alert('Successfully logged trace to ULS');
     }
     
    // Invoked when the web service call fails.
    function handleError(jqXHR, textStatus, errorThrown) {
        //Custom code...
            alert('Error occurred in executing the web request');
    }
    

    The postMessageToULSSvc function performs an asynchronous HTTP request and posts the SOAP packet to the Diagnostics web service. The URL of the Diagnostics web service is dynamically constructed in the getWebSvcUrl function. The postMessageToULSSvc function also defines respective handlers for success or error responses. Instead of displaying alerts in the handlers, other logic can be written as required by the application.

  9. Finally, you need a function that is invoked automatically when an error occurs in the code. To register this function globally for all the JavaScript functions on the page, you attach this function to the window.onerror event. Add the following lines of code at the top of the ULSLogScript.js file.
    // Registering the ULS logging function on a global level.
    window.onerror = logErrorToULS;
    
    // Set default value for teamName.
    var teamName = "Custom SharePoint Application";
    
    // Further add the logErrorToULS method at the end of the script.
    
    // Function to log messages to Diagnostic web service.
    // Invoked by the window.onerror message.
    function logErrorToULS(msg, url, linenumber) {
        var ulsObj = new ulsObject();
        ulsObj.message = "Error occurred: " + msg;
        ulsObj.file = url.substring(url.lastIndexOf("/") + 1); // Get the current file name.
        ulsObj.line = linenumber;
        ulsObj.stack = getCallStack(logErrorToULS.caller); // Create error call stack.
        ulsObj.client = getClientInfo(); // Create client information.
        ulsObj.team = teamName; // Declared in the consumer script.
        ulsObj.originalFile = ulsObj.file;
    
        var soapPacket = generateErrorPacket(ulsObj); // Create the soap packet.
        postMessageToULSSvc(soapPacket); // Post to the web service.
    
        return true;
    }
    

    The line window.onerror = logErrorToULS links the function logErrorToULS with the window.onerror event. This enables you to capture the required information such as the error message, line number, and error file. The teamName variable enables you to set a unique value with respect to the calling application. This can be overridden in the consumer scripts. The logErrorToULS function creates an instance of the ulsObj object and populates all of its properties. Here, you see that the stack property of the ulsObj object is set to logErrorToULS.caller. This provides the function definition of the method that invoked this function. The postMessageToULSSvc function is called to write the error information to the trace logs.

    Note
    Because you cannot specify the security level of the trace message in the SendClientScriptErrorReport method, the message property of the ulsObj object is prepended with text indicating that the message logged is part of an exception.
  10. The logErrorToULS function is called automatically when an error occurs on the page, but to intentionally write a trace message to the ULS, you need one more function which can be called specifically. Add the following function just below the logErrorToULS function.
    // Function to log message to Diagnostic web service.
    // Specifically invoked by a consumer method.
    function logMessageToULS(message, fileName) {
        if (message != null) {
            var ulsObj = new ulsObject();
            ulsObj.message = message;
            ulsObj.file = fileName;
            ulsObj.line = 0; // We don't know the line, so we set it to zero.
            ulsObj.stack = getCallStack(logMessageToULS.caller);
            ulsObj.client = getClientInfo();
            ulsObj.team = teamName;
            ulsObj.originalFile = ulsObj.file;
    
            var soapPacket = generateErrorPacket(ulsObj);
            postMessageToULSSvc(soapPacket);
        }
    }
    

    Unlike the logErrorToULS function, the logMessageToULS function accepts the message to be logged and the file name where the error occurred as parameters.

So far, you have created the required logic to write trace messages or exceptions to the ULS logs. Now you need to write a function that consumes the logErrorToULS or logMessageToULS functions.

To create the consumer application

  1. Navigate to your SharePoint site.
  2. Create a new Web Parts page.
  3. Add a Content Editor Web Part in any of the available Web Part zones.
  4. Edit the Web Part and type or paste the following text in the HTML source.
    <script src="/_layouts/LoggingSample/jquery-1.6.4.min.js" type="text/javascript"></script>
     <script src="/_layouts/LoggingSample/ULSLogScript.js" type="text/javascript"></script>
     <script type="text/javascript">
            var teamName = "Simple ULS Logging";
            function doWork() {
                unknownFunction();
            }
            function logMessage() {
                logMessageToULS('This is a trace message from CEWP', 'loggingsample.aspx');
            }
     </script>
    
    <input type="button" value="Log Exception" onclick="doWork();" />
        <br /><br />
      <input type="button" value="Log Trace" onclick="logMessage();" />
    
    

    This HTML code contains the required script references to include the jQuery library and the ULSLogScript.js file that you created in the previous section. It also contains two inline JavaScript functions and the respective input buttons to invoke them.

    To demonstrate exception handling, the doWork function makes a call to an unknownFunction function that does not exist. This invokes an exception that is intercepted and logged by the ULSLogScript.js code. To demonstrate message logging, the logMessage function calls the logMessageToULS function to write trace messages to ULS.

  5. Exit the web page design mode.
  6. Save the Web Parts page.
Finally, you need to configure the Diagnostic Logging Service in SharePoint Central Administration to ensure that the traces and exceptions logged from the Diagnostics web service are visible in the ULS logs.

To configure the Diagnostic Logging Service

  1. Open SharePoint Central Administration.
  2. From the Quick Launch, click Monitoring.
    Figure 1. Click the Monitoring option


  3. On the monitoring page, in the Reporting section, click Configure diagnostic logging.
    Figure 2. Click Configure diagnostic logging


  4. From all categories, expand the SharePoint Foundation category.
    Figure 3. Expand the SharePoint Foundation category


  5. Select the Unified Logging Service category.
    Figure 4. Select Unified Logging Service


  6. In the Least critical event to report to the trace log list, select Verbose.
    Figure 5. In the dropdown list, select Verbose


  7. Click OK to save the configuration.

The server is now ready to log traces sent by the Diagnostics web service to ULS. These traces appear under the category Unified Logging Service with a severity set to Verbose.

In this section, you test the application by raising an exception that is logged to the ULS.

To test the logging application

  1. Click the Log Exception button inside the Content Editor Web Part (CEWP).
    Figure 6. Click the Log Exception button


  2. An alert indicates that the message has been logged successfully to ULS.
    Figure 7. Confirmation message


  3. To see the exception details in the ULS logs, navigate to the Logs folder in the SharePoint hive ({SP Installation Path}\14\LOGS\).
  4. Because multiple log files can be present in the Logs folder, perform a descending sort on the Date modified field.
  5. Open the most recent log file in a text editor such as Notepad and then search for Simple ULS Logging (the team name specified previously). Now you should see all the web service parameters as supplied from the client application, from Message to OriginalFileName, in the following text:

    10/14/2011 21:00:37.87 w3wp.exe (0x097C) 0x14DC SharePoint Foundation Unified Logging Service a084 Verbose Message: Error occurred: The value of the property 'unknownFunction' is null or undefined, not a Function object 543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87 w3wp.exe (0x097C) 0x14DC SharePoint Foundation Unified Logging Service a085 Verbose File: ULS%20Logging%20Sample.aspx 543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87 w3wp.exe (0x097C) 0x14DC SharePoint Foundation Unified Logging Service a086 Verbose Line: 676 543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87 w3wp.exe (0x097C) 0x14DC SharePoint Foundation Unified Logging Service a087 Verbose Client: <client><browser name='Internet Explorer' version='8.0'></browser><language>en-us</language></client> 543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87 w3wp.exe (0x097C) 0x14DC SharePoint Foundation Unified Logging Service a088 Verbose Stack: <stack><function depth='0' signature=' doWork() '>function doWork() { unknownFunction(); }</function></stack> 543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87 w3wp.exe (0x097C) 0x14DC SharePoint Foundation Unified Logging Service a089 Verbose TeamName: Simple ULS Logging 543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87 w3wp.exe (0x097C) 0x14DC SharePoint Foundation Unified Logging Service a08a Verbose OriginalFileName: ULS%20Logging%20Sample.aspx 543a6672-9078-452f-93bd-545c4babefd5

    Looking at the log message, you can easily determine that the exception occurred because unknownFunction was not defined, along with other relevant details such as the line number.

  6. Similarly, clicking Log Trace on the CEWP writes the following trace message:

    10/14/2011 21:29:55.76 w3wp.exe (0x097C) 0x0F6C SharePoint Foundation Unified Logging Service a084 Verbose Message: This is a trace message from CEWP 8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76 w3wp.exe (0x097C) 0x0F6C SharePoint Foundation Unified Logging Service a085 Verbose File: loggingsample.aspx 8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76 w3wp.exe (0x097C) 0x0F6C SharePoint Foundation Unified Logging Service a086 Verbose Line: 0 8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76 w3wp.exe (0x097C) 0x0F6C SharePoint Foundation Unified Logging Service a087 Verbose Client: <client><browser name='Internet Explorer' version='8.0'></browser><language>en-us</language></client> 8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76 w3wp.exe (0x097C) 0x0F6C SharePoint Foundation Unified Logging Service a088 Verbose Stack: <stack><function depth='1' signature=' logMessage() '></function></stack> 8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76 w3wp.exe (0x097C) 0x0F6C SharePoint Foundation Unified Logging Service a089 Verbose TeamName: Simple ULS Logging 8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76 w3wp.exe (0x097C) 0x0F6C SharePoint Foundation Unified Logging Service a08a Verbose OriginalFileName: loggingsample.aspx 8c182889-c323-46f3-a287-a538c379f152

    In this log, you see that a trace message was sent by the logMessage function.

In a sandboxed solution, you cannot deploy any file to the server file system (the Layouts folder), so to make the ULS logging script work, you need to make the following two changes:

  1. Provision the jquery-1.6.4.min.js and ULSLogScript.js file to a Site Collection–relative Styles Library folder (or any other library with appropriate read access).
  2. Update the script references in the consumer Content Editor Web Part (CEWP), as needed.

The remaining functionality should work as is.

10 Must-Have Visual Studio Productivity Add-Ins I use every day and recommend to every .NET Developer

Visual Studio provides a rich extensibility model that developers at Microsoft and in the community have taken advantage of to provide a host of quality add-ins. Some add-ins contribute significant how-did-I-live-without-this functionality, while others just help you automate that small redundant task you constantly find yourself performing.
10 Must-Have Add-Ins

TestDriven.NET
GhostDoc
Smart Paster
CodeKeep
PInvoke.NET
VSWindowManager PowerToy
WSContractFirst
VSMouseBindings
CopySourceAsHTML
Cache Visualizer

In this article, I introduce you to some of the best Visual Studio add-ins available today that can be downloaded for free. I walk through using each of the add-ins, but because I am covering so many I only have room to introduce you to the basic functionality.
Each of these add-ins works with Visual Studio .NET 2003 and most of them already have versions available for Visual Studio 2005. If a Visual Studio 2005 version is not available as of the time of this writing, it should be shortly.

 

TestDriven.NET
Test-driven development is the practice of writing unit tests before you write code, and then writing the code to make those tests pass. By writing tests before you write code, you identify the exact behavior your code should exhibit and, as a bonus, at the end you have 100 percent test coverage, which makes extensive refactoring possible.
NUnit gives you the ability to write unit tests using a simple syntax and then execute those tests one by one or all at once against your app. If you are using Visual Studio Team System, you have unit testing functionality built into the Visual Studio IDE. Before Visual Studio Team System there was TestDriven.NET, an add-in that integrates NUnit directly into the Visual Studio IDE, and if you are using a non-Team System version of Visual Studio 2005 or Visual Studio .NET 2003 it is, in my opinion, still the best solution available.
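If you have not seen NUnit syntax before, a test fixture is just a class decorated with attributes. The minimal (and admittedly contrived) fixture below is my own illustration, not from the add-in's documentation, of the kind of test TestDriven.NET can run straight from the context menu:

using NUnit.Framework;

[TestFixture]
public class ParsingTests
{
    [Test]
    public void Parse_ReadsIntegerFromString()
    {
        // Exercise the code under test and assert on the expected result;
        // NUnit reports a failure if the assertion does not hold.
        int value = int.Parse("42");
        Assert.AreEqual(42, value);
    }
}
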
TestDriven.NET adds unit testing functionality directly to the Visual Studio IDE. Instead of writing a unit test, switching over to the NUnit GUI tool, running the test, then switching back to the IDE to code, and so on, you can do it all right in the IDE.

 

Figure 1 New Testing Options from TestDriven.NET 
After installing TestDriven.NET you will find a number of new menu items on the right-click context menu as shown in Figure 1. You can right-click directly on a unit test and run it. The results will be displayed in the output window as shown in Figure 2.

 

Figure 2 Output of a Unit Test 
While executing unit tests in the IDE is invaluable by itself, perhaps the best feature is that you can also quickly launch into the debugger by right-clicking on a test and selecting Test With | Debugger. This will launch the debugger and then execute your unit tests, hitting any breakpoints you have set in those tests.
In fact, it doesn’t even have to be a unit test for TestDriven.NET to execute it. You could just as easily test any public method that returns void. This means that if you are testing an old app and need to walk through some code, you can write a quick test and execute it right away.
TestDriven.NET is an essential add-in if you work with unit tests or practice test-driven development. (If you don’t already, you should seriously consider it.) TestDriven.NET was written by Jamie Cansdale and can be downloaded from http://www.testdriven.net.

 

GhostDoc
XML comments are invaluable tools when documenting your application. Using XML comments, you can mark up your code and then, using a tool like NDoc, you can generate help files or MSDN-like Web documentation based on those comments. The only problem with XML documentation is the time it takes to write it; you often end up writing similar statements over and over again. The goal of GhostDoc is to automate the tedious parts of writing XML comments by looking at the name of your class or method, as well as any parameters, and making an educated guess as to how the documentation should appear based on recommended naming conventions. This is not a replacement for writing thorough documentation of your business rules and providing examples, but it will automate the mindless part of your documentation generation.
For instance, consider the method shown here:

private void SavePerson(Person person) { }

After installing GhostDoc, you can right-click on the method declaration and choose Document this. The following comments will then be added to your document:

/// <summary>
/// Saves the person.
/// </summary>
/// <param name="person">Person.</param>
private void SavePerson(Person person) { }

As you can see, GhostDoc has automatically generated a summary based on how the method was named and has also populated the parameter comments. Don’t stop here; you should add additional comments stating where the person is being saved to or perhaps give an example of creating and saving a person. Here is my comment after adding some additional information by hand:

/// <summary>
/// Saves a person using the configured persistence provider.
/// </summary>
/// <param name="person">The Person to be saved.</param>
private void SavePerson(Person person) { }
Adding these extra comments is much easier since the basic, redundant portion is automatically generated by GhostDoc. GhostDoc also includes options that allow you to modify existing rules and add additional rules that determine what kind of comments should be generated.
GhostDoc was written by Roland Weigelt and can be downloaded from http://www.roland-weigelt.de/ghostdoc.

 

Smart Paster
Strings play a large role in most applications, whether they are comments being used to describe the behavior of the system, messages being sent to the user, or SQL statements that will be executed. One of the frustrating parts of working with strings is that they never seem to paste correctly into the IDE. When you are pasting comments, the strings might be too long or not aligned correctly, leaving you to spend time inserting line breaks, comment characters, and tabbing. When working with strings that will actually be concatenated, you have to do even more work, usually separating the parts of the string and inserting concatenation symbols or using a string builder.
The Smart Paster add-in helps to limit some of this by providing a number of commands on the right-click menu that let you paste a string from the clipboard into Visual Studio using a certain format. After installing Smart Paster, you will see the new paste options available on the right-click menu (see Figure 3).

 

Figure 3 String Pasting Options from Smart Paster 
For instance, you might have the following string detailing some of your business logic:

To update a person record, a user must be a member of the customer service group or the manager group. After the person has been updated, a letter needs to be generated to notify the customer of the information change.

You can copy and paste this into Visual Studio using the Paste As | Comment option, and you would get the following:

//To update a person record a user must be a member of the customer
//service group or the manager group. After the person has been updated
//a letter needs to be generated to notify the customer of the
//information change.
The correct comment characters and carriage returns are automatically inserted (you can configure at what length to insert a carriage return). If you were inserting this text without the help of Smart Paster, it would paste as one long line, forcing you to manually add all the line breaks and comment characters. As another example, let’s say you have the following error message that you need to insert values into at run time:

You do not have the correct permissions to perform <insert action>. You must be a member of the <insert group> to perform this action.

Using the Paste As | StringBuilder command, you can insert this string as a StringBuilder into Visual Studio. The results would look like this:

StringBuilder stringBuilder = new StringBuilder(134);
stringBuilder.AppendFormat(
    @"You do not have the correct permissions to ");
stringBuilder.AppendFormat(
    @"perform <insert action>. You must be a member of ");
stringBuilder.AppendFormat(
    @"the <insert group> to perform this action.");

It would then simply be a matter of modifying the code to replace the variable sections of the string:

StringBuilder stringBuilder = new StringBuilder(134);
stringBuilder.AppendFormat(
    @"You do not have the correct permissions to ");
stringBuilder.AppendFormat(
    @"perform {0}. You must be a member of ", action);
stringBuilder.AppendFormat(
    @"the {0} to perform this action.", group);

Smart Paster is a time-saving add-in that eliminates a lot of the busy work associated with working with strings in Visual Studio. It was written by Alex Papadimoulis.

 

CodeKeep
Throughout the process of software development, it is common to reuse small snippets of code. Perhaps you reuse an example of how to get an enum value from a string or a starting point on how to implement a certain pattern in your language of choice.
Visual Studio offers some built-in functionality for working with code snippets, but it assumes a couple of things. First, it assumes that you are going to store all of your snippets on your local machine, so if you switch machines or move jobs you have to remember to pack up your snippets and take them with you. Second, these snippets can only be viewed by you. There is no built-in mechanism for sharing snippets between users, groups, or the general public.
This is where CodeKeep comes to the rescue. CodeKeep is a Web application that provides a place for people to create and share snippets of code in any language. The true usefulness of CodeKeep is its Visual Studio add-in, which allows you to search quickly through the CodeKeep database, as well as submit your own snippets.
After installing CodeKeep, you can search the existing code snippets by selecting Tools | CodeKeep | Search, and then using the search screen shown in Figure 4.

 

Figure 4 Searching Code Snippets with CodeKeep 
From this screen you can view your own snippets or search all of the snippets that have been submitted to CodeKeep. When searching for snippets, you see all of the snippets that other users have submitted and marked as public (you can also mark code as private if you want to hide your bad practices). If you find the snippet you are looking for, you can view its details and then quickly copy it to the clipboard to insert into your code.
You can also quickly and easily add your own code snippets to CodeKeep by selecting the code you want to save, right-clicking, and then selecting Send to CodeKeep. This will open a new screen that allows you to wrap some metadata around your snippet, including comments, what language it is written in, and whether it should be private or public for all to see.
Whenever you write a piece of code and you can imagine needing to use it in the future, simply take a moment to submit it; this way, you won’t have to worry about managing your snippets or rewriting them in the future. Since CodeKeep stores all of your snippets on the server, they are centralized so you don’t have to worry about moving your code from system to system or job to job.
CodeKeep was written by Arcware’s Dave Donaldson and is available from http://www.codekeep.net.

 

PInvoke.NET
P/Invoke (platform invoke) is the mechanism for calling unmanaged Win32 API functions from within the .NET Framework. One of the hard parts of using P/Invoke is determining the method signature you need to use; this can often be an exercise in trial and error. Sending incorrect data types or values to an unmanaged API can often result in memory leaks or other unexpected results.
PInvoke.NET is a wiki that can be used to document the correct P/Invoke signatures to be used when calling unmanaged Win32 APIs. A wiki is a collaborative Web site that anyone can edit, which means there are thousands of signatures, examples, and notes about using P/Invoke. Since the wiki can be edited by anyone, you can contribute as well as make use of the information there.
While the wiki and the information stored there are extremely valuable, what makes them most valuable is the PInvoke.NET Visual Studio add-in. Once you have downloaded and installed the PInvoke.NET add-in, you will be able to search for signatures as well as submit new content from inside Visual Studio. Simply right-click on your code file and you will see two new context items: Insert PInvoke Signatures and Contribute PInvoke Signatures and Types.

Figure 5 Using PInvoke.NET 
When you choose Insert PInvoke Signatures, you’ll see the dialog box shown in Figure 5. Using this simple dialog box, you can search for the function you want to call. Optionally, you can include the module that this function is a part of. Now, a crucial part of all major applications is the ability to make the computer Beep. So I will search for the Beep function and see what shows up. The results can be seen in Figure 6.

Figure 6 Finding the Beep Function in PInvoke.NET 
Figure 6 shows the results, including notes about calling the function from .NET. The wiki suggests alternative managed APIs, letting you know that there is a new method, System.Console.Beep, in the .NET Framework 2.0.
There is also a link at the bottom of the dialog box that will take you to the corresponding page on the wiki for the Beep method. In this case, that page includes documentation on the various parameters that can be used with this method as well as some code examples on how to use it.
After selecting the signature you want to insert, click the Insert button and it will be placed into your code document. In this example, the following code would be automatically created for you:

[DllImport("kernel32.dll", SetLastError=true)] [return: MarshalAs(UnmanagedType.Bool)] static extern bool Beep( uint dwFreq, uint dwDuration);

You then simply need to write a call to this method and your computer will be beeping in no time.
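
As a quick illustration (the wrapper class and the frequency/duration values are my own, not from the add-in), calling the inserted signature is no different from calling any other static method:

using System;
using System.Runtime.InteropServices;

class BeepDemo
{
    // Signature as inserted by the PInvoke.NET add-in (see above).
    [DllImport("kernel32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    static extern bool Beep(uint dwFreq, uint dwDuration);

    static void Main()
    {
        // 750 Hz for 300 milliseconds.
        if (!Beep(750, 300))
        {
            Console.WriteLine("Beep failed, Win32 error: " + Marshal.GetLastWin32Error());
        }
    }
}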

The PInvoke.NET wiki and Visual Studio add-in take away a lot of the pain and research time sometimes involved when working with the Win32 API from managed code. The wiki can be accessed at http://www.pinvoke.net, and the add-in can be downloaded from the Helpful Tools link found in the bottom-left corner of the site.

 

VSWindowManager PowerToy
The Visual Studio IDE includes a huge number of different Windows, all of which are useful at different times. If you are like me, you have different window layouts that you like to use at various points in your dev work. When I am writing HTML, I like to hide the toolbox and the task list window. When I am designing forms, I want to display the toolbox and the task list. When I am writing code, I like to hide all the windows except for the task list. Having to constantly open, close, and move windows based on what I am doing can be both frustrating and time consuming.
Visual Studio includes the concept of window layouts. You may have noticed that when you start debugging, the windows will automatically go back to the layout they were in the last time you were debugging. This is because Visual Studio includes a normal and a debugging window layout.
Wouldn’t it be nice if there were additional layouts you could use for when you are coding versus designing? Well, that is exactly what VSWindowManager PowerToy does.
After installing VSWindowManager PowerToy, you will see some new options in the Window menu as shown in Figure 7.

 

Figure 7 VSWindowManager Layout Commands 
The Save Window Layout As menu provides commands that let you save the current layout of your windows. To start using this power toy, set up your windows the way you like them for design and then navigate to the Windows | Save Window Layout As | My Design Layout command. This will save the current layout. Do the same for your favorite coding layout (selecting My Coding Layout), and then for up to three different custom layouts.
VSWindowManager will automatically switch between the design and coding layouts depending on whether you are currently viewing a designer or a code file. You can also use the commands on the Apply Window Layout menu to choose from your currently saved layouts. When you select one of the layouts you have saved, VSWindowManager will automatically hide, show, and rearrange windows so they are in the exact same layout as before.
VSWindowManager PowerToy is very simple, but can save you a lot of time and frustration. VSWindowManager is available from vswindowmanager.codeplex.com/.

 

WSContractFirst
Visual Studio makes creating Web services deceptively easy. You simply create an .asmx file, add some code, and you are ready to go. ASP.NET can then create a Web Services Description Language (WSDL) file used to describe behavior and message patterns for your Web service.
There are a couple problems with letting ASP.NET generate this file for you. The main issue is that you are no longer in control of the contract you are creating for your Web service. This is where contract-first development comes to the rescue. Contract-first development, also called contract-driven development, is the practice of writing the contract (the WSDL file) for your Web service before you actually write the Web service itself. By writing your own WSDL file, you have complete control over how your Web service will be seen and used, including the interface and data structures that are exposed.
Writing a WSDL document is not a lot of fun. It’s kind of like writing a legal contract, but using lots of XML. This is where the WSContractFirst add-in comes into play. WSContractFirst makes it easier to write your WSDL file, and will generate client-side and server-side code for you, based on that WSDL file. You get the best of both worlds: control over your contract and the rapid development you are used to from Visual Studio style services development.
The first step to using WSContractFirst is to create your XML schema files. These files will define the message or messages that will be used with your Web services. Visual Studio provides an easy-to-use GUI to define your schemas; this is particularly helpful since this is one of the key steps of the Web service development process. Once you have defined your schemas you simply need to right-click on one of them and choose Create WSDL Interface Description. This will launch the Generate WSDL Wizard, the first step of which is shown in Figure 8.

Figure 8 Building a WSDL File with WSContractFirst  
Step 1 collects the basics about your service including its name, namespace, and documentation. Step 2 allows you to specify the .xsd files you want to include in your service. The schema you selected to launch this wizard is included by default. Step 3 allows you to specify the operations of your service. You can name the operation as well as specify whether it is a one-way or request/response operation. Step 4 gives you the opportunity to enter the details for the operations and messages. Step 5 allows you to specify whether a element should be created and whether or not to launch the code generation dialog automatically when this wizard is done. Step 6 lets you specify alternative .xsd paths. Once the wizard is complete, your new WSDL file is added to your project.
Now that you have your WSDL file, there are a couple more things WSContractFirst can do for you. To launch the code generation portion of WSContractFirst, you simply need to right-click on your WSDL file and select Generate Web Service Code. This will launch the code generation dialog box shown in Figure 9.

Figure 9 WSContractFirst Code Generation Options 
You can choose to generate a client-side proxy or a service-side stub, as well as configure some other options about the code and what features it should include. Using these code generation features helps speed up development tremendously.
If you are developing Web services using Visual Studio you should definitely look into WSContractFirst and contract-first development. WSContractFirst was written by Thinktecture’s Christian Weyer.

 

VSMouseBindings
Your mouse probably has five buttons, so why are you only using three of them? The VSMouseBindings power toy provides an easy-to-use interface that lets you assign each of your mouse buttons to a Visual Studio command.
VSMouseBindings makes extensive use of the command pattern. You can bind mouse buttons to various commands, such as open a new file, copy the selected text to the clipboard, or just about anything else you can do in Visual Studio. After installing VSMouseBindings you will see a new section in the Options dialog box called VsMouseBindings. The interface can be seen in Figure 10.

Figure 10 VSMouseBindings Options for Visual Studio 
As you can see, you can select a command for each of the main buttons. You probably shouldn’t mess around with the left and right mouse buttons, though, as their normal functionality is pretty useful. The back and forward buttons, however, are begging to be assigned to different commands. If you enjoy having functionality similar to a browser’s back and forward buttons, then you can set the buttons to the Navigate.Backward and Navigate.Forward commands in Visual Studio.
The Use this mouse shortcut in menu lets you set the scope of your settings. This means you can configure different settings when you are in the HTML designer as opposed to when you are working in the source editor.
VSMouseBindings is available from archive.msdn.microsoft.com/VSMouseBindings.

 

CopySourceAsHTML
Code is exponentially more readable when certain parts of that code are differentiated from the rest by using a different color text. Reading code in Visual Studio is generally much easier than trying to read code in an editor like Notepad.
Chances are you may have your own blog by now, or at least have spent some time reading them. Normally, when you try to post a cool code snippet to your blog it ends up being plain old text, which isn’t the easiest thing to read. This is where the CopySourceAsHTML add-in comes in to play. This add-in allows you to copy code as HTML, meaning you can easily post it to your blog or Web site and retain the coloring applied through Visual Studio.
After installing the CopySourceAsHTML add-in, simply select the code you want to copy and then select the Copy Source as HTML command from the right-click menu. After selecting this option you will see the dialog box shown in Figure 11.

Figure 11 Options for CopySourceAsHTML 

From here you can choose what kind of HTML view you want to create. This can include line numbers, specific tab sizes, and many other settings. After clicking OK, the HTML is saved to the clipboard. For instance, suppose you were starting with the following code snippet inside Visual Studio:

private long Add(int d, int d2)
{
    return (long)d + d2;
}
Figure 12 HTML Formatted Code  
After you select Copy As HTML and configure the HTML to include line numbers, this code will look like Figure 12 in the browser. Anything that makes it easier to share and understand code benefits all of us as it means more people will go to the trouble of sharing knowledge and learning from each other.
CopySourceAsHTML was written by Colin Coller and can be downloaded from copysourceashtml.codeplex.com/.

 

Cache Visualizer
Visual Studio 2005 includes a new debugging feature called visualizers, which can be used to create a human-readable view of data for use during the debugging process. Visual Studio 2005 includes a number of debugger visualizers by default, most notably the DataSet visualizer, which provides a tabular interface to view and edit the data inside a DataSet. While the default visualizers are very valuable, perhaps the best part of this new interface is that it is completely extensible. With just a little bit of work you can write your own visualizers to make debugging that much easier.
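To give a rough idea of what that extensibility looks like (the class below is my own throwaway sample, not the Cache Visualizer discussed next), a visualizer derives from DialogDebuggerVisualizer and is registered with the DebuggerVisualizer attribute:

using System.Diagnostics;
using System.Windows.Forms;
using Microsoft.VisualStudio.DebuggerVisualizers;

// Register the visualizer for string instances (any serializable type can be targeted).
[assembly: DebuggerVisualizer(
    typeof(StringLengthVisualizer),
    Target = typeof(string),
    Description = "String Length Visualizer (sample)")]

public class StringLengthVisualizer : DialogDebuggerVisualizer
{
    protected override void Show(IDialogVisualizerService windowService,
                                 IVisualizerObjectProvider objectProvider)
    {
        // Pull the object from the debuggee into the debugger process and display it.
        string value = (string)objectProvider.GetObject();
        MessageBox.Show("Length: " + value.Length, "String Length Visualizer");
    }
}
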
While a lot of users will write visualizers for their own custom complex types, some developers are already posting visualizers for parts of the Framework. I am going to look at one of the community-built visualizers that is already available and how it can be used to make debugging much easier.
The ASP.NET Cache represents a collection of objects that are being stored for later use. Each object has some settings wrapped around it, such as how long it will be cached for or any cache dependencies. There is no easy way while debugging to get an idea of what is in the cache, how long it will be there, or what it is watching. Brett Johnson saw that gap and wrote Cache Visualizer to examine the ASP.NET cache.
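For context, this is the kind of entry the visualizer inspects; a typical insertion into the ASP.NET cache attaches an absolute expiration and a file dependency (the key, file path, and value below are placeholders of my own):

using System;
using System.Web;
using System.Web.Caching;

public static class CacheSample
{
    public static void CachePriceList(HttpContext context, object priceList)
    {
        // Cache the object for 10 minutes, or until the backing file changes.
        context.Cache.Insert(
            "PriceList",                                                        // cache key
            priceList,                                                          // cached value
            new CacheDependency(context.Server.MapPath("~/App_Data/prices.xml")),
            DateTime.Now.AddMinutes(10),                                        // absolute expiration
            Cache.NoSlidingExpiration);                                         // no sliding expiration
    }
}
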
Once you have downloaded and installed the visualizer you will see a new icon appear next to the cache object in your debug windows, as shown in Figure 13.

Figure 13 Selecting Cache Visualizer While Debugging 
When you click on the magnifying glass to use the Cache Visualizer, a dialog box appears that includes information about all of the objects currently stored in the ASP.NET cache, as you can see in Figure 14.

Figure 14 Cache Visualizer Shows Objects in the ASP.NET Cache 
Under Public Cache Entries, you can see the entries that I have added to the cache. The entries under Private Cache Entries are ones added by ASP.NET. Note that you can see the expiration information as well as the file dependency for the cache entry.
The Cache Visualizer is a great tool when you are working with ASP.NET. It is also representative of some of the great community-developed visualizers we will see in the future.

 

Wrapping It Up
While this article has been dedicated to freely available add-ins, there are also a host of add-ins that can be purchased for a reasonable price. I encourage you to check out these other options, as in some cases they can add some tremendous functionality to the IDE. This article has been a quick tour of some of the best freely available add-ins for Visual Studio. Each of these add-ins may only do a small thing, but together they help to increase your productivity and enable you to write better code.

What is Kendo UI

Kendo UI is an HTML5, jQuery-based framework for building modern web apps. The framework features lots of UI widgets, a rich data visualization framework, an auto-adaptive Mobile framework, and all of the tools needed for HTML5 app development, such as Data Binding, Templating, Drag-and-Drop API, and more.


 

Kendo UI comes in different bundles:

  • Kendo UI Web – HTML5 widgets for desktop browsing experience.
  • Kendo UI DataViz – HTML5 data visualization widgets.
  • Kendo UI Mobile – HTML5 framework for building hybrid mobile applications.
  • Kendo UI Complete – includes Kendo UI Web, Kendo UI DataViz and Kendo UI Mobile.
  • Telerik UI for ASP.NET MVC – Kendo UI Complete plus ASP.NET MVC wrappers for Kendo UI Web, DataViz and Mobile.
  • Telerik UI for JSP – Kendo UI Complete plus JSP wrappers for Kendo UI Web and Kendo UI DataViz.
  • Telerik UI for PHP – Kendo UI Complete plus PHP wrappers for Kendo UI Web and Kendo UI DataViz.

Installing and Getting Started with Kendo UI

You can download all Kendo UI bundles from the download page.

The distribution zip file contains the following:

  • /examples – quick start demos.
  • /js – minified JavaScript files.
  • /src – complete source code. Not available in the trial distribution.
  • /styles – minified CSS files and theme images.
  • /wrappers – server-side wrappers. Available in Telerik UI for ASP.NET MVC, JSP or PHP.
  • changelog.html – Kendo UI release notes.

Using Kendo UI

To use Kendo UI in your HTML page you need to include the required JavaScript and CSS files.

Kendo UI Web

  1. Download Kendo UI Web and extract the distribution zip file to a convenient location.
  2. Copy the /js and /styles directories of the Kendo UI Web distribution to your web application root directory.
  3. Include the Kendo UI Web JavaScript and CSS files in the head tag of your HTML page. Make sure the common CSS file is registered before the theme CSS file. Also make sure only one combined script file is registered. For more information, please refer to the Javascript Dependencies page.
    <!-- Common Kendo UI Web CSS -->
    <link href="styles/kendo.common.min.css" rel="stylesheet" />
    <!-- Default Kendo UI Web theme CSS -->
    <link href="styles/kendo.default.min.css" rel="stylesheet" />
    <!-- jQuery JavaScript -->
    <script src="js/jquery.min.js"></script>
    <!-- Kendo UI Web combined JavaScript -->
    <script src="js/kendo.web.min.js"></script>
    
  4. Initialize a Kendo UI Web Widget (the KendoDatePicker in this example):
    <!-- HTML element from which the Kendo DatePicker would be initialized -->
    <input id="datepicker" />
    <script>
    $(function() {
        // Initialize the Kendo DatePicker by calling the kendoDatePicker jQuery plugin
        $("#datepicker").kendoDatePicker();
    });
    </script>
    

Here is the complete example:

<!doctype html>
<html>
    <head>
        <title>Kendo UI Web</title>
        <link href="styles/kendo.common.min.css" rel="stylesheet" />
        <link href="styles/kendo.default.min.css" rel="stylesheet" />
        <script src="js/jquery.min.js"></script>
        <script src="js/kendo.web.min.js"></script>
    </head>
    <body>
        <input id="datepicker" />
        <script>
            $(function() {
                $("#datepicker").kendoDatePicker();
            });
        </script>
    </body>
</html>

Kendo UI DataViz

  1. Download Kendo UI DataViz and extract the distribution zip file to a convenient location.
  2. Copy the /js and /styles directories of the Kendo UI DataViz distribution to your web application root directory.
  3. Include the Kendo UI DataViz JavaScript and CSS files in the head tag of your HTML page:
    <!-- Kendo UI DataViz CSS -->
    <link href="styles/kendo.dataviz.min.css" rel="stylesheet" />
    <!-- jQuery JavaScript -->
    <script src="js/jquery.min.js"></script>
    <!-- Kendo UI DataViz combined JavaScript -->
    <script src="js/kendo.dataviz.min.js"></script>
    
  4. Initialize a Kendo UI DataViz Widget (the Kendo Radial Gauge in this example):
    <!-- HTML element from which the Kendo Radial Gauge would be initialized -->
    <div id="gauge"></div>
    <script>
    $(function() {
        $("#gauge").kendoRadialGauge();
    });
    </script>
    

Here is the complete example:

<!doctype html>
<html>
    <head>
        <title>Kendo UI DataViz</title>
        <link href="styles/kendo.dataviz.min.css" rel="stylesheet" />
        <script src="js/jquery.min.js"></script>
        <script src="js/kendo.dataviz.min.js"></script>
    </head>
    <body>
        <div id="gauge"></div>
        <script>
        $(function() {
            $("#gauge").kendoRadialGauge();
        });
        </script>
    </body>
</html>

Kendo UI Mobile

  1. Download Kendo UI Mobile and extract the distribution zip file to a convenient location.
  2. Copy the /js and /styles directories of the Kendo UI Mobile distribution to your web application root directory.
  3. Include the Kendo UI Mobile JavaScript and CSS files in the head tag of your HTML page:
    <!-- Kendo UI Mobile CSS -->
    <link href="styles/kendo.mobile.all.min.css" rel="stylesheet" />
    <!-- jQuery JavaScript -->
    <script src="js/jquery.min.js"></script>
    <!-- Kendo UI Mobile combined JavaScript -->
    <script src="js/kendo.mobile.min.js"></script>
    
  4. Initialize a Kendo Mobile Application
    <!-- Kendo Mobile View -->
    <div data-role="view" data-title="View" id="index">
        <!--Kendo Mobile Header -->
        <header data-role="header">
            <!--Kendo Mobile NavBar widget -->
            <div data-role="navbar">
                <span data-role="view-title"></span>
            </div>
        </header>
        <!--Kendo Mobile ListView widget -->
        <ul data-role="listview">
          <li>Item 1</li>
          <li>Item 2</li>
        </ul>
        <!--Kendo Mobile Footer -->
        <footer data-role="footer">
            <!-- Kendo Mobile TabStrip widget -->
            <div data-role="tabstrip">
                <a data-icon="home" href="#index">Home</a>
                <a data-icon="settings" href="#settings">Settings</a>
            </div>
        </footer>
    </div>
    <script>
    // Initialize a new Kendo Mobile Application
    var app = new kendo.mobile.Application();
    </script>
    

Here is the complete example:

<!doctype html>
<html>
    <head>
        <title>Kendo UI Mobile</title>
        <link href="styles/kendo.mobile.all.min.css" rel="stylesheet" />
        <script src="js/jquery.min.js"></script>
        <script src="js/kendo.mobile.min.js"></script>
    </head>
    <body>
        <div data-role="view" data-title="View" id="index">
            <header data-role="header">
                <div data-role="navbar">
                    <span data-role="view-title"></span>
                </div>
            </header>
            <ul data-role="listview">
              <li>Item 1</li>
              <li>Item 2</li>
            </ul>
            <footer data-role="footer">
                <div data-role="tabstrip">
                    <a data-icon="home" href="#index">Home</a>
                    <a data-icon="settings" href="#settings">Settings</a>
                </div>
            </footer>
        </div>
        <script>
        var app = new kendo.mobile.Application();
        </script>
    </body>
</html>

Server-side wrappers

Kendo UI provides server-side wrappers for ASP.NET, PHP and JSP. Those are classes (ASP.NET and PHP) or XML tags (JSP) which allow configuring the Kendo UI widgets with server-side code.
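
For example, the ASP.NET MVC wrappers expose each widget as a fluent C# HtmlHelper extension. The Razor snippet below is only a hedged sketch (it assumes the Kendo.Mvc.UI namespace is imported into the view and that the required Kendo UI script and style references are already registered); it renders the same DatePicker as the JavaScript sample above:

@* Renders an input with id "datepicker" and initializes the widget on the client. *@
@(Html.Kendo().DatePicker()
      .Name("datepicker")
      .Value(DateTime.Today)
)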

You can find more info about the server-side wrappers here:

  • Get Started with Telerik UI for ASP.NET MVC
  • Get Started with Telerik UI for JSP
  • Get Started with Telerik UI for PHP

Next Steps

Kendo UI videos

You can watch the videos in the Kendo UI YouTube channel.

Kendo UI Dojo

A lot of interactive tutorials are available in the Kendo UI Dojo.

Further reading

  1. Kendo UI Widgets
  2. Data Attribute Initialization
  3. Requirements

Examples

  1. Online demos
  2. Code library projects
  3. Examples available on GitHub
    • ASP.NET MVC examples
    • ASP.NET MVC Kendo UI Music Store
    • ASP.NET WebForms examples
    • JSP examples
    • Kendo Mobile Sushi
    • PHP examples
    • Ruby on Rails examples

Help Us Improve Kendo UI Documentation, Samples, Tutorials and Demos

The Kendo UI team would LOVE your help to improve our documentation. We encourage you to contribute in the way that you choose:

Submit a New Issue at GitHub

Open a new issue on the topic if it does not exist already. When creating an issue, please provide a descriptive title, be as specific as possible and link to the document in question. If you can provide a link to the closest anchor to the issue, that is even better.

Update the Documentation at GitHub

This is the most direct method. Follow the contribution instructions. The basic steps are that you fork our documentation and submit a pull request. That way you can contribute to exactly where you found the error and our technical writing team just needs to approve your change request. Please use only standard Markdown and follow the directions at the link. If you find an issue in the docs, or even feel like creating new content, we are happy to have your contributions!

Forums

You can also go to the Kendo UI Forums and leave feedback. This method will take a bit longer to reach our documentation team, but if you like the accountability of forums and you want a fast reply from our amazing support team, leaving feedback in the Kendo UI forums guarantees that your suggestion has a support number and that we’ll follow up on it. Thank you for contributing to the Kendo UI community!

NEW “Filter My Lists” Web Part now available + FREE Metro UI Master Page when ordering

“Filter My Lists” Web Part

Saves you time with optimal performance

Find what you are looking for with a few clicks, even in cluttered sites and lists with masses of items and documents.

Find exactly what you need and stop wasting your time browsing SharePoint.
Filter the content of multiple lists and libraries in a single   step.

Combine search and metadata filters

In a single panel combine item, document and attachment searches with metadata keyword searches and managed metadata filters.

Select multiple filter values from drop-down lists or alternatively use the keyword search of metadata fields with the help of wildcard characters and logical operators.

Export filtered views to Excel

Export filtered views and data to Excel. A print view enables you to print your results in a clear printable format with a single  click.

Keep views clear and concise

Provides a complete set of filters without cluttering list views and keeps your list views clear, concise and speedy. Enables you to filter SharePoint using columns which aren’t visible in list views.

Refine filters and save them for future use, whether private, to share with others or to use as default filters.

FREE Metro Style UI Master Page

 


Modern UI Master Page and Styles for SharePoint 2010.

This will give the Metro/Modern UI styling of SharePoint 2013 to your SharePoint 2010 team sites.

Features include:
– Quick launch styling
– Global navigation and drop-down styling
– Search box styling and layout change
– Web part header styling
– Segoe UI font

Architecture and Practical Application – BizTalk Adapter for mySAP Business Suite

Architecture for BizTalk Adapter for mySAP Business Suite


The Microsoft BizTalk Adapter for mySAP Business Suite implements a Windows Communication Foundation (WCF) custom binding, which contains a single custom transport binding element that enables communication with an SAP system.


The SAP adapter is wrapped by the Microsoft Windows Communication Foundation (WCF) Line of Business (LOB) Adapter SDK runtime and is exposed to applications through the WCF channel architecture. The SAP adapter communicates with the SAP system through either the 64-bit or 32-bit version of the SAP Unicode RFC SDK (librfc32u.dll).

The following figure shows the end-to-end architecture for solutions that are developed by using the SAP adapter.
SAP End-to-End Architecture
Consuming the Adapter

The SAP adapter exposes the SAP system as a WCF service to client applications. Client applications exchange SOAP messages with the SAP adapter through WCF channels to perform operations and to access data on the SAP system. The preceding figure shows four ways in which the SAP adapter can be consumed.

• Through a WCF channel application that performs operations on the SAP system by using the WCF channel model to exchange SOAP messages directly with the SAP adapter. For more information about developing solutions for the SAP adapter by using WCF channel model programming, see Developing Applications by Using the WCF Channel Model. (A hedged code sketch of this approach appears after this list.)

• Through a WCF service model application that calls methods on a WCF client to perform operations on the SAP system. A WCF client models the operations exposed by the SAP adapter as .NET methods. You can use the Microsoft Windows Communication Foundation (WCF) Line of Business (LOB) Adapter SDK or the svcutil.exe tool to create a WCF client class from metadata exposed by the SAP adapter. For more information about WCF service model programming and the SAP adapter, see Developing Applications by Using the WCF Service Model.

• Through a BizTalk port that is configured to use the BizTalk WCF-Custom adapter with the SAP Binding configured as the binding for the WCF-Custom transport type in a BizTalk Server application. The BizTalk WCF-Custom adapter enables communication between a BizTalk Server application and a WCF service.

The BizTalk WCF-Custom adapter supports custom WCF bindings through its WCF-Custom transport type, which enables you to configure any WCF binding exposed to the configuration system as the binding used by the BizTalk WCF-Custom adapter. For more information about how to use the SAP adapter in BizTalk Server solutions, see Developing BizTalk Applications. BizTalk transactions are supported by the BizTalk Layered Channel binding element which can be loaded by setting a binding property on the SAP Binding.

• Through an IIS-hosted Web service. In this scenario, the SAP adapter is exposed through a WCF Service proxy, which is hosted in IIS by using one of the standard WCF HTTP bindings.

• Through the .NET Framework Data Provider for mySAP Business Suite. The Data Provider for SAP runs on top of the SAP adapter and provides an ADO.NET interface to an SAP system.
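
To make the channel model option above a little more concrete, here is a minimal sketch. The connection URI, SOAP action, RFC name, and credentials are placeholders of my own (real values come from the metadata the adapter exposes for the RFC you want to call), and error handling is omitted:

using System;
using System.IO;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.Xml;
using Microsoft.Adapters.SAP;   // SAPBinding

class SapChannelModelSketch
{
    static void Main()
    {
        SAPBinding binding = new SAPBinding();
        EndpointAddress address =
            new EndpointAddress("sap://CLIENT=800;LANG=EN@A/<AppServerHost>/<SystemNumber>"); // placeholder URI

        ChannelFactory<IRequestChannel> factory =
            new ChannelFactory<IRequestChannel>(binding, address);
        factory.Credentials.UserName.UserName = "<sap-user>";      // placeholder credentials
        factory.Credentials.UserName.Password = "<sap-password>";
        factory.Open();

        IRequestChannel channel = factory.CreateChannel();
        channel.Open();

        // Placeholder SOAP action and request body for the target RFC.
        string action = "http://Microsoft.LobServices.Sap/2007/03/Rfc/<RFC_NAME>";
        string body = "<RFC_NAME xmlns=\"http://Microsoft.LobServices.Sap/2007/03/Rfc/\" />";

        Message request = Message.CreateMessage(
            binding.MessageVersion, action, XmlReader.Create(new StringReader(body)));

        Message reply = channel.Request(request);
        Console.WriteLine(reply.ToString());

        channel.Close();
        factory.Close();
    }
}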

The SAP adapter and the SAP RFC library are always hosted in-process with the application or service that consumes the adapter.

The SAP Adapter and WCF

WCF presents a programming model based on the exchange of SOAP messages over channels between clients and services. These messages are sent between endpoints exposed by a communicating client and service.

An endpoint consists of an endpoint address which specifies the location at which messages are received, a binding which specifies the communication protocols used to exchange messages, and a contract which specifies the operations and data types exposed by the endpoint.

A binding consists of one or more binding elements that stack on top of each other to define how messages are exchanged with the endpoint.

 

At a minimum, a binding must specify the transport and encoding used to exchange messages with the endpoint. Message exchange between endpoints occurs over a channel stack that is composed of one or more channels. Each channel is a concrete implementation of one of the binding elements in the binding configured for the endpoint.
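
As a generic WCF illustration of that stacking (not specific to the SAP adapter), a custom binding is literally assembled from binding elements in code:

using System.ServiceModel.Channels;

class BindingStackSketch
{
    // The message encoding element sits on top of the transport element;
    // together they define how messages are exchanged with the endpoint.
    static Binding CreateHttpTextBinding()
    {
        return new CustomBinding(
            new TextMessageEncodingBindingElement(),
            new HttpTransportBindingElement());
    }
}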

For more information about WCF and the WCF programming model, see “Windows Communication Foundation” at http://go.microsoft.com/fwlink/?LinkId=89726.

The Microsoft BizTalk Adapter for mySAP Business Suite exposes a WCF custom binding, the SAP Binding (Microsoft.Adapters.SAP.SAPBinding). By default, this binding contains a single custom transport binding element, the SAP Adapter Binding Element (Microsoft.Adapters.SAP.SAPAdapter), which enables operations on an SAP system. When using the SAP adapter with BizTalk Server, you can set the EnableBizTalkCompatibilityMode binding property to load a custom binding element, the BizTalk Layered Channel Binding Element, on top of the SAP Adapter Binding Element. The BizTalk Layered Channel Binding Element is implemented internally by the SAP adapter and is not exposed outside the SAP Binding.

Microsoft.Adapters.SAP.SAPBinding (the SAP Binding) and Microsoft.Adapters.SAP.SAPAdapter (the SAP Adapter Binding Element) are public classes and are also exposed to the configuration system. Because the SAP Adapter Binding Element is exposed publicly, you can build your own custom WCF bindings capable of extending the functionality of the SAP adapter. For example, you could implement a custom binding to support Enterprise Single Sign-On (SSO) in a WCF channel or a WCF service model programming solution, to aggregate database operations into a single multifunction operation, or to perform schema transformation between operations implemented by a custom application and operations on the SAP system.
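
Because SAPBinding is public, it can also be created and configured directly in code. A minimal sketch follows; the helper method and its parameter are my own, and only the EnableBizTalkCompatibilityMode property is taken from the description above:

using System.ServiceModel.Channels;
using Microsoft.Adapters.SAP;

class SapBindingSketch
{
    static Binding CreateSapBinding(bool hostedInBizTalk)
    {
        SAPBinding binding = new SAPBinding();

        // When hosted in BizTalk, load the BizTalk Layered Channel binding element
        // on top of the SAP Adapter Binding Element, as described above.
        binding.EnableBizTalkCompatibilityMode = hostedInBizTalk;

        return binding;
    }
}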

The SAP adapter is built on top of the Microsoft Windows Communication Foundation (WCF) Line of Business (LOB) Adapter SDK and runs on top of the WCF LOB Adapter SDK runtime. The WCF LOB Adapter SDK provides a software framework and tooling infrastructure that the SAP adapter leverages to provide a rich set of features to users and adapter clients.

The Connection to the SAP System

The SAP adapter connects with the SAP system through the SAP Unicode RFC SDK Library (librfc32u.dll). The SAP adapter supports both the 32 bit and the 64 bit versions of the SAP RFC SDK. The SAP RFC SDK enables external programs to call ABAP functions on a SAP system.

You establish a connection to an SAP system by providing a connection URI to the SAP adapter. The SAP adapter supports the following kinds of connections to an SAP system (illustrative URI shapes are sketched after the list):
• An application host–based connection (A), in which the SAP adapter connects directly to an SAP application server.

• A load balancing connection (B), in which the SAP adapter connects to an SAP messaging server.

• A destination-based connection (D), in which the connection to the SAP system is specified by a destination in the saprfc.ini configuration file. A, B, and R type connections are supported.

• A listener connection (R), in which the adapter receives RFCs, tRFC and IDOCs through an RFC Destination on the SAP system that is specified by a listener host, a listener gateway service, and a listener program ID, either directly in the connection URI or by an R-based destination in the saprfc.ini configuration file.
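
Purely as illustrative shapes (every segment in angle brackets is a placeholder of my own; consult the adapter documentation for the authoritative connection URI grammar), the four connection types listed above correspond roughly to URIs of the following form:

// Illustrative placeholder URIs only.
string applicationHost = "sap://CLIENT=<client>;LANG=EN@A/<AppServerHost>/<SystemNumber>";
string loadBalanced    = "sap://CLIENT=<client>;LANG=EN@B/<MessageServerHost>/<LogonGroup>/<SystemId>";
string destinationIni  = "sap://CLIENT=<client>;LANG=EN@D/<DestinationName>";
string listener        = "sap://CLIENT=<client>;LANG=EN@R/<ListenerDestination>";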

Architecture for BizTalk Adapter for mySAP Business Suite

This topic has not yet been rated – Rate this topic

The Microsoft BizTalk Adapter for mySAP Business Suite implements a Windows Communication Foundation (WCF) custom binding, which contains a single custom transport binding element that enables communication with an SAP system. The SAP adapter is wrapped by the Microsoft Windows Communication Foundation (WCF) Line of Business (LOB) Adapter SDK runtime and is exposed to applications through the WCF channel architecture. The SAP adapter communicates with the SAP system through either the 64-bit or 32-bit version of the SAP Unicode RFC SDK (librfc32u.dll). The following figure shows the end-to-end architecture for solutions that are developed by using the SAP adapter.
SAP End-to-End Architecture
Consuming the Adapter

The SAP adapter exposes the SAP system as a WCF service to client applications. Client applications exchange SOAP messages with the SAP adapter through WCF channels to perform operations and to access data on the SAP system. The preceding figure shows the ways in which the SAP adapter can be consumed.
• Through a WCF channel application that performs operations on the SAP system by using the WCF channel model to exchange SOAP messages directly with the SAP adapter. For more information about developing solutions for the SAP adapter by using WCF channel model programming, see Developing Applications by Using the WCF Channel Model.

• Through a WCF service model application that calls methods on a WCF client to perform operations on the SAP system. A WCF client models the operations exposed by the SAP adapter as .NET methods. You can use the Microsoft Windows Communication Foundation (WCF) Line of Business (LOB) Adapter SDK or the svcutil.exe tool to create a WCF client class from metadata exposed by the SAP adapter. For more information about WCF service model programming and the SAP adapter, see Developing Applications by Using the WCF Service Model.

• Through a BizTalk port that is configured to use the BizTalk WCF-Custom adapter with the SAP Binding configured as the binding for the WCF-Custom transport type in a BizTalk Server application. The BizTalk WCF-Custom adapter enables communication between a BizTalk Server application and a WCF service.

The BizTalk WCF-Custom adapter supports custom WCF bindings through its WCF-Custom transport type, which enables you to configure any WCF binding exposed to the configuration system as the binding used by the BizTalk WCF-Custom adapter. For more information about how to use the SAP adapter in BizTalk Server solutions, see Developing BizTalk Applications.

BizTalk transactions are supported by the BizTalk Layered Channel Binding Element, which can be loaded by setting a binding property on the SAP Binding.

• Through an IIS-hosted Web service. In this scenario, the SAP adapter is exposed through a WCF Service proxy, which is hosted in IIS by using one of the standard WCF HTTP bindings.

• Through the .NET Framework Data Provider for mySAP Business Suite. The Data Provider for SAP runs on top of the SAP adapter and provides an ADO.NET interface to an SAP system.

The SAP adapter and the SAP RFC library are always hosted in-process with the application or service that consumes the adapter.

The SAP Adapter and WCF

WCF presents a programming model based on the exchange of SOAP messages over channels between clients and services. These messages are sent between endpoints exposed by a communicating client and service.

An endpoint consists of an endpoint address which specifies the location at which messages are received, a binding which specifies the communication protocols used to exchange messages, and a contract which specifies the operations and data types exposed by the endpoint. A binding consists of one or more binding elements that stack on top of each other to define how messages are exchanged with the endpoint.

At a minimum, a binding must specify the transport and encoding used to exchange messages with the endpoint. Message exchange between endpoints occurs over a channel stack that is composed of one or more channels. Each channel is a concrete implementation of one of the binding elements in the binding configured for the endpoint.

For more information about WCF and the WCF programming model, see “Windows Communication Foundation” at http://go.microsoft.com/fwlink/?LinkId=89726.


So – How Do I Use a Custom Web Part?


This section provides information about using a custom Web Part with Microsoft Office SharePoint Server. To use a custom Web Part, you must do the following:
1. Create a custom Web Part

2. Deploy the custom Web Part to a SharePoint portal

3. Configure the SharePoint portal to use the custom Web Part

Before You Begin

Before you create a custom Web Part:
• Publish the SAP artifacts as a WCF service. For more information, see Step 1: Publish the SAP Artifacts as a WCF Service in Tutorial 1: Presenting Data from an SAP System on a SharePoint Site.

• Create an application definition file for the SAP artifacts using the Business Data Catalog in Microsoft Office SharePoint Server. For more information, see Step 2: Create an Application Definition File for the SAP Artifacts in Tutorial 1: Presenting Data from an SAP System on a SharePoint Site.

Step 1: Create a custom Web Part

To create a custom Web Part using Visual Studio, do the following:
1. Start Visual Studio, and then create a project.

2. In the New Project dialog box, from the Project types pane, select Visual C#. From the Templates pane, select Class Library.

3. Specify a name and location for the solution. For this topic, specify CustomWebPart in the Name and Solution Name boxes. Specify a location, and then click OK.

4. Add a reference to the System.Web component to the project. Right-click the project name in Solution Explorer, and then click Add Reference. In the Add Reference dialog box, select System.Web on the .NET tab, and then click OK. The System.Web component contains the required namespace of System.Web.UI.WebControls.WebParts.

5. Add the required code based on your issue to the project. For the code sample that is relevant to a certain issue, see "Issues Involving Custom Web Parts" in Considerations While Using the SAP Adapter with Microsoft Office SharePoint Server.

6. Build the project. On successful build of the project, a .dll file, CustomWebPart.dll, will be generated in the /bin/Debug folder.

7. Only for 64-bit computers: Sign the CustomWebPart.dll file with a strong name before performing the following steps. Otherwise, you will not be able to import, and hence use, the CustomWebPart.dll in the SharePoint portal in "Step 3: Configure the SharePoint portal to use the custom Web Part." For information about how to sign an assembly with a strong name, see http://go.microsoft.com/fwlink/?LinkId=197171.

Step 2: Deploy the custom Web Part to a SharePoint Portal

    You must do the following to make the CustomWebPart.dll file (custom Web Part) that is created in “Step 1: Create a custom Web Part” of this topic usable on the SharePoint portal:
• Copy the CustomWebPart.dll file to the bin folder of the SharePoint portal: Microsoft Office SharePoint Server creates portals under the <drive>:\Inetpub\wwwroot\wss\VirtualDirectories folder. A folder is created for each portal, and it can be identified by the port number. You must copy the CustomWebPart.dll file created in "Step 1: Create a custom Web Part" of this topic to the <drive>:\Inetpub\wwwroot\wss\VirtualDirectories\<port number>\bin folder. For example, if the port number of your SharePoint portal is 13614, you must copy the CustomWebPart.dll file to the <drive>:\Inetpub\wwwroot\wss\VirtualDirectories\13614\bin folder.

Tip: Another way to find the folder location of your SharePoint portal is by using the Internet Information Services (IIS) Manager window (Start > Run > inetmgr). Locate your SharePoint portal in the Internet Information Services (IIS) Manager window ([computer_name] > Web Sites > [Portal-Name]), right-click, and then click Properties in the shortcut menu. In the properties dialog box of the SharePoint portal, click the Home Directory tab, and then check the Local path box.

• Add the Safe Control Entry in the web.config File: Because the CustomWebPart.dll file will be used on different computers and by multiple users, you must declare the file as "safe." To do so, open the web.config file located in the SharePoint portal folder at <drive>:\Inetpub\wwwroot\wss\VirtualDirectories\<port number>. Under the <SafeControls> section of the web.config file, add a safe control entry for the CustomWebPart assembly.
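As an illustration, safe control entries along the following lines can be used. The Namespace and TypeName values assume the Web Part class lives in the project's default CustomWebPart namespace, and on a 64-bit computer the Version, Culture, and PublicKeyToken must match the strong name you gave the assembly in Step 1 (the token below is a placeholder).

On a 32-bit computer:

<SafeControl Assembly="CustomWebPart" Namespace="CustomWebPart" TypeName="*" Safe="True" />

On a 64-bit computer:

<SafeControl Assembly="CustomWebPart, Version=1.0.0.0, Culture=neutral, PublicKeyToken=<your public key token>" Namespace="CustomWebPart" TypeName="*" Safe="True" />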

     

    Save the web.config file, and then close it.

    Step 3: Configure the SharePoint portal to use the custom Web Part

    You need to add the custom Web Part to the Microsoft Office SharePoint Server Web Part Gallery, so that you can use it on your SharePoint portal. To do so:

1. Start SharePoint 3.0 Central Administration. Click Start, point to All Programs, point to Microsoft Office Server, and then click SharePoint 3.0 Central Administration.

2. In the left navigation pane, click the name of the Shared Service Provider (SSP) to which you want to add the custom Web Part.

3. On the Shared Services Administration page, in the upper-right corner, click Site Actions, and then click Create.

4. On the Site Settings page, click Web Parts under the Galleries column.

5. On the Web Part Gallery page, to add the custom Web Part to the gallery, click New. At this point the custom Web Part is not yet available on the Web Part Gallery page.

6. On the New Web Parts page, locate CustomWebPart (the name of the custom Web Part) in the list, select the check box on the left, and then click Populate Gallery at the top of the page. This adds the CustomWebPart entry to the Web Part Gallery page.

7. Now you can use the custom Web Part (CustomWebPart) to create Web Parts in your SharePoint portal. The custom Web Part (CustomWebPart) will appear under the Miscellaneous section on the Add Web Parts page.

     


    BizTalk Adapter for mySAP Business Suite and the WCF LOB Adapter SDK


    The Microsoft BizTalk Adapter for mySAP Business Suite implements a set of core components that leverage functionality provided by the Microsoft Windows Communication Foundation (WCF) Line of Business (LOB) Adapter SDK and provide connectivity to the SAP system through the SAP Unicode RFC SDK Library (librfc32u.dll).

    The WCF LOB Adapter SDK serves as the software layer through which the SAP adapter interfaces with Windows Communication Foundation (WCF), and the RFC SDK serves as the layer through which the SAP adapter interfaces with the SAP system.

    The following figure shows the relationships between the internal components of the SAP adapter and between these components and the RFC SDK.

    The relationship of internal adapter components


    How to Create Custom SharePoint 2010 Page Layouts using SharePoint Designer 2010

    Scenario:

    You’re working with the Enterprise Wiki Site Template and you don’t really like where the “Last modified…” information is located (above the content). You want to move that information to the bottom of the page.

    Option 1: Modify the “EnterpriseWiki.aspx” Page Layout directly.

    Option 2: Create a new Page Layout based on the original one and then modify that one.

We’ll go with Option 2, since we don’t want to modify the out-of-the-box template in case we need it later on.

    How To:

    Step 1

    Navigate to the top level site of the Site Collection > Site Actions > Site Settings > Master pages (Under the Galleries section). Then switch over to the Documents tab in the Ribbon and then click New > Page Layout.

    Step 2

Select the Enterprise Wiki Page Content Type to associate with, and give it a URL and Title. Note that there’s also a link on this page to create a new Content Type. You might be interested in doing this if you wanted to, say, add more editing fields or metadata properties to the layout; for example, another Managed Metadata column to capture folksonomy alongside the already included “Wiki Categories” Managed Metadata column.

    Step 3

    SharePoint Designer time! Hover over your newly created Page Layout and “Edit in Microsoft SharePoint Designer.”

    Step 4

    Now you can choose to build your page manually by dragging your SharePoint Controls onto the page and laying them out as you’d like…

    … Or you can copy and paste the OOB Enterprise Wiki Page Layout. I think I’ll do that. 🙂

    Step 5

    Alright, so you’ve copied the contents of the EnterpriseWiki.aspx Page Layout and now it’s time for some customizing. I found the control I want to move, so I’ll simply do a copy or cut/paste to the new spot.

    Step 6

    Check-in, publish, and approve the new Page Layout. Side note: I like to add the Check-In/Check-Out/Discard or Undo-Checkout buttons to all of my Office Applications’ Quick Access Toolbars for convenience.

    Step 7

    Almost there! Navigate to your publishing site, in this case the Enterprise Wiki Site, then go to Site Actions > Site Settings > Page layouts and site templates (Under Look and Feel). Here you’ll be able to make the new Page Layout available for use within the site.

    Step 8

    Go back to your site and edit the page that you’d like to change the layout for. On the Page tab of the Ribbon, click on Page Layout and select your custom Page Layout.

    Et voila! You just created a custom Page Layout using SharePoint Designer 2010, re-arranged a SharePoint control and managed to plan for the future by not modifying the out of the box template. That was a really simple example but I hope it helped to give you some ideas on how else you can customize Page Layouts within SharePoint 2010!


    Visio for Developers in Office 365

    In this post, I’ll introduce some of the new features of interest to developers in Visio 2013. Among these features are:

    • New file format
    • Robust updates to themes
• The change shape feature (that allows you to replace one shape with another while maintaining shape text)
    • New shape effects
    • Improvements to commenting
    • Coauthoring on SharePoint Server 2013
    • Customizable image clipping
    • Relative geometry
    • Support for Business Connectivity Services (BCS) data
    • Updates to Visio Services in Microsoft SharePoint Server 2013
    • Duplicate page feature

    At the end of the post, I provide you with some additional resources for both Visio and general Office development.

     

    New file format

    Visio 2013 introduces a new file format, based on the Open Packaging Conventions (OPC) standard (ISO 29500, Part 2) and the XML elements from the previous Visio XML file format (.vdx). It is a zipped, XML-based file format similar to the file formats used in other applications.

    Because the new file format is supported by both Visio 2013 and Visio Services in Microsoft SharePoint Server 2013, you can save a Visio drawing directly to a SharePoint Server library without having to first publish the file as a Visio Web Drawing (.vdw). Even so, Visio Services can still read and display Visio Web Drawing files.

    The new file format includes the following file types (by extension):

    • .vsdx (Visio drawing)
    • .vsdm (Visio macro-enabled drawing)
    • .vssx (Visio stencil)
    • .vssm (Visio macro-enabled stencil)
    • .vstx (Visio template)
    • .vstm (Visio macro-enabled template)

    By using existing support for reading and writing to the file format package (such as System.IO.Packaging) and for parsing XML (System.Xml.Linq), you can work with the new file formats programmatically.
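As a minimal sketch of that idea (the file path is a placeholder), the following PowerShell opens a .vsdx drawing as an OPC package and lists the parts it contains:

# System.IO.Packaging ships in the WindowsBase assembly
Add-Type -AssemblyName WindowsBase
$path = "C:\Drawings\Sample.vsdx"   # placeholder - point this at one of your own drawings
$package = [System.IO.Packaging.Package]::Open($path, [System.IO.FileMode]::Open, [System.IO.FileAccess]::Read)
# each part is one of the XML documents stored inside the drawing package
$package.GetParts() | ForEach-Object { $_.Uri.ToString() }
$package.Close()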

    Visio 2013 retains the ability to read the old file formats (.vsd, .vss, .vst, .vdx, .vsx, .vtx, .vdw, .vwi). Visio 2013 does not save to the previous Visio XML file format (.vdx). Solutions or tools that consume the previous Visio XML file format (.vdx) files may need to be refactored in order to read the new file format and its schemas.

    Visio Services retains the ability to display the Visio Web Drawing (.vdw) format in the browser. It now also renders the new Visio drawing (.vsdx) and Visio macro-enabled drawing (.vsdm) formats.

    For more information about the new file format, see the article How to: Manipulate the Visio 2013 file format programmatically.

    Themes

    Themes have been redesigned in Visio 2013, making use of a greater variety of effects and styles including the integration of Shape Art effects. Users can now decide on an overarching style by applying a theme, personalize the diagram with theme variants, and highlight individual shapes with Quick Styles. ShapeSheet developers can take advantage of these features with new functions and cells in the ShapeSheet.

    The user interface for applying theme variants is shown in the following figure.

     

     

    You can also manipulate themes at the Page, Shape, and Selection object level. New APIs for working with themes include Page.SetTheme method, Page.SetThemeVariant method, Shape.SetQuickStyle method, and the Selection.SetQuickStyle method.

    For more information about new VBA objects and members in Visio 2013, see the Visio Automation reference. For more information about the new ShapeSheet cells in Visio 2013, see the article What’s new for ShapeSheet developers in Visio 2013.

    Change Shape

Visio 2013 includes a shape replacement API that enables you to swap one or more shapes for another shape contained in a stencil, while retaining some of the local values from the original shape, such as the shape text, shape data, or shape formatting. Shape developers can update the ShapeSheet settings of their custom shapes to specify the Change Shape behavior for their shapes. Among the new APIs for Change Shape are the Shape.ReplaceShape and Selection.ReplaceShape methods and the ReplaceShapesEvent object.

    The Change Shape feature lets you easily change a shape (in this case, the green rectangle)…

     

    …to another shape, the green diamond.

    For more information about the Change Shape feature, see Eric Schmidt’s blog post, Change shapes in Visio 2013.

    For more information about new VBA objects and members in Visio 2013, see the Visio Automation reference. For more information about the new ShapeSheet cells in Visio 2013, see the article What’s new for ShapeSheet developers in Visio 2013.

    Shape effects

    New shape effects such as bevel, 3-D rotation, glow, reflection, and sketching have been added to Visio 2013. The ShapeSheet includes new cells for working with these effects. The following figure shows a shape to which effects have been applied.

    You can also use Office VBA objects such as TextFrame2, GlowFormat, and ReflectionFormat and their members to apply shape effects.

    For more information about the new ShapeSheet cells in Visio 2013, see the article What’s new for ShapeSheet developers in Visio 2013.

    Commenting

    Visio 2013 includes a new commenting framework. Comments can now be associated with a particular shape or page. Visio 2013 includes two new objects, Comments and Comment. New APIs for accessing comments programmatically include the Document.Comments, Page.Comments, Shape.Comments, and Page.ShapeComments properties.

    The following images show what comments looked like in Visio 2010 and what they look like in Visio 2013.

     

     

    Visio Services includes JavaScript APIs to read the comments from a page or shape in a diagram.

    Note: You can no longer access comments in the ShapeSheet.

    Coauthoring

    Visio 2013 includes the ability to co-author diagrams stored on SharePoint or OneDrive. Developers have access to the Document.AfterDocumentMerge event which provides information about diagram changes due to coauthoring. Solution developers also have the ability to disable coauthoring to suit their custom needs by using the NoCoauth cell on the Document ShapeSheet.

    Customizable image clipping

Visio 2013 supports defining a Custom Image Clipping path to crop images to any shape. This extends the capabilities of Visio 2010, which supported clipping images only in a rectangular way. This functionality is available in the ShapeSheet by using the ClippingPath cell in the Foreign Image Info section.

    Relative geometries

In previous versions of Visio, shape geometry was defined by formulas that depended on the height or width of the shape. For example, in Visio 2010 the vertices of many built-in Visio shapes were defined by multiplying the height or width of the shape by a constant. These shapes had Geometry sections that included MoveTo or LineTo rows (for example) with formulas like Width*1 and Height*0.

    Visio 2013 now supports relative geometry in the ShapeSheet. Shape developers can now use relative geometries to specify geometries as simple values or formulas, which multiply by the height or width automatically. You can now express Shape vertices by using constants, for instance—you no longer need to express vertices as multiples of the shape width or height. This makes it easier for you to create shapes that have better performance and smaller file sizes. New rows include the RelMoveTo and RelLineTo rows where the X and Y cell values are automatically multiplied by the width or height of the shape (respectively).

    Support for Business Connectivity Services (BCS) data

    Visio 2013 diagrams can now be connected to external lists on SharePoint Server 2013 servers. An external list is a content source external to SharePoint (for example, a SQL Server table) that has been connected to a SharePoint list by using Microsoft Business Connectivity Services (BCS). Visio Services supports the ability to refresh the Visio diagrams as the data updates.

    For more information about what’s new in Visio Services, see the article Visio Services in SharePoint 2013. For more information about Business Connectivity Services (BCS), see Business Connectivity Services in SharePoint 2013.

    Improvements in Visio Services

    Visio Services in Microsoft SharePoint Server 2013 includes many improvements. As mentioned previously, Visio Services supports the new Visio file format (.vsdx and .vsdm). Visio Services has expanded data refresh and recalculation, including the ability to recalculate formulas across an entire diagram.

    For more information about what’s new in Visio Services, see the article Visio Services in SharePoint 2013.

    Duplicate page

    You can now copy a page and all of its shapes within the same document in Visio 2013. Accordingly, the Page object has a new method, Duplicate, which duplicates the page and returns a new Page object.
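As a small sketch of the new method (driving Visio through COM automation from PowerShell, and assuming Visio 2013 is installed locally):

# start Visio, create a blank drawing, then duplicate its first page
$visio = New-Object -ComObject Visio.Application
$doc = $visio.Documents.Add("")
$copy = $doc.Pages.Item(1).Duplicate()   # Duplicate is new in Visio 2013 and returns the new Page object
Write-Host "Duplicated page name:" $copy.Name
$visio.Quit()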

    Additional resources

    Great tool to handle storing constants in SharePoint Development

     

     

    Today I want to introduce something I’ve been working on recently which could be of use to you if you’re a SharePoint developer. Often when developing SharePoint solutions which require coding, the developer faces a decision about what to do with values he/she doesn’t want to ‘hardcode’ into the C# or VB.Net code. Example values for a SharePoint site/application/control could be:

    ‘AdministratorEmail’ – ‘bob@somewhere.com’
    ‘SendWorkflowEmails’ – ‘true’

    Generally we avoid hardcoding such values, since if the value needs to be changed we have to recompile the code, test and redeploy. So, alternatives to hardcoding which folks might consider could be:

    • store values in appSettings section of web.config
    • store values in different places, e.g. Feature properties, event handler registration data, etc.
    • store values in a custom SQL table
    • store values in a SharePoint list

    Personally, although I like the facility to store complex custom config sections in web.config, I’m not a big fan of appSettings. If I need to change a value, I have to connect to the server using Remote Desktop and open and modify the file – if I’m in a server farm with multiple front-ends, I need to repeat this for each, and I’m also causing the next request to be slow because the app domain will unload to refresh the config. Going through the other options, the second isn’t great because we’d need to be reactivating Features/event receiver registrations every time (even more hassle), though the third (using SQL) is probably fine, unless I want a front-end to make modifying the values easier, in which case we’d have some work to do.

    So storing config values in a SharePoint list could be nice, and is the basis for my solution. Now I’d bet lots of people are doing this already – it’s pretty quick to write some simple code to fetch the value, and this means we avoid the hardcoding problem. However, the Config Store ‘framework’ goes further than this – for one thing it’s highly-optimized so you can be sure there is no negative performance impact, but there are also some bonuses in terms of where it can used from and the ease of deployment. So what I hope to show is that the Config Store takes things quite a bit further than just a simple implementation of ‘retrieving config values from a list’.

    Details (reproduced from the Codeplex site)

    The list used to store config items looks like this (N.B. the items shown are my examples, you’d add your own):

    ConfigStoreList.jpg
    There is a special content type associated with the list, so adding a new item is easy:

    ConfigItemContentType.jpg

    ..and the content type is defined so that all the required fields are mandatory:

    AddNewConfigItem.jpg

    Retrieving values

    Once a value has been added to the Config Store, it can be retrieved in code as follows (you’ll also need to add a reference to the Config Store assembly and ‘using’ statement of course):

    string sAdminEmail = ConfigStore.GetValue("MyApplication", "AdminEmail");

Note that there is also a method to retrieve multiple values with a single query. This avoids the need to perform multiple queries, so it should be used for performance reasons if your control/page will retrieve many items from the Config Store – think of it as a best practice. The code is slightly more involved, but should make sense when you think it through. We create a generic List of 'ConfigIdentifiers' (a ConfigIdentifier specifies the category and name of the item, e.g. 'MyApplication', 'AdminEmail') and pass it to the 'GetMultipleValues()' method:

    List<ConfigIdentifier> configIds = new List<ConfigIdentifier>();

    ConfigIdentifier adminEmail = new ConfigIdentifier("MyApplication", "AdminEmail");
    ConfigIdentifier sendMails = new ConfigIdentifier("MyApplication", "SendWorkflowEmails");
    configIds.Add(adminEmail);
    configIds.Add(sendMails);
    Dictionary<ConfigIdentifier, string> configItems = ConfigStore.GetMultipleValues(configIds);
    string sAdminEmail = configItems[adminEmail];
    string sSendMails = configItems[sendMails];

    ..the method returns a generic Dictionary containing the values, and we retrieve each one by passing the respective ConfigIdentifier we created earlier to the indexer.

    Other notes

• All items are wrapped up in a Solution/Feature, so there is no need to manually create site columns/content types/the Config Store list etc. There is also an install script for you to easily install the Solution.

• Config items are cached in memory, so where possible there won't even be a database lookup!

• The Config Store is also designed to operate where no SPContext is present, e.g. a list event receiver. In this scenario, it will look for values in your SharePoint web application's web.config file to establish the URL for the site containing the Config Store (N.B. these web.config keys get automatically added when the Config Store is installed to your site). This also means it can be used outside your SharePoint application, e.g. a console app.

• The Config Store can be moved from its default location of the root web for your site. For example, sites I create usually have a hidden 'config' web, so I put the Config Store in there, along with other items. (To do this, create a new list (in whatever child web you want) from the 'Configuration Store list' template (added during the install), and modify the 'ConfigWebName'/'ConfigListName' keys which were added to your web.config to point to the new location. As an alternative, if you already added 100 items which you don't want to recreate, you could use my other tool, the SharePoint Content Deployment Wizard at http://www.codeplex.com/SPDeploymentWizard, to move the list.)

• All source code and Solution/Feature files are included, so if you want to change anything, you can.

• Installation instructions are in the readme.txt in the download.

                Summary

                Hopefully you might agree that the Config Store goes beyond a simple implementation of ‘storing config values in a list’. For me, the key aspects are the caching and the fact that the entire solution is ‘joined up’, so it’s easy to deploy quickly and reliably as a piece of functionality. Many organizations are probably at the stage where their SharePoint codebase is becoming mature and perhaps based on a foundation of a ‘core API’ – the Config Store could provide a useful piece in such a toolbox.

                You can download the Config Store framework and all the source code from www.codeplex.com/SPConfigStore.

                How To : Use Powershell Scripts in Office 365 through the SharePoint CSOM

                When we first started to work with Office 365, I remember being quite concerned at the lack of PowerShell cmdlets – basically all the commands we’re used to using do not exist there. Here’s a gratuitous graph to illustrate the point:

                image

So yes, nearly 800 PowerShell commands in SP2013 (up from around 530 in SP2010), down to a measly 30 in SharePoint Online. And those 30 mainly cover basic operations with sites, users and permissions – no scripting of, say, Managed Metadata, user profiles, search and so on. It's true that some of these things are now available at site-collection scope (needed, of course, when you don't have a true "Central Admin" site), but there are still "tenant-level" settings that you want to script rather than change manually through the UI.

                So what’s a poor developer/administrator to do?

                The answer is to write PowerShell as you always did, but embed CSOM code in there. More examples later, but here’s a small illustration:

                # get the site collection scoped Features collections (e.g. to activate one) – not showing how to obtain $clientContext here..
                $siteFeatures = $clientContext.Site.Features
                $clientContext.Load($siteFeatures)
                $clientContext.ExecuteQuery()

So we're using the .NET CSOM, but instead of C# we are using PowerShell's ability to call any .NET object (indeed, nearly every script will use PowerShell's New-Object command). All the things we came to love about PowerShell are back on the table:

                • Scripts can be easily amended, no need to recompile (or open Visual Studio)
                • We can debug with PowerGui or PowerShell ISE
• We can leverage other things PowerShell is good at, e.g. easily reading from XML files, using other PowerShell modules and other APIs (including .NET) etc.

                Of course, we can only perform operations where the method exists in the .NET CSOM – that’s the boundary of what we can do.

                Getting started

                Step 1 – understand the landscape

                The first thing to understand is that there are actually 3 different approaches for scripting against Office 365/SharePoint Online, depending on what you need to do. It might just be me, but I think that when you start it’s easy to get confused between them, or not fully appreciate that they all exist. The 3 approaches I’m thinking of are:

                • SharePoint Online cmdlets
                • MSOL cmdlets
                • PowerShell + CSOM

This post focuses on the last flavor. I also wrote a short companion post about the overall landscape, with some details/examples on the other flavors: Using SharePoint Online and MSOL cmdlets in PowerShell with Office 365.

Step 2 – prepare the machine you will use to run scripts against SharePoint Online

                Option 1 – if you will NOT run scripts from a SP2013 box (e.g. a SP2013 VM):

                You need to obtain the SharePoint DLLs which comprise the .NET CSOM, and copy them to a folder on your machine – your scripts will reference these DLLs.

1. Go to any SharePoint 2013 server, and copy any DLL which starts with Microsoft.SharePoint.Client*.dll from the C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI folder.
2. Store them in a folder on your machine, e.g. C:\Lib – make a note of this location.

                CSOM DLLs

                Option 2 – if you WILL run scripts from a SP2013 box (e.g. a SP2013 VM):

                In this case, there is no need to copy the DLLs – your scripts will reference them in the original SharePoint install location (C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI).

                The top of your script – referencing DLLs and authentication

                Each .ps1 file which calls the SharePoint CSOM needs to deal with two things before you can use the API – loading the CSOM types, and authenticating/obtaining a ClientContext object. So, you’ll need this at the top of your script:

# replace these details (also consider using Get-Credential to enter the password securely as the script runs)..
$username = "SomeUser@SomeOrg.onmicrosoft.com"
$password = "SomePassword"
$url = "https://SomeOrg.sharepoint.com/SomeSite"   # the site collection URL the scripts will connect to
$securePassword = ConvertTo-SecureString $password -AsPlainText -Force
# the path here may need to change if you used e.g. C:\Lib..
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"
# note that you might need some other references (depending on what your script does), for example:
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Taxonomy.dll"
# connect/authenticate to SharePoint Online and get a ClientContext object..
$clientContext = New-Object Microsoft.SharePoint.Client.ClientContext($url)
$credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($username, $securePassword)
$clientContext.Credentials = $credentials
if (!$clientContext.ServerObjectIsNull.Value)
{
    Write-Host "Connected to SharePoint Online site: '$url'" -ForegroundColor Green
}

                In the scripts which follow, we’ll include this “top of script” stuff by dot-sourcing TopOfScript.ps1 in every script below – you could follow a similar approach (perhaps with a different name!) or simply paste that stuff into every script you create. If you enter a valid set of credentials and URL, running the script above should see you ready to rumble:

                PS CSOM got context

                Script examples

                Activating a Feature in SPO

                Something you might want to do at some point is enable or disable a Feature using script. The script below, like the others that follow it, all reference my TopOfScript.ps1 script above:

. .\TopOfScript.ps1
[bool]$enable = $true
[bool]$force = $false
# using the Minimal Download Strategy Feature here..
$featureId = [GUID]("87294C72-F260-42f3-A41B-981A2FFCE37A")
# ..and working with the web-scoped Features – use $clientContext.Site.Features for site-scoped Features
$webFeatures = $clientContext.Web.Features
$clientContext.Load($webFeatures)
$clientContext.ExecuteQuery()
if ($enable)
{
    $webFeatures.Add($featureId, $force, [Microsoft.SharePoint.Client.FeatureDefinitionScope]::None)
}
else
{
    $webFeatures.Remove($featureId, $force)
}
try
{
    $clientContext.ExecuteQuery()
    if ($enable)
    {
        Write-Host "Feature '$featureId' successfully activated.."
    }
    else
    {
        Write-Host "Feature '$featureId' successfully deactivated.."
    }
}
catch
{
    Write-Error "An error occurred whilst activating/deactivating the Feature. Error detail: $($_)"
}

                PS CSOM activate feature

                Enable side-loading (for app deployment)

Along very similar lines (because it also involves activating a Feature) is the idea of enabling "side-loading" on a site. By default, if you're developing a SharePoint app it can only be F5 deployed from Visual Studio to a site created from the Developer Site template, but by enabling "side-loading" you can do it on (say) a team site too. Since the Feature isn't visible (in the UI), you'll need a script like this:

. .\TopOfScript.ps1
[bool]$enable = $true
[bool]$force = $false
# this is the side-loading Feature ID..
$featureId = [GUID]("AE3A1339-61F5-4f8f-81A7-ABD2DA956A7D")
# ..and this one is site-scoped, so using $clientContext.Site.Features..
$siteFeatures = $clientContext.Site.Features
$clientContext.Load($siteFeatures)
$clientContext.ExecuteQuery()
if ($enable)
{
    $siteFeatures.Add($featureId, $force, [Microsoft.SharePoint.Client.FeatureDefinitionScope]::None)
}
else
{
    $siteFeatures.Remove($featureId, $force)
}
try
{
    $clientContext.ExecuteQuery()
    if ($enable)
    {
        Write-Host "Feature '$featureId' successfully activated.."
    }
    else
    {
        Write-Host "Feature '$featureId' successfully deactivated.."
    }
}
catch
{
    Write-Error "An error occurred whilst activating/deactivating the Feature. Error detail: $($_)"
}

                PS CSOM enable side loading

                Iterating webs

                Sometimes you might want to loop through all the webs in a site collection, or underneath a particular web:

. .\TopOfScript.ps1
$rootWeb = $clientContext.Web
$childWebs = $rootWeb.Webs
$clientContext.Load($rootWeb)
$clientContext.Load($childWebs)
$clientContext.ExecuteQuery()
function processWeb($web)
{
    $lists = $web.Lists
    $clientContext.Load($web)
    $clientContext.ExecuteQuery()
    Write-Host "Web URL is" $web.Url
}
foreach ($childWeb in $childWebs)
{
    processWeb($childWeb)
}

                PS CSOM iterate webs

(Worth noting that you also see SharePoint-hosted app webs in the image above, since these are just subwebs, albeit ones which get accessed on the app domain URL rather than the actual host site's web application URL.)

                Iterating webs, then lists, and updating a property on each list

Or how about extending the sample above to not only iterate webs, but also the lists in each? The property I'm updating on each list is the EnableVersioning property, but you could easily use any other property or method in the same way:

. .\TopOfScript.ps1
$enableVersioning = $true
$rootWeb = $clientContext.Web
$childWebs = $rootWeb.Webs
$clientContext.Load($rootWeb)
$clientContext.Load($childWebs)
$clientContext.ExecuteQuery()
function processWeb($web)
{
    $lists = $web.Lists
    $clientContext.Load($web)
    $clientContext.Load($lists)
    $clientContext.ExecuteQuery()
    Write-Host "Processing web with URL " $web.Url
    foreach ($list in $web.Lists)
    {
        Write-Host "-- " $list.Title
        # leave the "Master Page Gallery" and "Site Pages" lists alone, since these have versioning enabled by default..
        if ($list.Title -ne "Master Page Gallery" -and $list.Title -ne "Site Pages")
        {
            Write-Host "---- Versioning enabled: " $list.EnableVersioning
            $list.EnableVersioning = $enableVersioning
            $list.Update()
            $clientContext.Load($list)
            $clientContext.ExecuteQuery()
            Write-Host "---- Versioning now enabled: " $list.EnableVersioning
        }
    }
}
foreach ($childWeb in $childWebs)
{
    processWeb($childWeb)
}

                PS CSOM iterate lists enable versioning

                Import search schema XML

In SharePoint 2013 and Office 365, many aspects of search configuration (such as Managed Properties and Crawled Properties, Query Rules, Result Sources and Result Types) can be exported and imported between environments as an XML file. The sample below shows the import operation handled with PS + CSOM:

. .\TopOfScript.ps1
# need some extra types bringing in for this script..
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Search.dll"
# TODO: replace this path with yours..
$pathToSearchSchemaXmlFile = "C:\COB\Cloud\PS_CSOM\XML\COB_TenantSearchConfiguration.xml"
# we can work with search config at the tenancy or site collection level:
#$configScope = "SPSiteSubscription"
$configScope = "SPSite"
$searchConfigurationPortability = New-Object Microsoft.SharePoint.Client.Search.Portability.SearchConfigurationPortability($clientContext)
$owner = New-Object Microsoft.SharePoint.Client.Search.Administration.SearchObjectOwner($clientContext, $configScope)
[xml]$searchConfigXml = Get-Content $pathToSearchSchemaXmlFile
$searchConfigurationPortability.ImportSearchConfiguration($owner, $searchConfigXml.OuterXml)
$clientContext.ExecuteQuery()
Write-Host "Search configuration imported" -ForegroundColor Green

                PS CSOM import search schema

                Summary

As you can hopefully see, there's lots you can accomplish with the PowerShell and CSOM combination. Anything that can be done with the CSOM API can be wrapped into a script, and you can build up a library of useful PowerShell snippets just like the old days. There are some interesting things that you CANNOT do with CSOM (such as automating the process of uploading/deploying a sandboxed WSP to Office 365), but there ARE approaches for solving even these problems, and I'll most likely cover this (and our experiences) in future posts.

                A final idea on the PowerShell + CSOM front is the idea that you can have “hybrid” scripts which can deal with both SharePoint Online and on-premises SharePoint. For example, on my current project everything we build must be deployable to both SPO and on-premises, and our scripts take a “DeploymentTarget” parameter where the values can be “Online” or “OnPremises”. There are some differences (i.e. branching) in the scripts, but for many operations the same commands can be run.
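As a rough sketch of that pattern (the parameter name comes from the text above, but the messages and branch bodies are purely illustrative), the top of such a script might look like this:

param(
    [ValidateSet("Online", "OnPremises")]
    [string]$DeploymentTarget = "Online"
)
if ($DeploymentTarget -eq "Online")
{
    Write-Host "Deploying to SharePoint Online - the ClientContext will use SharePointOnlineCredentials.."
}
else
{
    Write-Host "Deploying to on-premises SharePoint - the ClientContext will use default Windows credentials.."
}
# ..the rest of the script runs the same CSOM calls against either target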

                Introduction to the Microsoft Monitoring Agent

You can now download the Microsoft Monitoring Agent and install it on any server in your enterprise that meets the minimum installation requirements.

                The Microsoft Monitoring Agent is a simple installation that is included with System Center Operations Manager 2012 R2 or can be installed separately to be used in a standalone manner. When using the Microsoft Monitoring Agent as a standalone tool the data captured is available as a Visual Studio IntelliTrace file. These files can be opened with Visual Studio Ultimate 2013 Release Candidate.

                The rest of this post will focus on using the Microsoft Monitoring Agent in a standalone mode.

                Using MMA in standalone mode

                After installing the Microsoft Monitoring Agent you enable monitoring of your IIS hosted application using PowerShell commands. Let’s say that I have installed my application (FabrikamFiber) under the default application node on my IIS server as shown below.

                To start monitoring this application I use the following PowerShell command running as Administrator:
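The command is along these lines; this is a sketch only, since the application name and output folder are examples from this walkthrough, and the full parameter set is documented under Get-Help Start-WebApplicationMonitoring:

# monitor the FabrikamFiber.Web application under the Default Web Site in Monitor mode, writing data to C:\MonitoringLogs
Start-WebApplicationMonitoring "Default Web Site/FabrikamFiber.Web" Monitor "C:\MonitoringLogs"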

                A couple of things to notice:

                • The name of the application that I am monitoring includes the site name (Default Web Site) as well as the application name (FabrikamFiber.Web).
                • I chose the monitor mode. My options are to use monitor, trace, or custom. I will focus on the monitor mode in this blog post.
                • The output path is a file location that I have already created. You may wish to make this location shared so that both developers and operations team members can access the files that get created in that location.

Monitoring mode has been designed to have minimal impact on the running application and only records data when the Microsoft Monitoring Agent detects undesirable behavior, such as when problematic exceptions occur or performance is poor. The default settings record exceptions that bubble up to the global exception handler and pages that take longer than 5 seconds to respond from the server. In addition to the conditions that trigger data collection, the Microsoft Monitoring Agent will also limit the frequency at which it records data to a maximum of 60 similar events per day.

                There are a couple of approaches that you can take to use the Microsoft Monitoring Agent in monitoring mode effectively. One method is to gather an IntelliTrace file after a problem has been reported. A more proactive approach is to gather an IntelliTrace file on a regular schedule, such as daily, to check for problematic behavior in your application that may not have been reported by your users.

                Generating and Using the IntelliTrace file

                To produce an IntelliTrace file you use the Checkpoint-WebApplicationMonitoring command as shown below:
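Again as a sketch, with the same application name assumption as above:

# write out an IntelliTrace (.iTrace) snapshot for the monitored application without stopping monitoring
Checkpoint-WebApplicationMonitoring "Default Web Site/FabrikamFiber.Web"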

                The IntelliTrace file can then be opened using Visual Studio 2013 Release Candidate. Any performance violations are listed in the Performance Data section of the IntelliTrace Summary page and exceptions are shown in the Exceptions section.

                Diagnosing exceptions remains very similar to the experience available with Visual Studio 2012. Diagnosing performance events is detailed in a separate blog post.

                For detailed information on the Microsoft Monitoring Agent refer to TechNet.

                For expanded scenarios and usage information refer to MSDN.

                Client-side PowerShell for SharePoint Online and Office 365

SharePoint PowerShell is a PowerShell API for SharePoint 2010, 2013 and Online. It is very useful for Office 365 and private clouds where you don't have access to the physical server.

                Image

The API uses the Managed .NET Client-Side Object Model (CSOM) of SharePoint 2013. It's a library of PowerShell scripts, and at its core it talks to the CSOM DLLs.

                Examples :

                Import-Module .\spps.psm1 
                
                Initialize-SPPS -siteURL "https://example.sharepoint.com/" -online $true -username "sitecollectionadmin@example.onmicrosoft.com" -password "password"
                Example
                # Include SPPS
                Import-Module .\spps.psm1 
                
                # Setup SPPS
                Initialize-SPPS -siteURL "https://example.sharepoint.com/" -online $true -username "sitecollectionadmin@example.onmicrosoft.com" -password "password"
                
                # Activate Publishing Site Feature
                Activate-Feature -featureId "f6924d36-2fa8-4f0b-b16d-06b7250180fa" -force $false -featureDefinitionScope "Site"
                
                #Activate Publishing Web Feature
                Activate-Feature -featureId "94c94ca6-b32f-4da9-a9e3-1f3d343d7ecb" -force $false -featureDefinitionScope "Web"

                Features

                • Site Collection
                  • Test connection
                • Site
                  • Manage subsites
                  • Manage permissions
                • Lists and Document Libraries
                  • Create list and document library
                  • Manage fields
                  • Manage list and list item permissions
                  • Upload files to document library (including folders)
                  • Add items to a list with CSV
                  • Add and remove list items
                  • File check-in and check-out
                • Master pages
                  • Set system master page
                  • Set custom master page
                • Features
                  • Activate features
                • Web Parts
                  • Add Web Parts to page
                • Users and Groups
                  • Set site permissions
                  • Set list permissions
                  • Set document permissions
                  • Create SharePoint groups
                  • Add users and groups to SharePoint groups
                • Solutions
                  • Upload sandboxed solutions
                  • Activate sandboxed solutions

Contact me at tomas.floyd@outlook.com for this and more Azure, SharePoint & Office 365 Tools, Web Parts and Apps

                FREE – SharePoint 2013 Search Query Tool

                This tool helps you understand and learn how the available parameters on the Search REST service should be formatted.
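For instance, a basic keyword query is issued as an HTTP GET against the Search REST endpoint, along the lines of the following sketch (the server URL is a placeholder, and -UseDefaultCredentials only applies on-premises; SharePoint Online requires additional authentication on top of this):

# ask the Search REST service for up to 5 results matching "sharepoint", returned as JSON
$uri = "https://yourserver/sites/yoursite/_api/search/query?querytext='sharepoint'&rowlimit=5"
Invoke-RestMethod -Uri $uri -UseDefaultCredentials -Headers @{ Accept = "application/json;odata=verbose" }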

                 

                What does this tool offer:

                • Issue HTTP GET or POST search queries.
                • See how the different Query parameters are formatted.
                • Authenticate using different users to debug security trimming issues.
                • Use against your tenant on SharePoint Online and authenticate using your SPO User ID.

                 

                 

                Go get it from:

                http://sp2013searchtool.codeplex.com/

                 

                 

                modern.IE – Great tool for testing Web Sites! Get it now!

The modern.IE scan analyzes the HTML, CSS, and JavaScript of a site or application for common coding issues. It warns about practices such as incomplete specification of CSS properties, invalid or incorrect doctypes, and obsolete versions of popular JavaScript libraries.

                It’s easiest to use modern.IE by going to the modern.IE site and entering the URL to scan there. To customize the scan, or to use the scan to process files behind a firewall, you can clone and build the files from this repo and run the scan locally.

                How it works

                The modern.IE local scan runs on a system behind your firewall; that system must have access to the internal web site or application that is to be scanned. Once the files have been analyzed, the analysis results are sent back to the modern.IE site to generate a complete formatted report that includes advice on remediating any issues. The report generation code and formatted pages from the modern.IE site are not included in this repo.

                Since the local scan generates JSON output, you can alternatively use it as a standalone scanner or incorporate it into a project’s build process by processing the JSON with a local script.

                The main service for the scan is in the app.js file; it acts as an HTTP server. It loads the contents of the web page and calls the individual tests, located in /lib/checks/. Once all the checks have completed, it responds with a JSON object representing the results.

                Installation and configuration

                • Install node.js. You can use a pre-compiled Windows executable if desired. Version 0.10 or higher is required.
                • If you want to modify the code and submit revisions, install Git (or GitHub for Windows if you prefer) and then clone this repository. If you just want to run the scan locally, download the latest version from here and unzip it.
                • Install dependencies. From the Modern.ie subdirectory, type: npm install
                • If desired, set an environment variable PORT to define the port the service will listen on. By default the port number is 1337. The Windows command to set the port to 8181 would be: set PORT=8181
                • Start the scan service: From the Modern.ie subdirectory, type: node app.js and the service should respond with a status message containing the port number it is using.
                • Run a browser and go to the service’s URL; assuming you are using the default port and are on the same computer, the URL would be: http://localhost:1337/
                • Follow the instructions on the page.

                Testing

                The project contains a set of unit tests in the /test/ directory. To run the unit tests, type grunt nodeunit.

                JSON output

                Once the scan completes, it produces a set of scan results in JSON format:

                {
                    "testName" : {
                        "testName": "Short description of the test",
                        "passed" : true/false,
                        "data": { /* optional data describing the results found */ }
                    }
                }

                The data property will vary depending on the test, but will generally provide further detail about any issues found.
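                If you want to consume those results from a build script, the shape above is all you need. The following PowerShell sketch calls the local scan service and lists the failing checks; the port and the "url" query-string parameter are assumptions (check app.js for the exact parameter name), and the target address is a placeholder.

                # Hedged sketch: call the local modern.IE scan service and report failing checks.
                # Assumes the default port and that the target address is passed as a "url" parameter.
                $scanUrl = "http://localhost:1337/?url=http://intranet.contoso.com"
                $results = Invoke-RestMethod -Uri $scanUrl

                # Every property of the JSON object is one test; list the ones that did not pass
                $results.PSObject.Properties |
                    Where-Object { -not $_.Value.passed } |
                    ForEach-Object { "{0}: {1}" -f $_.Name, $_.Value.testName }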

                Great tool available for Responsive Web Pages in SharePoint!!

                Web designers are crucial for a successful SharePoint implementation. We all know that. With this in mind, I wanted to write an article for our SharePoint web designers out there. Not being an authority on the subject, I decided to ask someone who has been working in web design for some time. By asking my contacts, I got the email address of an expert in SharePoint branding and UX customization. Eric Overfield was the name on the contact card. I set up a conference call, and very soon we were chatting and discussing UX, branding, artists, engineers, and SharePoint.

                The conversation quickly turned to devices and how to make SharePoint work as well as possible in this new and changing set of displays. Eric’s answer was: responsive web design. Responsive web design allows us to look at a site like a fluid grid. The fluid, dynamic grid adapts itself to fit the information in display resolutions as different as those in a phone, a tablet, and a full desktop monitor. Keep in mind that the mix of display resolutions doubles if you consider landscape and portrait orientations available in all these devices.

                The author of the original post about responsive web design, Ethan Marcotte, provided a reference site to demo the concepts explained in his post. In this demo, you can observe how the elements in the page rearrange themselves to fit the current resolution as you resize your browser window. The demo left me wondering how a SharePoint website would react to different resolutions by using the fluid grid characteristic of responsive frameworks. Fortunately, Eric, along with some other people, developed Responsive SharePoint. Responsive SharePoint is a CodePlex project that you can use to try responsive frameworks on your SharePoint website.

                I followed the provided instructions to install the resources by using Design Manager on an out-of-the-box publishing site. In no time, I was looking at how the site dynamically reacted to different resolutions as I resized my browser window. I decided to test the project by using the following display resolutions:

                • 1200×1900 (desktop, portrait orientation)
                • 768×1366 (tablet, portrait orientation)
                • 480×800 (smartphone, portrait orientation)

                The results were amazing. Within 10 minutes, I had a SharePoint website that automatically adapts to display resolutions commonly used in devices. The following figure compares the website in commonly used display resolutions:

                Figure 1. Comparison of resolutions of the SharePoint website using a responsive framework

                How is this achieved?

                In this post, I can only explain that Responsive SharePoint uses media queries to match the width of the display in the device and then applies a set of styles to present the content in the available space. For this to work, you need a browser that supports media queries. The latest versions of the major browsers support this functionality. The following code example shows how to declare media queries:

                @media (min-width: 769px) and (max-width: 979px) {
                    /*
                        Styles for display width 
                        between 769 and 979 pixels
                    */
                }
                
                @media (max-width: 768px) {
                    /*
                        Styles for display width 
                        equal to 768 pixels and thinner
                    */
                }
                
                @media (min-width: 1200px) {
                    /*
                        Styles for display width 
                        equal to 1200 pixels and wider
                    */
                }

                Of course there is much more to it. You can learn more by browsing the Responsive SharePoint CodePlex project.

                The new design and branding features in SharePoint 2013 make it easy to create and edit your web design, including responsive designs. You can even use the tools you are familiar with by mapping a network drive to the SharePoint 2013 Master Page Gallery. In my case, I used Microsoft Expression Web 4 to browse and edit the master pages and CSS files.

                Looking for the SharePoint Developer Dashboard? Look no more!

                This tool is useful for measuring the behaviour and performance of individual pages.

                It can be used to monitor page load times and overall performance.

                It has three states:

                • On,
                • Off,
                • On demand.

                Activating the Developer Dashboard

                The Developer Dashboard is a utility that is available in all SharePoint 2010 versions and can be enabled in a few different ways:

                • PowerShell
                • STSADM.exe
                • SharePoint Object Model (APIs)

                Activate the Developer Dashboard using STSADM.EXE

                The Developer Dashboard state can be toggled with the following STSADM commands:

                • ON:

                STSADM -o setproperty -pn developer-dashboard -pv on

                • OFF:

                STSADM -o setproperty -pn developer-dashboard -pv off

                • ON DEMAND:

                STSADM -o setproperty -pn developer-dashboard -pv ondemand

                Activate the Developer Dashboard using PowerShell:

                $DevDashboardSettings = [Microsoft.SharePoint.Administration.SPWebService]::ContentService.DeveloperDashboardSettings
                $DevDashboardSettings.DisplayLevel = 'OnDemand'
                $DevDashboardSettings.RequiredPermissions = 'EmptyMask'
                $DevDashboardSettings.TraceEnabled = $true
                $DevDashboardSettings.Update()

                Activate the Developer Dashboard using the SharePoint Object Model

                using Microsoft.SharePoint.Administration;

                SPWebService svc = SPContext.Current.Site.WebApplication.WebService;
                svc.DeveloperDashboardSettings.DisplayLevel = SPDeveloperDashboardLevel.OnDemand; // On, OnDemand or Off
                svc.DeveloperDashboardSettings.Update();

                Where is it?


                You can see the Developer Dashboard button in the top right.

                Developer Dashboard border colors

                • Green: the page is generally loading quickly enough not to be a real problem.
                • Yellow: there is a slight delay that is worth investigating.
                • Red: the page is slow enough that you should look into it immediately.


                In this example, the Developer Dashboard has a yellow border.

                Publishing apps for Office and SharePoint to Windows Azure Websites

                This post will focus on provider-hosted apps for SharePoint and apps for Office. Provider-hosted (as opposed to SharePoint-hosted or Autohosted) means that the developer is responsible for hosting the web content – which is precisely where Azure Websites can help. At the end of the post, I will also look at advanced topics, including options for publishing to a non-Azure environment (like an on-premises server).

                Direct web publishing to Azure

                Creating a profile

                Suppose you have an app for SharePoint or an app for Office that you’re ready to publish for the first time. To begin publishing your app, choose the app for SharePoint or app for Office project, and choose “Publish”.

                Figure 1. Publish menu in Solution Explorer

                A guided publishing experience will appear, as shown below.

                Figure 2. Guided publishing experience

                For a new project, there is no current publishing profile. You can create one by selecting <New…> from the profile dropdown, which will open the following dialog box.

                Figure 3. Creating a new publishing profile

                If you’re publishing to Azure, choose the “download your publishing profile” link, and you’ll be redirected to the Azure portal. There, if you have not done so already, you can create a new website by choosing +NEW at the bottom-left corner of the portal. The bottom portion of the screen will expand, allowing you to create a new website via the Quick Create or Custom Create menu items.

                Figure 4. Creating new website on Azure portal
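                If you prefer scripting to the portal, the website can also be created with the classic Azure PowerShell module of that era. This is only a sketch; the site name and location below are placeholders.

                # Sketch: create the Azure Website from PowerShell instead of the portal (classic module).
                Add-AzureAccount                                        # sign in to the subscription
                New-AzureWebsite -Name "contoso-app-web" -Location "West Europe"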

                Once the website entity is created, choose it from the list of websites to reveal the website details. Then choose Download the publish profile and save the profile to your computer. The profile contains all the information necessary to deploy your web content, including any auxiliary information like linked database connections.

                Figure 5. Downloading the publish profile from the Azure portal

                Back in the Visual Studio dialog box, and with the import publishing profile option still selected, choose the “browse” button and browse to the newly-downloaded file. Depending on the type of app:

                • For apps for Office, the profile is now complete.
                • For apps for SharePoint, you can now configure the Client ID and Client Secret on the second page of the wizard. These values uniquely identify your app to SharePoint, and allow the app to access SharePoint data. Client IDs and Secrets are generated and registered automatically when you debug your app, but they must be registered in a more permanent fashion when publishing the app. To do so:
                  • If the app will be sold through the Office Store, register it via the Seller Dashboard.
                  • If the app will be distributed through a corporate catalog, register it on your SharePoint site by using the AppRegNew.aspx page (/_layouts/15/AppRegNew.aspx).

                At the completion of either registration process, you will be granted a Client ID and Client Secret. Transfer those values into the Profile-creation dialog, and then choose Finish.

                Deploying and packaging

                Once the profile is set, the publishing buttons will activate. You now have a choice of deploying the web project and/or packaging the app. When publishing for the first time, you will need to do both – but it generally makes sense to start with deploying the web project first.

                Deploy your web project

                Deploying your web project is exactly what it sounds like: it will publish the entire contents of your web project (but not the SharePoint app or Office manifest) to the web. To do this, choose deploy your web project and you will see the familiar web publishing experience – complete with Preview, deployment settings, and more. The Connection tab has been pre-filled with information from your publish profile, and you can go to Settings to customize the publish configuration and options like Remove additional files at destination. Note that if your project requires a database, you can set it on the Settings page of the Wizard – and that, if your publish profile came from Azure, you can simply choose the database from the dropdown list.

                Figure 6. Deploying a web project to Azure

                The Preview functionality is helpful to ensure you’re publishing the right set of files. By choosing a file in the Preview list, you can see the impending changes that you’re about to commit to your live site.

                Figure 7. Preview functionality in Visual Studio

                Packaging your app

                Once the web project is deployed, packaging the app is designed to be simple, and most fields should be pre-populated. If you used a publishing profile, the URL will already be pre-populated, though you’ll need to change the URL from “http” to “https”. Note that with Azure Websites, https hosting is automatically included for any website hosted on *.azurewebsites.net (for custom domains or other hosting providers, you may need to follow additional steps).

                For apps for Office, that’s all you need: Just choose Finish, and a manifest file that points to your live web content will get generated for you. For apps that you wish to sell on the Office Store, see the next section. Otherwise, if it’s just an in-house app, you can upload the app to a file share or to a corporate catalog.

                For apps for SharePoint, you will need to provide or confirm the Client ID, which you may have already entered during the profile-creation step. After that, click “finish” – and an app package will get created for you. Again, see the next section for apps headed for the Office Store. Otherwise, if you only intend to distribute the app to users of your SharePoint site, follow the documentation for uploading the app to a SharePoint corporate catalog.

                Publishing to the Office Store

                After your app package (apps for SharePoint) or manifest file (apps for Office) is created, you can use the Visit the Seller Dashboard button to get started with publishing to the Office Store.

                For apps for Office, you can also run your app through a validation utility, which will catch common mistakes (like not specifying required information in the manifest). This will save you time and hassle when submitting the app to the Store.

                Figure 8. Validation utility for apps for Office

                Upgrading an app

                When it comes time to upgrade an existing app that you have already published, what steps do you need to take?

                For both apps for Office and apps for SharePoint, if all that you’ve updated is just in the web project, you can just re-publish the web content via the Deploy your web project button. These changes will go live immediately, and you’re fully in control of deploying these whenever you’d like.

                If you made changes to the app manifest (apps for Office and apps for SharePoint) or if you have modified any SharePoint artifacts (lists, event receivers, or anything outside the web project), you need to re-publish those artifacts instead via the Package the app button. If your app is listed on the Office Store, you will then need to re-submit the newly produced app package or app manifest to the Store, so it may take a few days before those changes go live – and remember that applying an update is at the discretion of the user.

                In general, remember to be considerate of the upgrade impact when modifying anything outside of the web project. Especially for apps for SharePoint, which have a more involved upgrade process, see the Apps for SharePoint update process article for an in-depth upgrade discussion and for guidance on how to avoid breaking older app packages when deploying new web artifacts.

                Advanced topics

                Specifying multiple publish environments

                One common request we heard is to publish an app to different environments. For example, one might want to publish to a “staging” environment first, ensure that the app works properly, and only then publish to “production”.

                With the new Publish experience, switching between multiple environments is only a dropdown away. Each publish profile remembers its own URL, Client ID, and Secret, so publishing to a different profile is as easy as changing the profile dropdown and choosing the appropriate “Deploy your web project” and “Package the app” buttons.

                Figure 9. Publishing to multiple environments

                Configuring Client Secret (or other environmental variables) in the Azure Portal

                Sometimes, the “Client Secret” for the production app might be a closely-guarded secret. As a developer, you might have the ability to publish to the website, but you might not have access to the Client Secret itself. The same thing might be true for any other such variables.

                One way to solve this scenario is to have your Azure account Admin manage these environmental variables through the Azure Portal. For each Azure Website, it is possible to have the Client Secret – or any other variables – be set via the “app settings” section of the Configure tab. The “app settings” values take precedence over values in Web.config, so you get the best of both worlds – your local F5 scenario continues to work as before, yet your published app can make use of a Client Secret that you might not even have access to.

                Figure 10. Configuring a Client Secret in the Azure portal
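                For completeness, the same app setting can also be pushed from the command line by whoever holds the secret. The following is only a sketch using the classic Azure PowerShell module, with placeholder names; note that passing -AppSettings replaces the whole collection, so include every setting your site needs.

                # Sketch: an admin sets the Client Secret as an Azure Website app setting (classic module).
                # Caution: -AppSettings replaces the existing collection, so pass every setting you need.
                Set-AzureWebsite -Name "contoso-app-web" -AppSettings @{ "ClientSecret" = "<production-secret>" }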

                Deploying outside of Azure (or to a local IIS server)

                If you need to deploy to a non-Azure hosting provider – particularly if you are publishing to an on-premises machine – you can still use the many improvements to the app-publishing experience.

                During profile-creation, choose the Create new profile radio button.

                Then, once you are ready to deploy your web content, enter the Connection credentials in the “Publish Web” wizard.

                The rest of the flow should be the same. Remember to ensure that your hosting server supports the HTTPS protocol.

                Creating a Web Deploy Package

                An alternate, but similar, case for publishing to a local IIS server is when only an IT admin has the ability to publish a website. To simplify deployment, you can provide the IT admin with a Web Deploy Package – a .zip file that contains all web artifacts.

                To do this, create a new profile rather than importing one from Azure. In the case of an app for SharePoint, you will also need to fill in some dummy Client ID and Secret values.

                Now go to Deploy your web project – but be sure to choose Web Deploy Package as the publish method in the “Connection” tab.

                Figure 11. Creating a web deploy package

                Choose a package location (any new folder will do) and proceed with the wizard. At the end, a set of deploy scripts and a .zip file with your web content will be generated.

                Figure 12. Web deploy package files

                Your IT Admin should be able to take things from here (registering the app with SharePoint and providing the Client ID and Secret into the deployment scripts). Once the web content is deployed, ask your admin to provide you with the Client ID (the secret is not needed) and proceed with the “Package the app” step. You can then send the app package – now containing the SharePoint artifacts, and pointing at the live web content – back to the IT admin to deploy to SharePoint.
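                For reference, deploying such a package on the target IIS server usually comes down to editing the generated SetParameters.xml file and running the generated .deploy.cmd script. The file names below are placeholders taken from a hypothetical project called MyAppWeb.

                # Review MyAppWeb.SetParameters.xml first (IIS site name, Client ID, connection strings).
                .\MyAppWeb.deploy.cmd /T    # trial run: reports what would change without deploying
                .\MyAppWeb.deploy.cmd /Y    # performs the actual deployment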


                BCS Solution Packaging Tool

                Resource Page Description

                http://archive.msdn.microsoft.com/odcsps14bcsgnrtrtool/Release/ProjectReleases.aspx?ReleaseId=4352
                The Business Connectivity Services (BCS) Solution Packaging Tool is intended for Power Users and Developers of Microsoft Business Connectivity Services solutions.

                With this tool you can create:
                – Intermediate Declarative Solutions (for Microsoft Outlook 2010)
                – Advanced Code-Based Solutions (utilizing a Solution Manifest, for Microsoft Outlook 2010)
                – Data Solutions (for use with Microsoft Office Professional Plus 2010 Add-Ins)

                This tool includes a help file.

                NEW
                A sample solution has been added to the Downloads page (click “Sample Solution” in the Releases navigation at the right side of the page).

                NEW
                The tool now has an option to validate the artifacts and their connections before packaging. This option is on by default, but can be turned off to speed up the packaging process.

                SharePoint Samurai