Category Archives: API

How To : Manually add common consent to your Office 365 APIs Preview app


 

Learn how to manually add Microsoft Azure Active Directory common consent to your ASP.NET application so that it can access secured services.

Prerelease content
The features and APIs documented in this article are in preview and are subject to change. Do not use them in production.

In this article, you’ll learn how to build a web application hosted on an Azure website that uses the OneDrive for Business API to access secured folders and files.

You can easily set up access to OneDrive for Business by using the Office 365 API Preview Tools for Visual Studio 2013. If you’re not using the tools, you’ll need to manually set up your app in your development environment, register your app with Microsoft Azure Active Directory, write code to handle tokens, and write the code to work with OneDrive for Business resources. All of these steps are described in this article.

Note
This article covers OneDrive for Business apps, but the same steps apply to apps that access any other secured resource.

Before you manually add common consent to your app, make sure that you have the following:

  • An Office 365 account. If you don’t have one, you can sign up for an Office 365 developer site.
  • Visual Studio 2012 or Visual Studio 2013.
    Note
    The Office 365 API Preview Tools for Visual Studio 2013, which simplify development, are available for Visual Studio 2013 only.
  • A test account to use in your application.

We also recommend that you familiarize yourself with the Authorization Code Grant Flow. This will help you understand the authentication process that takes place in the background between your application, Azure AD, and the Office 365 resource so that you can better troubleshoot as you develop.

If you’ve already created an account within your Azure tenancy, you can use that account. Otherwise, you will have to create a new organizational user account to use in this sample.

To create an organizational user account

  1. Go to https://manage.windowsazure.com/.
  2. Choose the Active Directory icon on the left side in the Azure portal.
  3. Choose Add a user.
  4. Fill in the user name.
  5. Move to the next screen.
  6. Create a user profile. To do this:
    1. Enter a first and last name.
    2. Enter a display name.
    3. Set the Role to Global Administrator.
    4. After you set the role you will be asked for an alternate email address. You can enter the email address that you used to create the subscription, or a different one.
  7. Move to the next screen.
  8. Choose Create.
  9. A temporary password is generated. You will use this to sign in later. You will have to change it at that time.
  10. Choose the check mark to finish creating the organizational user account.

The next step is to create the actual app that contains the UI and code needed to work with the OneDrive for Business REST APIs to list the folders and files in the user’s OneDrive.

To create the Visual Studio project

  1. Open Visual Studio 2013 and create a new ASP.NET Web Application project. Name the application Get_Stats. Choose OK.
  2. Choose the MVC template and choose the Change Authentication button. Select the Organizational Accounts option. This displays additional options for authentication.
  3. Choose Cloud – Single Organization.
  4. Specify the domain of your Azure AD tenancy.
  5. Set the Access Level to Single Sign On, Read directory data.

    Under More Options, you will see the App ID URI is set automatically.

  6. Choose OK to continue. This brings up a dialog box to authenticate.
    Note
    If you receive an invalid domain name error, you might need to implement a workaround by substituting a real domain name, such as *.onmicrosoft.com, from another Azure subscription that you have. When you complete the Visual Studio new project dialog box, Visual Studio creates a temporary app registration entry on the domain that you specify. You can delete that entry later.

    As part of the workaround, you need to adjust the web.config settings and manually register the web app in the correct Azure AD domain.

  7. Enter the credentials of the user you created earlier.
  8. Choose OK to finish creating the new project. Visual Studio will automatically register the new web app in the Azure AD tenant you specified.
  9. Run the Visual Studio project, and sign on using the test account you created earlier. After the project is running, you can verify that single sign-on is working because the test account user name is displayed in the upper right corner of the web app.

To register your web app in the Azure Management Portal

  1. Log on with your Azure account.
  2. In the left navigation, choose Active Directory. Your directory will be listed.
  3. Choose your directory.
  4. In the top navigation, choose Applications.
  5. On the Active Directory tab, choose Applications.
  6. Add a new application in your Office 365 domain (created at Office 365 sign up) by choosing the “ADD” icon at the bottom of the portal screen. This will bring up a dialog box to tell Azure about your application.
  7. Choose Add an application my organization is developing.
  8. For the name of the application, enter Get Stats. For the Type, leave Web application and/or Web API. Then choose the arrow to move to step 2.
  9. For the Sign-On URL, enter the localhost URL from your Get_Stats Visual Studio project. To find the URL:
    1. Open your project in Visual Studio.
    2. In Solution Explorer, choose the Get_Stats project.
    3. From the Properties window, copy the SSL URL value.
    4. Enter an App ID URI. Because the ID must be unique, it’s a good idea to choose a name that is similar to the app name. For example, you can use your Sign-on URL with your app name, such as https://localhost:44044/Get_Stats.
    5. Choose the checkmark to finish adding the application. You will be notified that the application was added successfully.

To update the web.config file with the registration values

  1. Copy the APP ID URI to the clipboard.
  2. In your Get_Stats Visual Studio project, open the web.config file.
  3. Locate the ida:Realm key and paste the APP ID URI for the value.
  4. Locate the ida:AudienceUri key and paste the same APP ID URI for the value.
  5. Locate the audienceUris element and paste the same APP ID URI for the add element’s value.
  6. Locate the wsFederation element, and paste the same APP ID URI for the realm.
  7. In the Azure Portal, copy the federation metadata document URL to the clipboard.
  8. In the web.config file, locate the ida:FederationMetadataLocation key, and paste the URL for the value.
  9. In the Azure Portal, choose the View Endpoints icon at the bottom.
  10. Copy the WS-Federation Sign-On Endpoint to the clipboard.
  11. In the web.config file, locate the wsFederation element and paste the endpoint value for the issuer.
  12. Save your changes and run the project. You will be prompted to sign on. Sign on by using the test account you created earlier. You should see your account user name displayed in the upper right corner of the web app.
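
For reference, after these steps the relevant web.config entries look roughly like the following sketch. The values shown are placeholders, and the federation metadata URL and issuer are the typical forms only; copy the actual values from the Azure portal as described in the steps above.

    <appSettings>
      <add key="ida:FederationMetadataLocation" value="https://login.windows.net/yourtenant.onmicrosoft.com/FederationMetadata/2007-06/FederationMetadata.xml" />
      <add key="ida:Realm" value="https://localhost:44044/Get_Stats" />
      <add key="ida:AudienceUri" value="https://localhost:44044/Get_Stats" />
    </appSettings>

    <system.identityModel>
      <identityConfiguration>
        <audienceUris>
          <add value="https://localhost:44044/Get_Stats" />
        </audienceUris>
      </identityConfiguration>
    </system.identityModel>

    <system.identityModel.services>
      <federationConfiguration>
        <wsFederation passiveRedirectEnabled="true"
                      issuer="https://login.windows.net/yourtenant.onmicrosoft.com/wsfed"
                      realm="https://localhost:44044/Get_Stats"
                      requireHttps="true" />
      </federationConfiguration>
    </system.identityModel.services>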

Get an application key


Next, you need to generate a key that you can use to identify your application for access tokens.

To get an application key for your app

  1. In the Azure Portal, select the Get_Stats application in the directory.
  2. Choose the Configure command and then locate the keys section.
  3. In the Select duration drop-down box, choose 1 year.
  4. Choose Save.

    The key value is displayed.

    Note
    This is the only time that the key is displayed.
  5. In Visual Studio, open the Get_Stats project, and open the web.config file.
  6. Locate the ida:Password element, and paste the key value for the value. Now your project will always send the correct password when it is requested.
  7. Save all files.
Configure API permissions


You need to specify which web APIs your web app needs access to, and what level of access it needs. This determines what scopes and permissions are requested on the consent form for your web app that is displayed for users and admins.

To configure API permissions

  1. In the Azure Portal, select the Get_Stats application in the directory.
  2. From the top navigation, choose Configure. This displays all the configuration properties.
  3. At the bottom is a web APIs section. Notice that your web app has already been granted access to Azure AD.
  4. Choose Office365 SharePoint Online API.
  5. Choose Delegated Permissions and select Read items in all site collections.
    Note
    The options activate when you move over them.
  6. Choose Save to save these changes. Your web app will now request these permissions.

    You can also manage permissions by using a manifest. You can download your manifest file by choosing Manage Manifest.

Add the GraphHelper project to your solution


The easiest way to call graph APIs in Azure AD is to use the Graph API Helper Library. The following instructions show how to include the GraphHelper project into your Get_Stats solution.

To configure the Graph API Helper Library

  1. Download the Azure AD Graph API Helper Library.
  2. Copy the C# folder from the Graph API Helper Library to your project folder (for example, \Projects\Get_Stats\C#).
  3. Open the Get_Stats solution in Visual Studio.
  4. In the Solution Explorer, choose the Get_Stats solution and choose Add Existing Project.
  5. Go to the C# folder you copied, and open the WindowsAzure.AD.Graph.2013_04_05 folder.
  6. Select the Microsoft.WindowsAzure.ActiveDirectory.GraphHelper project and choose Open.
  7. If you are prompted with a security warning about adding the project, choose OK to indicate that you trust the project.
  8. Choose the Get_Stats project References folder and then choose Add Reference.
  9. In the Reference Manager dialog box, select Extensions and then select the Microsoft.Data.OData version 5.6.0.0 assembly and the Microsoft.Data.Services.Client version 5.6.0.0 assembly.
  10. In the same Reference Manager dialog box, expand the Solution menu on the left, and then select the checkbox for the Microsoft.WindowsAzure.ActiveDirectory.GraphHelper.
  11. Choose OK to add the references to your project.
  12. Add the following using directives to the top of HomeController.cs.
    using Microsoft.WindowsAzure.ActiveDirectory;
    using Microsoft.WindowsAzure.ActiveDirectory.GraphHelper;
    using System.Data.Services.Client;
    
    
  13. Save all files.

Add code to manage tokens and requests


Because your web app accesses multiple workloads, you need to write some code to obtain tokens. It’s best to place this code in helper methods that can be called when needed. Note that the Office 365 API Preview tools handle all of this coding for you.

Your custom code handles the following scenarios:

  • Obtaining an authentication code
  • Using the authentication code to obtain an access token and a multiple resource refresh token
  • Using the multiple resource refresh token to obtain a new access token for a new workload

To create code to manage tokens and requests

  1. Open your Visual Studio project for Get_Stats.
  2. Open the HomeController.cs file.
  3. Create a new method named Stats by using the following code.
    public ActionResult Stats()
    {
        var authorizationEndpoint = "https://login.windows.net/"; // The OAuth2 endpoint.
        var resource = "https://graph.windows.net"; // Request access to the AD Graph resource.
        var redirectURI = "https://localhost:44307/Home/CatchCode"; // The URL where the authorization code is sent on redirect. Use your project's CatchCode URL and port.

        // Create a request for an authorization code.
        string authorizationUrl = string.Format("{0}{1}/oauth2/authorize?&response_type=code&client_id={2}&resource={3}&redirect_uri={4}",
               authorizationEndpoint,
               ClaimsPrincipal.Current.FindFirst(TenantIdClaimType).Value,
               AppPrincipalId,
               resource,
               redirectURI);

        // Redirect the browser to the authorization endpoint.
        return Redirect(authorizationUrl);
    }
    
  4. The Stats method constructs a request for an authorization code and sends the request to the OAuth2 endpoint. If successful, the redirect returns to the specified CatchCode URL. Next, create a method to handle the redirect to CatchCode.
    public ActionResult CatchCode(string code)
    {}
    
    
  5. Acquire the access token by using the app credentials and the authorization code. Use your project’s correct port number in the following code.
    //  Replace the following port with the correct port number from your own project.
    var appRedirect = "https://localhost:44307/Home/CatchCode";

    //  Create an authentication context.
    AuthenticationContext ac = new AuthenticationContext(string.Format("https://login.windows.net/{0}",
        ClaimsPrincipal.Current.FindFirst(TenantIdClaimType).Value));

    //  Create a client credential based on the application ID and secret.
    ClientCredential clcred = new ClientCredential(AppPrincipalId, AppKey);

    //  Use the authorization code to acquire an access token.
    var arAD = ac.AcquireTokenByAuthorizationCode(code, new Uri(appRedirect), clcred);
    
    
  6. Next, use the access token to call the Graph API and get the list of users for the Office 365 tenant by pasting the following code into the CatchCode method.
    //  Convert token to the ADToken so you can use it in the graphhelper project.
    
        AADJWTToken token = new AADJWTToken();
        token.AccessToken = arAD.AccessToken; 
    
    //  Initialize a graphService instance by using the token acquired in the previous step.
    //  Replace the GUID below with your own Azure AD tenant ID.

        Microsoft.WindowsAzure.ActiveDirectory.DirectoryDataService graphService = new DirectoryDataService("09f9ea02-9be8-4597-86b9-32935a17723e", token);
        graphService.BaseUri = new Uri("https://graph.windows.net/09f9ea02-9be8-4597-86b9-32935a17723e");
    
    //  Get the list of all users.
    
        var users = graphService.users;
        QueryOperationResponse<Microsoft.WindowsAzure.ActiveDirectory.User> response;
        response = users.Execute() as QueryOperationResponse<Microsoft.WindowsAzure.ActiveDirectory.User>;
        List<Microsoft.WindowsAzure.ActiveDirectory.User> userList = response.ToList();
        ViewBag.userList = userList; 
    
    
  7. Now you need to call Microsoft OneDrive for Business, and this requires a new access token. Verify that the current token is a multiple resource refresh token, and then use it to obtain a new token by pasting the following code.
    //  You need a new access token for the new workload. Check whether you have a multiple resource refresh token (MRRT).

        AuthenticationResult arSP = null;
        if (arAD.IsMultipleResourceRefreshToken)
        {
            // This is an MRRT, so use it to request an access token for SharePoint.
            // Replace the resource URL with your own SharePoint Online resource.
            arSP = ac.AcquireTokenByRefreshToken(arAD.RefreshToken, AppPrincipalId, clcred, "https://imgeeky.spo.com/");
        }
    
    
  8. Finally, call Microsoft OneDrive for Business to get a list of files in the Shared with Everyone folder. Paste the following code and replace the placeholders with correct values for your domain and user name.
    //  Now make a call to get a list of all files in a folder. 
    //  Replace placeholders in the following string with correct values for your domain and user name. 
        var skyGetAllFilesCommand = "https://YourO365Domain-my.spo.com/personal/YourUserName_YourO365domain_spo_com/_api/web/GetFolderByServerRelativeUrl('/personal/YourUserName_YourO365domain_spo_com/Documents/Shared%20with%20Everyone')/Files";

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(skyGetAllFilesCommand);
        request.Method = "GET";

    //  Send the SharePoint access token acquired in the previous step with the request.
        request.Headers.Add("Authorization", "Bearer " + arSP.AccessToken);

        WebResponse wr = request.GetResponse();

    //  Read the XML response and pass it to the view.
        using (var reader = new System.IO.StreamReader(wr.GetResponseStream()))
        {
            ViewBag.skyResponse = reader.ReadToEnd();
        }

        return View(); 
    
    
  9. Create a view for the CatchCode method. In Solution Explorer, expand the Views folder and choose Home, and then choose Add View.
  10. Enter CatchCode as the name of the new view, and choose Add.
  11. Paste the following HTML to render the users and Microsoft OneDrive for Business response from the CatchCode method.
    @{
        ViewBag.Title = "CatchCode";
    }
    <h2>Users</h2>
    <ul id="users">
    
        @foreach (var user in ViewBag.userList)
        {
            <li>@user.displayName</li>
        }
    </ul>
    <h2>OneDrive for Business Response</h2>
    <p>@ViewBag.skyResponse</p>
    
    
  12. Build and run the solution. Verify that you get a list of users, and that you get an XML response from the OneDrive for Business method call. To change the XML response, add files to the OneDrive for Business Shared with Everyone folder.

How To : Use Git Tools for TFS Integration

Git – TFS Integration – Why it matters


For many small development shops, the idea of using TFS and a centralized source control repository is anathema. The mere thought of being restricted by a software configuration manager on how and when to branch or merge cuts against everything they cherish in software development.

 

Git is their natural and chosen ground for managing source code. The freedom and flexibility of using Git enables them to work where they are. This is especially true if they are working as part of a distributed team on modular projects.

Microsoft addressed many of the existing concerns with TFS source control with the advent of TFS 2012 and local workspaces. However, even though local workspaces enable great flexibility in offline work, they are still ultimately tied to a central repository and the policies and restrictions imposed on it.

 

Enter Git support in TFS. Git support currently comes in two forms: standalone Git support in Visual Studio, and Git support with TFS.

Git support with Visual Studio is completely straightforward. Simply change the source control plug-in selection to the Microsoft Git Provider, and all the power and flexibility of Git is available to the Visual Studio developer, including private branches and online collaboration with Git hosts such as GitHub and Bitbucket.


Configuring Git for Visual Studio Source Control

However, from an ALM perspective, the real power and the compelling feature of Microsoft’s integration with Git is the ability to work with TFS.

 

Developers still get all the advantages and flexibility of Git, but can also take advantage of the ALM features of TFS such as work item tracking, team tools and integrated build. The Git – TFS integration gets us much closer to the ultimate goal of true cross-platform support in a single ALM toolset.

 

The TFS – Git integration can be utilized in a couple of ways. The first option is the ability to essentially synchronize a Git repository with TFS source control with the Git-TF utility. This utility makes it easy to clone sources from TFS, fetch updates from TFS and push changes back to TFS.

 

What’s more, it fully supports TFS shelvesets and work item integration, which presents some exciting possibilities. The features and functionality Git-TF provides makes it a compelling solution and a credible compromise between centrally managed teams with source control and distributed teams with distributed source control.

The second option, available now only through Microsoft’s hosted TFS Service, is the ability for organizations to create TFS Team Projects with Git hosted source control (this ability is reportedly planned for on-premises TFS support in the next release). This is a fairly exciting development.

 

Having the choice between native TFS version control and Git when creating a team project opens many doors that hitherto were locked shut.


Xcode IDE connected to a TFS-hosted repository

Eclipse, Xcode, Visual Studio and any other IDE that supports Git can now be used to leverage the powerful ALM features TFS provides.

As an ALM consultant, that’s the part that excites me the most. Hosting all development efforts in a single environment that supports all the various technologies in play, and being able to track and manage those efforts with agility and transparency, is a huge benefit to any organization that delivers solutions on multiple platforms.

 

Even those that don’t will now have the option to at least evaluate the feasibility of using TFS in development environments not typically associated with a Microsoft project.

The mythical promised land of cross-platform ALM may have just become a lot less mythical.

 

Microsoft’s Tool for Git and TFS Integration

 http://gittf.codeplex.com


Working with Teams

The Git-TF tool is most easily used by a single developer or multiple developers working independently with their own isolated Git repos. That is, each developer uses Git-TF to clone a local repo where they can then use Git to manage their local development that will eventually be checked in to TFS. In this “hub and spoke” configuration, all code is shared through TFS at the “hub” and each developer using Git becomes a “spoke”. Developers looking to collaborate using Git’s distributed sharing capabilities will want to work in a specific configuration described below.

Most often, developers collaborating with Git have cloned from a common repo. When it comes time to share divergent changes, conflict resolution is easy because each repository shares the same common base version. Many times, conflicts are automatically resolved. One of the keys to this merging of histories is that each commit is assigned a unique identifier that is generated by the contents of the commit. When working with Git-TF, two repositories cloned from the same TFS path will not have the same commit IDs unless the clones were done at the same point in TFS history, and with the same depth. In the event that two Git repos that were independently cloned using Git-TF share changes directly, the result will be a baseless merge of the repositories and a large number of conflicts. For this reason, it is not recommended that teams using Git-TF ever share changes directly through Git (i.e. using git push and git pull).

Instead, it is recommended that a team working with Git-TF and collaborating with Git do so by designating a single repo as the point of contact with TFS. This configuration may look as follows for a team of three developers:

          [TFS]      [Shared Git repo]
            |         ^ (2)  |       \
            |        /       |        \
            |       /        |         \
            V (1)  /         V (3)      V (4)
       [Alice's Repo]   [Bob's Repo]   [Charlie's Repo]
 

In the configuration above the actions would be as follows:

  1. Using the git tf clone command, Alice clones a path from TFS into a local Git repo.
  2. Next, Alice uses git push to push the commit created in her local Git repo into the team’s shared Git repo.
  3. Bob can then use git clone to clone down the changes that Alice pushed.
  4. Charlie can also use git clone to clone down the changes that Alice pushed.

Both Bob and Charlie only ever interact with the team’s shared Git repo using git push and git pull. They can also interact directly with one another’s repos (or with Alice’s), but should never use Git-TF commands to interact with TFVC.

When working with the team, Alice will typically develop locally and use git push and git pull to share changes with the team. When the team decides they have changes to share with TFS, Alice will use git tf checkin to share those changes (typically git tf checkin --shallow will be used). Likewise, if there are changes that the team needs from TFVC, Alice will perform a git tf pull, using the --merge or --rebase options as appropriate, and then use git push to share the changes with the team.
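
As a sketch of that round trip, the commands Alice runs might look like the following. The collection URL, server path, and the shared remote name "team" are placeholders; substitute your own values.

    # One-time setup: clone the TFVC folder into a local Git repo (Alice only).
    git tf clone http://tfsserver:8080/tfs/DefaultCollection $/MyProject/Main

    # Share local commits with the rest of the team through the shared Git repo.
    git push team master

    # Check the team's accumulated changes in to TFVC as a single changeset.
    git tf checkin --shallow

    # Bring new TFVC changesets down, merge them locally, then publish them to the team.
    git tf pull --merge
    git push team master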

Note that (until Issue 77 is addressed) all changes coming into the TFVC repository will come in as if from Alice’s TFS identity. This is fine if only Alice has an identity on that TFVC project, but it may well not be what you want if Bob and Charlie also have valid identities in that TFS project.

Rebase vs. Merge

Once changes have been fetched from TFS using git tf pull (or git tf fetch), those changes must either be merged with the HEAD or have any changes since the last fetch rebased on top of FETCH_HEAD. Git-TF allows developers to work in either manner, though if the repo that is sharing changes with TFS has shared any commits with other Git users, then this rebase may result in significant conflicts (see The Perils of Rebasing). For this reason, it is recommended that any team working in the aforementioned team configuration use git tf pull with the default --merge option (or use git merge FETCH_HEAD to incorporate changes made in TFS after fetching manually).

Recommended Git Settings

When using the Git-TF tools, there are a few recommended settings that should make it easier to work with other developers that are using TFS.

Line Endings

core.autocrlf = false

Git has a feature to allow line endings to be normalized for a repository, and it provides options for how those line endings should be set when files are checked out. TFS does not have any feature to normalize line endings – it stores exactly what is checked in by the user. When using Git-TF, choosing to normalize line endings to Unix-style line endings (LF) will likely result in TFS users (especially those using VS) changing the line endings back to Windows-style line endings (CRLF). As a result, it is recommended to set the core.autocrlf option to false, which will keep line endings unchanged in the Git repo.

Ignore case

core.ignorecase = true

TFS does not allow multiple files that differ only in case to exist in the same folder at the same time. Git users working on non-Windows machines could commit files to their repo that differ only in case, and attempting to check in those changes to TFS will result in an error. To avoid these types of errors, the core.ignorecase option should be set to true.
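
Both settings can be applied per repository (or globally by adding --global), for example:

    git config core.autocrlf false
    git config core.ignorecase true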

How To : Create, Edit and Maintaining a Coded UI Test for Silverlight Application

Using the Microsoft Visual Studio 2013 Coded UI Test plugin for Silverlight, you can create Coded UI Tests or action recordings for Silverlight 5.0 applications.

Using Microsoft Visual Studio 2010 Feature Pack 2, you can create coded UI tests or action recordings for Silverlight 4 applications. Action recordings let you fast forward through steps in a manual test. For more information about action recordings or coded UI tests, see How to: Create an Action Recording or How to: Create a Coded UI Test.

In this walkthrough, you will learn the procedures that are required to test a Silverlight control in a Silverlight-based application. The walkthrough takes you through the procedures that follow.

Prerequisites

 

For this walkthrough you will need:

To prepare the walkthrough

  1. Verify that you have the Silverlight 4 developer runtime available at Silverlight Developer 4 for Developers.

  2. Verify that you have completed the procedures in Walkthrough: Creating a RIA Services Solution.

    The result will be a simple Silverlight application that uses a Silverlight grid control. Later, you will use the grid control in this walkthrough and perform coded UI tests on it.

  3. For more information about supported and unsupported Silverlight controls, see How to: Set Up Your Silverlight Application for Testing.

  4. With the RIAServicesExample solution that you created in Walkthrough: Creating a RIA Services Solution running, copy the address of the Web application to the clipboard or a Notepad file. For example, the address might resemble this: http://localhost:<port number>/RIAServicesExampleTestPage.aspx.

Add the SilverlightUIAutomationHelper.dll to Your Silverlight 4 Project

 

To test your Silverlight applications, you must add Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper.dll as a reference to your Silverlight 4 application so that the Silverlight controls can be identified. This helper assembly instruments your Silverlight application so that information about a control is available to the Silverlight plugin API that you use in your coded UI test or action recording. This assembly cannot be redistributed. Therefore, you must add this reference conditionally when you want to build the application. By taking this approach, the assembly is not redistributed when you deploy your software to a customer.

To add the SilverlightUIAutomationHelper.dll

  1. For each Silverlight project in your solution that you want to test, you must add the SilverlightUIAutomationHelper.dll. In Solution Explorer, right-click the RIAServicesExample project and select Unload Project.

    The project is displayed in Solution Explorer as RIAServicesExample (unavailable).

  2. Right-click the project again and then click Edit RIAServicesExample.csproj.

    The RIAServicesExample.csproj file is opened in the Code Editor. You will see <PropertyGroup> nodes followed by <ItemGroup> nodes. You must make the following two modifications:

    1. To set the production condition, add the following entry to the first <PropertyGroup> node:

       
      <Production Condition="'$(Production)'==''">False</Production>
      
    2. To add the DLL when the build is not a production build, insert the following <Choose> node after the <PropertyGroup> nodes, but before the <ItemGroup> nodes:

       
      <Choose>
        <When Condition=" '$(Production)'=='False' ">
          <ItemGroup>
            <Reference Include="Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper">
            </Reference>
          </ItemGroup>
        </When>
      </Choose>
      
  3. To save the file, click Save.

  4. To reload these changes, right-click the project and then click Reload Project.

    Caution

    If you have multiple Silverlight projects that you want to test, you must follow these steps for each project.

    Important

    To remove the SilverlightUIAutomationHelper.dll so that it is not redistributed with your production code, set the Production condition value to True in the first <PropertyGroup> node. In this manner, the DLL is no longer added as a reference by the Choose node that you added to the project in the previous procedure. You can also set an environment variable named Production to the value True and then use msbuild to build the Silverlight project without the SilverlightUIAutomationHelper.dll reference.
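
    For example, from a Visual Studio command prompt a production build could be produced like this (a sketch; adjust the project file name to your own):

      :: Either set the environment variable that the Production condition reads...
      set Production=True
      msbuild RIAServicesExample.csproj

      :: ...or pass the property directly on the command line.
      msbuild RIAServicesExample.csproj /p:Production=True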

Create a Coded UI Test for RIAServicesExample Silverlight Application

 

To Create a Coded UI Test

  1. In Solution Explorer, right-click the solution, click Add and then select New Project.

    The Add New Project dialog box appears.

  2. In the Installed Templates pane, expand either Visual C# or Visual Basic, and then select Test.

  3. In the middle pane, select the Test Project template.

  4. Click OK.

    In Solution Explorer, the new test project named TestProject1 is added to your solution. Either the UnitTest1.cs or UnitTest1.vb file appears in the Code Editor. You can close the UnitTest1 file because it is not used in this walkthrough.

  5. In Solution Explorer, right-click TestProject1, click Add and then select Coded UI test.

    The Generate Code for Coded UI Test dialog box appears.

  6. Select the Record actions, edit UI map or add assertions option and then click OK.

    The UIMap – Coded UI Test Builder appears.

    For more information about the options in the dialog box, see How to: Create a Coded UI Test.

  7. Click Start Recording on the UIMap – Coded UI Test Builder. In several seconds, the Coded UI Test Builder will be ready.

    Start recording UI

  8. Launch Internet Explorer.

  9. In Internet Explorer’s address bar, enter the address of the Web application that you copied in a previous procedure. For example:

    http://localhost:<port number>/RIAServicesExampleTestPage.aspx

  10. Click one or two of the column headers to sort the data.

  11. Close Internet Explorer.

  12. On the UIMap – Coded UI Test Builder, click Generate Code.

  13. In the Method Name box, type SimpleSilverlightAppTest and then click Add and Generate. In several seconds, the coded UI test appears and is added to the solution.

  14. Close the UIMap – Coded UI Test Builder.

    The CodedUITest1.cs file appears in the Code Editor.

    Note

    You can assign a unique automation property based on the type of Silverlight control in your application. For more information, see Set a Unique Automation Property for Silverlight Controls for Testing.
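
For reference, the generated CodedUITest1.cs contains little more than a test method that replays the recorded actions. A trimmed sketch of what it typically looks like follows; the recording method name matches whatever you entered in the builder.

    using Microsoft.VisualStudio.TestTools.UITesting;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [CodedUITest]
    public class CodedUITest1
    {
        [TestMethod]
        public void CodedUITestMethod1()
        {
            // Replays the actions captured in the SimpleSilverlightAppTest recording.
            this.UIMap.SimpleSilverlightAppTest();
        }

        // Lazily creates the UIMap that holds the recorded actions and any assertions.
        public UIMap UIMap
        {
            get
            {
                if (this.map == null)
                {
                    this.map = new UIMap();
                }
                return this.map;
            }
        }

        private UIMap map;
    }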

Run the Coded UI Test on the RIAServicesExample Silverlight Application

 

To run the coded UI test

  • On the Test menu, select Windows and then click Test View. In Test View, select CodedUITestMethod1 under the Test Name column and then click Run Selection in the toolbar.

    The coded UI test should successfully run using the Silverlight data grid control.

How To : Develop and Deploy Azure Applications Deep Dive (Part 1)


 


There are many reasons to deploy an application or services onto Azure, the Microsoft cloud services platform. These include reducing operation and hardware costs by paying for just what you use, building applications that are able to scale almost infinitely, enormous storage capacity, geo-location … the list goes on and on.

Yet a platform is intellectually interesting only when developers can actually use it. Developers are the heart and soul of any platform release—the very definition of a successful release is the large number of developers deploying applications and services on it. Microsoft has always focused on providing the best development experience for a range of platforms—whether established or emerging—with Visual Studio, and that continues for cloud computing. Microsoft added direct support for building Azure applications to Visual Studio 2010 and Visual Web Developer 2010 Express.

This article will walk you through using Visual Studio 2010 for the entirety of the Azure application development lifecycle. Note that even if you aren’t a Visual Studio user today, you can still evaluate Azure development for free, using the Azure support in Visual Web Developer 2010 Express.

Creating a Cloud Service

Start Visual Studio 2010, click on the File menu and choose New | Project to bring up the New Project dialog. Under Installed Templates | Visual C# (or Visual Basic), select the Cloud node. This displays an Enable Azure Tools project template that, when clicked, will show you a page with a button to install the Azure Tools for Visual Studio.

Before installing the Azure Tools, be sure to install IIS on your machine. IIS is used by the local development simulation of the cloud. The easiest way to install IIS is by using the Web Platform Installer available at microsoft.com/web. Select the Platform tab and click to include the recommended products in the Web server.

Download and install the Azure Tools and restart Visual Studio. As you’ll see, the Enable Azure Tools project template has been replaced by an Azure Cloud Service project template. Select this template to bring up the New Cloud Service Project dialog shown in Figure 1. This dialog enables you to add roles to a cloud service.

image: Adding Roles to a New Cloud Service Project

Figure 1 Adding Roles to a New Cloud Service Project

An Azure role is an individually scalable component running in the cloud where each instance of a role corresponds to a virtual machine (VM) instance.

There are two types of role:

  • A Web role is a Web application running on IIS. It is accessible via an HTTP or HTTPS endpoint.
  • A Worker role is a background processing application that runs arbitrary .NET code. It also has the ability to expose Internet-facing and internal endpoints.

As a practical example, I can have a Web role in my cloud service that implements a Web site my users can reach via a URL such as http://[somename].cloudapp.net. I can also have a Worker role that processes a set of data used by that Web role.

I can set the number of instances of each role independently, such as three Web role instances and two Worker role instances, and this corresponds to having three VMs in the cloud running my Web role and two VMs in the cloud running my Worker role.

You can use the New Cloud Service Project dialog to create a cloud service with any number of Web and Worker roles, choosing a different template for each role. For example, you can create a Web role using the ASP.NET Web Role template, the WCF Service Role template, or the ASP.NET MVC Role template.

After adding roles to the cloud service and clicking OK, Visual Studio will create a solution that includes the cloud service project and a project corresponding to each role you added. Figure 2 shows an example cloud service that contains two Web roles and a Worker role.

image: Projects Created for Roles in the Cloud Service

Figure 2 Projects Created for Roles in the Cloud Service

The Web roles are ASP.NET Web application projects with only a couple of differences. WebRole1 contains references to the following assemblies that are not referenced with a standard ASP.NET Web application:

  • Microsoft.WindowsAzure.Diagnostics (diagnostics and logging APIs)
  • Microsoft.WindowsAzure.ServiceRuntime (environment and runtime APIs)
  • Microsoft.WindowsAzure.StorageClient (.NET API to access the Azure storage services for blobs, tables and queues)

The file WebRole.cs contains code to set up logging and diagnostics and a trace listener in the web.config/app.config that allows you to use the standard .NET logging API.
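
For example, the WebRole.cs generated by the template looks roughly like the following sketch (the exact generated code may differ slightly from release to release):

using System.Linq;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start the diagnostic monitor using the connection string defined in the service configuration.
        DiagnosticMonitor.Start("DiagnosticsConnectionString");

        // React to configuration changes while the role is running.
        RoleEnvironment.Changing += RoleEnvironmentChanging;

        return base.OnStart();
    }

    private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
    {
        // If a configuration setting is changing, request that this role instance be recycled.
        if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
        {
            e.Cancel = true;
        }
    }
}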

The cloud service project acts as a deployment project, listing which roles are included in the cloud service, along with the definition and configuration files. It provides Azure-specific run, debug and publish functionality.

It is easy to add or remove roles in the cloud service after project creation has completed. To add other roles to this cloud service, right-click on the Roles node in the cloud service and select Add | New Web Role Project or Add | New Worker Role Project. Selecting either of these options brings up the Add New Role dialog where you can choose which project template to use when adding the role.

You can add any ASP.NET Web Role project to the solution by right-clicking on the Roles node, selecting Add | Web Role Project in the solution, and selecting the project to associate as a Web role.

To delete, simply select the role to delete and hit the Delete key. The project can then be removed.

You can also right-click on the roles under the Roles node and select Properties to bring up a Configuration tab for that role (see Figure 3). This Configuration tab makes it easy to add or modify the values in both the ServiceConfiguration.cscfg and ServiceDefinition.csdef files.

Figure 3 Configuring a Role

Figure 3 Configuring a Role

When developing for Azure, the cloud service project in your solution must be the StartUp project for debugging to work correctly. A project is the StartUp project when it is shown in bold in the Solution Explorer. To set the active project, right-click on the project and select Set as StartUp project.

Data in the Cloud

Now that you have your solution set up for Azure, you can leverage your ASP.NET skills to develop your application.

As you are coding, you’ll want to consider the Azure model for making your application scalable. To handle additional traffic to your application, you increase the number of instances for each role. This means requests will be load-balanced across your roles, and that will affect how you design and implement your application.

In particular, it dictates how you access and store your data. Many familiar data storage and retrieval methods are not scalable, and therefore are not cloud-friendly. For example, storing data on the local file system shouldn’t be used in the cloud because it doesn’t scale.

To take advantage of the scaling nature of the cloud, you need to be aware of the new storage services. Azure Storage provides scalable blob, queue, and table storage services, and Microsoft SQL Azure provides a cloud-based relational database service built on SQL Server technologies. Blobs are used for storage of named files along with metadata. The queue service provides reliable storage and delivery of messages. The table service gives you structured storage, where a table is a set of entities that each contain a set of properties.

To help developers use these services, the Azure SDK ships with a Development Storage service that simulates the way these storage services run in the cloud. That is, developers can write their applications targeting the Development Storage services using the same APIs that target the cloud storage services.
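
As a rough illustration (not taken from the Thumbnails sample itself), the same StorageClient code works against Development Storage locally and against the cloud storage services when deployed; only the account you construct changes. The container, queue, and file names below are placeholders.

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class StorageDemo
{
    public static void Run()
    {
        // Use the local Development Storage simulation; against the real cloud you would
        // construct the account from a connection string in ServiceConfiguration.cscfg instead.
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;

        // Blob storage: create a container and upload a named blob.
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference("photos");
        container.CreateIfNotExist();
        container.GetBlobReference("sample.jpg").UploadFile(@"C:\temp\sample.jpg");

        // Queue storage: create a queue and enqueue a work item for the Worker role.
        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference("thumbnailmaker");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage("sample.jpg"));
    }
}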

Debugging

To demonstrate running and debugging on Azure locally, let’s use one of the samples from code.msdn.microsoft.com/windowsazuresamples. This MSDN Code Gallery page contains a number of code samples to help you get started with building scalable Web application and services that run on Azure. Download the samples for Visual Studio 2010, then extract all the files to an accessible location like your Documents folder.

The Development Fabric requires running in elevated mode, so start Visual Studio 2010 as an administrator. Then, navigate to where you extracted the samples and open the Thumbnails solution, a sample service that demonstrates the use of a Web role and a Worker role, as well as the use of the StorageClient library to interact with both the Queue and Blob services.

When you open the solution, you’ll notice three different projects. Thumbnails is the cloud service that associates two roles, Thumbnails_WebRole and Thumbnails_WorkerRole. Thumbnails_WebRole is the Web role project that provides a front-end application to the user to upload photos and adds a work item to the queue. Thumbnails_WorkerRole is the Worker role project that fetches the work item from the queue and creates thumbnails in the designated directory.

Add a breakpoint to the submitButton_Click method in the Default.aspx.cs file. This breakpoint will get hit when the user selects an image and clicks Submit on the page.

 
protected void submitButton_Click(
  object sender, EventArgs e) {
  if (upload.HasFile) {
    var name = string.Format("{0:10}", DateTime.Now.Ticks, 
      Guid.NewGuid());
    GetPhotoGalleryContainer().GetBlockBlob(name).UploadFromStream(upload.FileContent);

Now add a breakpoint in the Run method of the worker file, WorkerRole.cs, right after the code that tries to retrieve a message from the queue and checks if one actually exists. This breakpoint will get hit when the Web role puts a message in the queue that is retrieved by the worker.

 
while (true) {
  try {
    CloudQueueMessage msg = queue.GetMessage();
    if (msg != null) {
      string path = msg.AsString

To debug the application, go to the Debug menu and select Start Debugging. Visual Studio will build your project, start the Development Fabric, initialize the Development Storage (if run for the first time), package the deployment, attach to all role instances, and then launch the browser pointing to the Web role (see Figure 4).

image: Running the Thumbnails Sample

Figure 4 Running the Thumbnails Sample

At this point, you’ll see that the browser points to your Web role and that the notifications area of the taskbar shows the Development Fabric has started. The Development Fabric is a simulation environment that runs role instances on your machine in much the way they run in the real cloud.

Right-click on the Azure notification icon in the taskbar and click on Show Development Fabric UI. This will launch the Development Fabric application itself, which allows you to perform various operations on your deployments, such as viewing logs and restarting and deleting deployments (see Figure 5). Notice that the Development Fabric contains a new deployment that hosts one Web role instance and one Worker role instance.

image: The Development Fabric

Figure 5 The Development Fabric

Look at the processes that Visual Studio attached to (Debug/Windows/Processes); you’ll notice there are three: WaWebHost.exe, WaWorkerHost.exe and iexplore.exe.

WaWebHost (Azure Web instance Host) and WaWorkerHost (Azure Worker instance Host) host your Web role and Worker role instances, respectively. In the cloud, each instance is hosted in its own VM, whereas on the local development simulation each role instance is hosted in a separate process and Visual Studio attaches to all of them.

By default, Visual Studio attaches using the managed debugger. If you want to use another one, like the native debugger, pick it from the Properties of the corresponding role project. For Web role projects, the debugger option is located under the project properties Web tab. For Worker role projects, the option is under the project properties Debug tab.

By default, Visual Studio uses the script engine to attach to Internet Explorer. To debug Silverlight applications, you need to enable the Silverlight debugger from the Web role project Properties.

Browse to an image you’d like to upload and click Submit. Visual Studio stops at the breakpoint you set inside the submitButton_Click method, giving you all of the debugging features you’d expect from Visual Studio. Hit F5 to continue; the submitButton_Click method generates a unique name for the file, uploads the image stream to Blob storage, and adds a message on the queue that contains the file name.

Now you will see Visual Studio pause at the breakpoint set in the Worker role, which means the worker received a message from the queue and it is ready to process the image. Again, you have all of the debugging features you would expect.

Hit F5 to continue, the worker will get the file name from the message queue, retrieve the image stream from the Blob service, create a thumbnail image, and upload the new thumbnail image to the Blob service’s thumbnails directory, which will be shown by the Web role.

In the next installment, we will look at the deployment process.

New Office 365 API VS.Net Add-In exposes JavaScript Client model

You can now access the Office 365 APIs using libraries available for .NET and JavaScript. These libraries make it easier to interact with the REST APIs from the device or platform of your choice.

 


The libraries are included in the latest update for Office 365 API Tools for Visual Studio Preview. Along with the libraries, this release also brings you some key updates to the tooling experience, making it easier to interact with Office 365 services.

Client libraries

Office 365 provides REST-based APIs that enable developers to access Office resources such as calendar, contacts, mail, files, and more.

The client libraries will let you:

  • Perform authentication and discovery
  • Use the Mail, Calendar and Contacts API
  • Use the My Files and Sites API (currently .NET only, with JavaScript coming soon)
  • Use the Users and Groups API

 

You can program directly against the REST APIs to interact with Office 365, but doing so requires you to write and maintain code for managing authentication tokens, constructing the right URLs and queries for the API you want to access, and performing other tasks.

By using client libraries to access the Office 365 APIs, you can reduce the complexity of the code you need to write to access the APIs. We’re providing these libraries for .NET as well as JavaScript developers for use with the just-announced multi-device hybrid applications.

Here are some examples of how easy it is access the Office 365 APIs using these libraries.

.NET C# code to authenticate and get upcoming events from your Office 365 calendar:

// Shows UI to authenticate
Authenticator authenticator = new Authenticator();
AuthenticationInfo result = await authenticator.AuthenticateAsync("https://outlook.office365.com");

The AuthenticateAsync method will prompt for a username and password and authenticate against the specified resource url, like outlook.office365.com in this case. Once you have the authentication information, you can create a client object that serves as the base for accessing all the APIs for Exchange:


// Create a client object
ExchangeClient client =
    new ExchangeClient(new Uri("https://outlook.office365.com/ews/odata"),
        result.GetAccessToken);

Because we’re using .NET here, we get to take advantage of the native language capabilities, like LINQ, so querying the Office 365 calendar is as simple as writing a LINQ query and executing it:

// Obtain calendar event data
var eventsResults = await (from i in client.Me.Events
                           where i.End >= DateTimeOffset.UtcNow
                           select i).Take(10).ExecuteAsync();

With just those four lines of code you can start making calls to the Office 365 APIs!

We wanted to make sure that you can reach multiple device and service platforms with a consistent API, so the client libraries are portable .NET libraries, which means they also work with Android and iOS devices through Xamarin. Because authentication needs to display a UI that is different on the various platforms, we also provide platform-specific authentication libraries, which can then be used with the portable ones to provide an end-to-end experience.

For developers creating multi-device hybrid applications that target multiple device platforms through JavaScript, we also have JavaScript versions of these libraries that provide a similar experience while adopting JavaScript’s patterns and practices, such as using the promises pattern instead of await.

 

Here is the same example to authenticate and get calendar events in JavaScript:

var authContext = new O365Auth.Context();
authContext.getIdToken('https://outlook.office365.com/')
    .then((function (token) {
        // authentication succeeded
        var client = new Exchange.Client('https://outlook.office365.com/ews/odata',
            token.getAccessTokenFn('https://outlook.office365.com'));
        client.me.calendar.events.getEvents().fetch()
            .then(function (events) {
                // get the current page of calendar events
                var myevents = events.currentPage;
            }, function (reason) {
                // handle error
            });
    }).bind(this), function (reason) {
        // authentication failed
    });

The flow to authenticate and create a client object is similar across .NET and JavaScript, but you’re doing it in a way that should be natural to the language.

Along with the JavaScript files for these libraries, we are also including the TypeScript type definition (.d.ts)—in case you choose to develop your apps in TypeScript.

As you get started using these libraries, there are a few things to keep in mind. This is a very early preview release of the libraries that is meant to prove out the concept and get feedback on it. The libraries do not currently cover all the APIs provided by the services and some of the APIs in the library may not work. The APIs in the libraries themselves will definitely change in future updates.

Note that while we tend to call these “client” libraries, they also work with .NET server technologies like ASP.NET Web Forms and MVC, so you really get to target the breadth of the .NET platform.

 

Tooling updates

With today’s update of our Office 365 API Tools for Visual Studio 2013, the tool displays the available Office 365 services that you can add to your project. Once you’ve signed in with your Office 365 credentials, adding a service to your project is as easy as selecting the appropriate service and applying the required permissions.


Once you submit the changes, Visual Studio performs the following:

  1. Registers an application (if there isn’t an application registered yet) in Microsoft Azure Active Directory to consume Office 365 services.
  2. Adds the following to the project:
    1. Client libraries from NuGet for the configured services.
    2. Sample code files that use the Client Libraries.

Project types supported

With the broad reach of the client libraries, the Office 365 API tool is now available for a variety of project types (client, desktop, and web) in Visual Studio. Here are all the project types supported with the May update:

  • .NET Windows Store Apps
  • Windows Forms Application
  • WPF Application
  • ASP.NET MVC Web Application
  • ASP.NET Web Forms Application
  • Xamarin Android and iOS Applications
  • Multi-device hybrid apps

Installing the latest update

To install the latest update, you can either:

  • Check for updates within Visual Studio. To do so, follow these steps:
    1. In the Visual Studio menu, click Tools->Extensions and Updates->Updates.
    2. You should see the update available for Office 365 API Tools.
    3. Click Update to update to the latest version.

–OR–

  • Download the extension and install it manually.

Once you’ve updated, you can invoke the Office 365 API tool as usual, that is, by going to your project node in the Solution Explorer and selecting Add->Connected Service from the context menu.

Looking forward to seeing your Apps out there when I visit the stores!!


MSDN references

Also check out the new SharePoint Online Solution Pack for branding and provisioning. This package also contains some examples, which originate from the AMS reference implementations. Here are the direct links for the Solution Pack.

You can find an introduction to this SharePoint Online Solution Pack for branding and provisioning in the following blog post: Introduction to SharePoint Online Solution Pack for branding and provisioning released.

Getting Started with Apps for Office: The JavaScript API for Office

This section briefly describes the subset of the JavaScript API for Office you can call from content and task pane apps. See Understanding the JavaScript API for Office for an overview of the features of the entire API, and Apps for Office code samples for additional examples.

Before reading this section, use the links below to explore API diagrams that show the members of the API supported in content and task pane apps and the Office host applications that support these app types.

Object model diagrams are available both by app type (content apps and task pane apps) and by host application (Excel, PowerPoint, Project, and Word). You can also download the complete set of object model maps for each app type and host application.

You can categorize the primary objects and methods supported by content and task pane apps as follows:

  1. Common objects shared with other apps for Office

    These objects include Office, Context, and AsyncResult. The Office object is the root object of the JavaScript API for Office. The Context object represents the app’s runtime environment. Both Office and Context are the fundamental objects for any app for Office. The AsyncResult object represents the results of an asynchronous operation, such as the data returned to the getSelectedDataAsync method, which reads what a user has selected in a document.

  2. The Document object

    The majority of the API available to content and task pane apps is exposed through the methods, properties, and events of the Document object. Using this subset of the API, your content or task pane app can perform the tasks described later in this topic.

    A content or task pane app can use the Office.context.document property to access the Document object, and through it, can access the key members of the API for working with data in documents, such as the Bindings and CustomXmlParts objects, and the getSelectedDataAsync, setSelectedDataAsync, and getFileAsync methods. The Document object also provides the mode property for determining whether a document is read-only or in edit mode, the url property to get the URL of the current document, and access to the Settings object. The Document object also supports adding event handlers for the SelectionChanged event, so you can detect when a user changes his or her selection in the document.

    A content or task pane app can access the Document object only after the DOM and runtime environment have been loaded, typically in the event handler for the Office.initialize event. For information about the flow of events when an app is initialized, and how to check that the DOM and runtime loaded successfully, see Loading the DOM and runtime environment.

  3. Objects for working with specific features

    To work with specific features of the API, your content or task pane app can work with the following objects and methods:

    • Use the methods of the Bindings object to create or get bindings, and then work with their data using the methods and properties of the Binding object.
    • Use the CustomXmlParts, CustomXmlPart and associated objects to create and manipulate custom XML parts in Word documents.
    • Use the File and Slice objects to create a copy of the entire document, break it into chunks or “slices”, and then read or transmit the data in those slices.
    • Use the Settings object to save custom data, such as user preferences, and app state.
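
As a point of reference for item 2 in this list, the following is a minimal initialization sketch. It assumes the standard Office.js script is referenced on the page; the write helper is the same hypothetical page helper used in the samples later in this topic, and the Document.mode property it reports is described above.

Office.initialize = function (reason) {
    // Office.js calls this handler once the DOM and runtime environment are loaded;
    // only then is it safe to access Office.context.document.
    var doc = Office.context.document;
    write('App initialized (' + reason + '). Document mode: ' + doc.mode);
};

// Function that writes to a div with id='message' on the page.
function write(message){
    document.getElementById('message').innerText += message; 
}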

Important: Some of the API members described in this topic aren’t supported across all Office applications that can host content and task pane apps. To determine which members are supported, see any of the following resources:

For a high-level summary of the JavaScript API for Office support available across Office host applications, see the API support matrix in the “Understanding the JavaScript API for Office” topic.

The following sections highlight the fundamental concepts for creating content and task pane apps for Word, Excel, PowerPoint, and Project. For more details about a concept, see the references at the end of the concept, and also the Additional resources section.

You can read or write to the user’s current selection in a document, spreadsheet, or presentation. Depending on the host application for your app, you can specify the type of data structure to read or write as a parameter in the getSelectedDataAsync and setSelectedDataAsync methods of the Document object. For example, you can specify any type of data (text, HTML, tabular data, or Office Open XML) for Word, text and tabular data for Excel, and text for PowerPoint and Project. You can also create event handlers to detect changes to the user’s selection. The following example gets data from the selection as text using the getSelectedDataAsync method.

Office.context.document.getSelectedDataAsync(
    Office.CoercionType.Text, function (asyncResult) {
        if (asyncResult.status == Office.AsyncResultStatus.Failed) {
            write('Action failed. Error: ' + asyncResult.error.message);
        }
        else {
            write('Selected data: ' + asyncResult.value);
        }
    });

// Function that writes to a div with id='message' on the page.
function write(message){
    document.getElementById('message').innerText += message; 
}
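
The corresponding write operation follows the same pattern. The following is a minimal sketch that writes a placeholder string to the current selection using setSelectedDataAsync; the string itself is purely illustrative.

Office.context.document.setSelectedDataAsync(
    'Hello from my app for Office!',
    { coercionType: Office.CoercionType.Text },
    function (asyncResult) {
        if (asyncResult.status == Office.AsyncResultStatus.Failed) {
            write('Action failed. Error: ' + asyncResult.error.message);
        }
        else {
            write('Data written to the current selection.');
        }
    });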

For more details and examples, see Reading and writing data to the active selection in a document or spreadsheet.

As described in the previous section, you can use the getSelectedDataAsync and setSelectedDataAsync methods to read or write to the user’s current selection in a document, spreadsheet, or presentation. However, if you would like to access the same region in a document across sessions of running your app without requiring the user to make a selection, you should first bind to that region. You can also subscribe to data and selection change events for that bound region.

You can add a binding by using the addFromNamedItemAsync, addFromPromptAsync, or addFromSelectionAsync methods of the Bindings object. These methods return an identifier that you can use to access data in the binding, or to subscribe to its data change or selection change events.

The following is an example that adds a binding to the currently selected text in a document, by using the Bindings.addFromSelectionAsync method.

Office.context.document.bindings.addFromSelectionAsync(
    Office.BindingType.Text, { id: 'myBinding' }, function (asyncResult) {
    if (asyncResult.status == Office.AsyncResultStatus.Failed) {
        write('Action failed. Error: ' + asyncResult.error.message);
    } else {
        write('Added new binding with type: ' +
            asyncResult.value.type + ' and id: ' + asyncResult.value.id);
    }
});

// Function that writes to a div with id='message' on the page.
function write(message){
    document.getElementById('message').innerText += message; 
}
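
Once the binding exists, you can retrieve it by its identifier and read the data it currently contains. The following is a minimal sketch that assumes the ‘myBinding’ identifier created in the previous example.

Office.context.document.bindings.getByIdAsync('myBinding', function (asyncResult) {
    if (asyncResult.status == Office.AsyncResultStatus.Failed) {
        write('Action failed. Error: ' + asyncResult.error.message);
    } else {
        // asyncResult.value is the Binding object; read its data as plain text.
        asyncResult.value.getDataAsync({ coercionType: Office.CoercionType.Text },
            function (dataResult) {
                if (dataResult.status == Office.AsyncResultStatus.Succeeded) {
                    write('Bound text: ' + dataResult.value);
                }
            });
    }
});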

For more details and examples, see Binding to regions in a document or spreadsheet.

If your task pane app runs in PowerPoint or Word, you can use the Document.getFileAsync, File.getSliceAsync, and File.closeAsync methods to get an entire presentation or document.

When you call Document.getFileAsync, you get a copy of the document in a File object. The File object provides access to the document in “chunks” represented as Slice objects. When you call getFileAsync, you can specify the file type (text or compressed Office Open XML format) and the size of the slices (up to 4 MB). To access the contents of the File object, you then call File.getSliceAsync, which returns the raw data in the Slice.data property. If you specified compressed format, you will get the file data as a byte array. If you are transmitting the file to a web service, you can transform the compressed raw data to a base64-encoded string before submission. Finally, when you are finished getting slices of the file, use the File.closeAsync method to close the document.
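
The following is a minimal sketch of that sequence. The 64 KB slice size and reading only the first slice are illustrative choices; a real app would loop over all of the slices before closing the file.

// Request the document as compressed Office Open XML, in slices of up to 64 KB.
Office.context.document.getFileAsync(Office.FileType.Compressed,
    { sliceSize: 65536 },
    function (asyncResult) {
        if (asyncResult.status == Office.AsyncResultStatus.Failed) {
            write('Action failed. Error: ' + asyncResult.error.message);
            return;
        }
        var file = asyncResult.value;
        // Read the first slice; file.sliceCount reports how many slices there are in total.
        file.getSliceAsync(0, function (sliceResult) {
            if (sliceResult.status == Office.AsyncResultStatus.Succeeded) {
                write('Got slice 0 with ' + sliceResult.value.size + ' bytes.');
            }
            // Close the file when you are finished with it.
            file.closeAsync();
        });
    });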

For more details, see how to get the whole document from an app for PowerPoint or Word.

Using the Office Open XML file format and content controls, you can add custom XML parts to a Word document and bind elements in the XML parts to content controls in that document. When you open the document, Word reads and automatically populates bound content controls with data from the custom XML parts. Users can also write data into the content controls, and when the user saves the document, the data in the controls will be saved to the bound XML parts. Task pane apps for Word can use the Document.customXmlParts property and the CustomXmlParts, CustomXmlPart, and CustomXmlNode objects to read and write data dynamically to the document.

Custom XML parts may be associated with namespaces. To get data from custom XML parts in a namespace, use the CustomXmlParts.getByNamespaceAsync method.

You can also use the CustomXmlParts.getByIdAsync method to access custom XML parts by their GUIDs. After getting a custom XML part, use the CustomXmlPart.getXmlAsync method to get the XML data.

To add a new custom XML part to a document, use the Document.customXmlParts property to get the custom XML parts that are in the document, and call the CustomXmlParts.addAsync method.
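
The following is a minimal sketch that adds a custom XML part and then reads its XML back. The inventory markup and its namespace are placeholder values for illustration only, not part of any Office schema.

var xml = '<inventory xmlns="http://contoso.com/inventory"><item>Widget</item></inventory>';
Office.context.document.customXmlParts.addAsync(xml, function (addResult) {
    if (addResult.status == Office.AsyncResultStatus.Failed) {
        write('Action failed. Error: ' + addResult.error.message);
        return;
    }
    // addResult.value is the newly added CustomXmlPart; read its XML back out.
    addResult.value.getXmlAsync(function (xmlResult) {
        if (xmlResult.status == Office.AsyncResultStatus.Succeeded) {
            write('Added custom XML part: ' + xmlResult.value);
        }
    });
});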

For detailed information about how to work with custom XML parts with a task pane app, see Creating Better Apps for Word with Office Open XML.

Often you need to save custom data for your app, such as a user’s preferences or the app’s state, and access that data the next time the app is opened. You can use common web programming techniques to save that data, such as browser cookies or HTML 5 web storage. Alternatively, if your app runs in Excel, PowerPoint, or Word, you can use the methods of the Document.Settings object. Data created with the Settings object is stored in the spreadsheet, presentation, or document that the app was inserted into and saved with. This data is available to only the app that created it.

To avoid roundtrips to the server where the document is stored, data created with the Settings object is managed in memory at runtime. Previously saved settings data is loaded into memory when the app is initialized, and changes to that data are only saved back to the document when you call the Settings.saveAsync method. Internally, the data is stored in a serialized JSON object as name/value pairs. You use the get, set, and remove methods of the Settings object to read, write, and delete items from the in-memory copy of the data. The following line of code shows how to create a setting named themeColor and set its value to ‘green’.

Office.context.document.settings.set('themeColor', 'green');

Because the set and remove methods act only on the in-memory copy of the settings data, you must call saveAsync to persist any changes to settings data into the document your app is working with.
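
The following is a minimal sketch of that save step, continuing from the themeColor setting shown above.

// Write to the in-memory copy of the settings, then persist it to the document.
Office.context.document.settings.set('themeColor', 'green');
Office.context.document.settings.saveAsync(function (asyncResult) {
    if (asyncResult.status == Office.AsyncResultStatus.Failed) {
        write('Settings save failed. Error: ' + asyncResult.error.message);
    } else {
        write('Saved themeColor = ' + Office.context.document.settings.get('themeColor'));
    }
});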

For more details about working with custom data using the methods of the Settings object, see Persisting app state and settings.

If your task pane app runs in Project, your app can read data from some of the project fields, resource fields, and task fields in the active project. To do that, you use the methods and events of the ProjectDocument object, which extends the Document object to provide additional Project-specific functionality.
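
The following is a minimal sketch of that pattern. It assumes a task is currently selected in the active project and reuses the hypothetical write helper from the earlier samples; treat the exact member names as ones to verify against the Project 2013 apps documentation.

// In Project, Office.context.document is a ProjectDocument.
Office.context.document.getSelectedTaskAsync(function (taskResult) {
    if (taskResult.status == Office.AsyncResultStatus.Failed) {
        write('Action failed. Error: ' + taskResult.error.message);
        return;
    }
    var taskId = taskResult.value;
    // Read the name field of the selected task.
    Office.context.document.getTaskFieldAsync(taskId, Office.ProjectTaskFields.Name,
        function (fieldResult) {
            if (fieldResult.status == Office.AsyncResultStatus.Succeeded) {
                write('Selected task name: ' + fieldResult.value.fieldValue);
            }
        });
});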

For examples of reading Project data, see How to: Create your first task pane app for Project 2013 by using a text editor.

Your app uses the Permissions element in its manifest to request permission to access the level of functionality it requires from the JavaScript API for Office. For example, if your app requires read/write access to the document, its manifest must specify ReadWriteDocument as the text value of its Permissions element. Because permissions exist to protect a user’s privacy and security, as a best practice you should request the minimum permission level that your app needs for its features. The following example shows how to request the ReadDocument permission in a task pane app’s manifest.

<?xml version="1.0" encoding="utf-8"?>
…
  <Permissions>ReadDocument</Permissions>
…

Figure 1 shows the 5 levels of permissions that you can specify for a task pane app. For more information, see Requesting permissions for task pane apps.

Figure 1. The 5-level permission model for task pane apps


Figure 2 shows the 4 levels of permissions available to a content app. For more information, see Requesting permissions for content apps.

Figure 2. The 4-level permission model for content apps
