Category Archives: Powershell

How To : Run SAP Applications on the Microsoft Platform

SAP on SQL General Update for Customers & Partners April 2014

SAP and Microsoft are continuously adding new features and functionalities to the SAP on SQL Server platform. The key objective of the SAP on Windows SQL port is to deliver the best performance and availability at the lowest TCO. This blog includes updates, fixes, enhancements and best practice recommendations collated over recent months.


Key topics in this blog: SQL Server 2014 First Customer Shipment program now available, Intel release new powerful E7 v2 processors, Windows 2012 R2 is now GA and customers are recommended to upgrade SQL Server 2012 CU to avoid a bug that can impact performance.

1.        SQL Server 2014 Release & Integration with SAP NetWeaver

SQL Server 2014 is now released and publicly available. SAP is proceeding with the certification and integration of SQL Server 2014 with NetWeaver applications and other applications such as Business Objects.

Customers planning to implement SQL Server 2014 are advised to target upgrading to the following support package stacks. Customers who wish to start projects on SQL Server 2014, or who require early production support on SQL Server 2014, should follow the instructions in Note 1966681 – Release planning for Microsoft SQL Server 2014.

Required SAP ABAP NetWeaver Support Package Stacks for SQL Server 2014

SAP Software – Support Package Stack (SPS)
SAP NETWEAVER 7.0 SPS 29, for SAP BW: SPS 30 (7.0 SPS 30 contains SAP_BASIS SP 30 and SAP_BW SP 32)
SAP EHP1 FOR SAP NETWEAVER 7.0 SPS 15, for SAP BW: SPS 15
SAP EHP2 FOR SAP NETWEAVER 7.0 SPS 14, for SAP BW: SPS 15
SAP EHP3 FOR SAP NETWEAVER 7.0 SPS 9
SAP NETWEAVER 7.1 SPS17
SAP EHP1 FOR SAP NETWEAVER 7.1 SPS 12, for SAP BW: SPS13
SAP NETWEAVER 7.3 SPS 10, for SAP BW: SPS11
SAP EHP1 FOR SAP NETWEAVER 7.3 SPS 9, for SAP BW: SPS11
SAP NETWEAVER 7.4 SPS 4, for SAP BW: SPS06

Java-only systems generally do not require a dedicated support package stack for SQL Server 2014.

Note 1966701 – Setting up Microsoft SQL Server 2014

Note 1966681 – Release planning for Microsoft SQL Server 2014

New features in SQL Server 2014 are documented in a five part blog series here

Also see key new SQL Server Engine Features and SQL Server Managed Backups to Azure. Free trial Azure accounts can be created for backup testing. To test latency to the nearest Azure Region use this tool http://azurespeedtest.azurewebsites.net/

2.        New Enhancements for Performance and Functionality for SAP BW Systems

SAP BW customers can reduce database size, improve query performance and improve load/process chain performance by implementing SAP Support Packs, Notes and SQL Server Column Store.

All SAP BW customers are strongly recommended to review these blogs:

SQL Server Column-Store: Updated SAP BW code

Optimizing BW Query Performance

Increasing BW cube compression performance

SQL Server Column-Store with SAP BW Aggregates

Performance of index creation

It is recommended to upgrade SAP BW systems to SQL Server 2012 or SQL Server 2014 and implement Column Store. Customer experiences so far have shown dramatic space savings of 20-45% and very good performance improvements.

BW customers with poor cube compression performance are highly encouraged to implement the fixes in Increasing BW cube compression performance

3.        SAP Note 1612283 Updated – IvyBridge EX E7 v2 Processor Released

Intel has released a new processor for 4-socket and 8-socket servers. The Intel E7 v2 IvyBridge EX processor almost doubles performance compared to the previous Westmere EX E7 processor.

Even the largest SAP systems in the world are able to run on standard low cost Intel based hardware. Intel based systems are in fact considerably more powerful than high cost proprietary UNIX systems such as IBM pSeries. In addition to performance enhancements many reliability features have been added onto modern servers and built into the Windows operating system to allow Windows customers to meet the same service levels on Wintel servers that previously required high cost proprietary hardware.

  1. 4-socket HP DL580 Intel E7 v2 = 133,570 SAPS or 24,450 users
  2. 4-socket IBM pSeries Power7+ = 68,380 SAPS or 12,528 users
  3. 8-socket Fujitsu Intel E7 v2 = 259,680 SAPS or 47,500 users

Source: SAP SD 2 Tier Benchmarks http://global.sap.com/solutions/benchmark/sd2tier.epx   See end of blog for full disclosure

SAP Note 1612283 – Hardware Configuration Standards and Guidance has been updated to include recommendations for new Intel E5 v2, E7 v2 and AMD equivalent based systems. Customers are advised to ensure that new hardware is purchased with sufficient RAM as per the guidance in this SAP Note.

Most SAP Systems exhibit a ratio of Database SAPS to SAP application server SAPS of about 20% for DB and 80% for SAP application server. An 8 socket Intel server can deliver more than 200,000 SAPS, meaning one SAP on SQL Server system can deliver more than 1,000,000 SAPS. There are few if any single SAP systems in the world that are more than 1,000,000 SAPS, therefore these powerful platforms are recommended as consolidation platforms. This is explained in attachment to SAP Note 1612283. Many SQL databases and SAP systems can be consolidated onto a single powerful Windows & SQL Server infrastructure.

4.        SQL Server 2012 Cumulative Update Recommended

SQL Server 2012 Service Pack 1 CU3, CU4, CU5 and CU6 contain a bug that can impact performance on SAP systems.

It is strongly recommended to update to SQL Server 2012 Service Pack 1 CU9 (the bug is also fixed in CU7 and CU8).

Microsoft will release a Service Pack 2 for SQL Server 2012 in the future.

SQL Service Packs and Cumulative Updates can be found here: http://blogs.msdn.com/b/sqlreleaseservices/

Bug is documented here http://support.microsoft.com/kb/2895494
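
If you are unsure which cumulative update an instance is actually running, you can check the build reported by SQL Server before and after patching. A minimal sketch, assuming Invoke-Sqlcmd (SQL Server PowerShell module) is available and that the hypothetical instance name SAPPRD is replaced with your own:

# "SAPPRD" is a placeholder; replace with your SAP database server\instance
Invoke-Sqlcmd -ServerInstance "SAPPRD" -Query "SELECT SERVERPROPERTY('ProductVersion') AS Build, SERVERPROPERTY('ProductLevel') AS Level"

The build number returned can then be compared against the cumulative update list on the SQL Release Services blog linked above.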

5.        Windows Server 2012 R2 – Shared Virtual Hard Disk for SAP ASCS Cluster

Windows Server 2012 R2 is now Generally Available for most NetWeaver applications and includes new features for Hyper-V Virtualization.

Feedback from customers has indicated that the vHBA feature offered in Windows 2012 requires that the OEM device drivers and firmware on the HBA card be up to date.

If the device drivers and firmware are not up to date, the vHBA can hang.

The SAP Central Services Highly Available cluster requires a shared disk to hold the /SAPMNT share. Therefore a shared disk is required inside a Hyper-V Virtual Machine.

There are now three options:

  1. iSCSI – generally only for small customers
  2. vHBA – suitable for customers of any size, but drivers/firmware must be up to date
  3. Shared Virtual Hard Disk – available now in Windows 2012 R2. Simple to set up and configure. RECOMMENDED

  Windows 2012 R2 offers this feature and it is generally recommended for customers wanting to create guest clusters on Hyper-V to protect the SAP Central Services.

    SQL Server can also utilize Shared Virtual Hard Disk, however we generally recommend using SQL Server 2012 AlwaysOn for providing high availability

    Aidan Finn provides a useful blog on configuring Shared Virtual Hard Disk

    For more information about SAP on Hyper-V see this blog series How to Deploy SAP on Microsoft Private Cloud with Hyper-V 3.0 and SAP Note 1409608 – Virtualization on Windows
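
    As a rough illustration of how a shared VHDX can be created and attached to both guest cluster nodes with the Hyper-V PowerShell module (a sketch only; VM names and the CSV path are hypothetical, and the -SupportPersistentReservations switch requires Windows Server 2012 R2):

    # Create a fixed-size VHDX on a Cluster Shared Volume (hypothetical path) to hold /SAPMNT
    New-VHD -Path "C:\ClusterStorage\Volume1\SAPASCS\sapmnt.vhdx" -Fixed -SizeBytes 50GB

    # Attach the same VHDX to both guest cluster nodes as a shared disk
    foreach ($vm in "SAPASCS1", "SAPASCS2") {
        Add-VMHardDiskDrive -VMName $vm -Path "C:\ClusterStorage\Volume1\SAPASCS\sapmnt.vhdx" -SupportPersistentReservations
    }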

    6.        Important Notes for SAP BW Migration Customers

    Customers migrating SAP BW systems using R3load must pay particular attention to the SAP System Copy Notes and the supplementary SAP Note 888210 – NW 7.**: System copy (supplementary note)

    SAP BW and some other SAP components have special properties on some tables. These special properties are defined in DBMS specific definition files generated by SMIGR_CREATE_DDL.

    Prior to exporting an SAP BW system, SMIGR_CREATE_DDL must be run. There are some important updates for the program SMIGR_CREATE_DDL that must be applied in the source system before the export; SAP Note 888210 lists all required notes. BW systems running very old support packages must be checked very carefully, and possibly other notes should be applied. The following notes should be implemented:

    1901705 – Long import runtime with certain tables on MSSQL

    1747330 – Missing data base indexes after system copy to MSSQL

    1993315 – SMIGR_CREATE_DDL: double columns in create index statements

    1771177 – SQL Server 2012 column-store support for SAP BW

    Customers migrating to SQL Server should review this blog: SAP OS/DB Migration to SQL Server–FAQ v5.2 April 2014

    7.        SQL Server AlwaysOn – Parameter AlwaysOn HealthCheckTimeout & LeaseTimeout. What are these values?

    SQL Server AlwaysOn leverages Windows Server Failover Cluster (WSFC) technology to determine resource health, quorum and control the status of a SQL Server Availability Group. The WSFC resource DLL of the availability group performs a health check of the primary replica by calling the sp_server_diagnostics stored procedure on the instance of SQL Server that hosts the primary replica. sp_server_diagnostics returns results at an interval that equals 1/3 of the health-check timeout threshold for the availability group. The default health-check timeout threshold is 30 seconds, which causes sp_server_diagnostics to return at a 10-second interval. If sp_server_diagnostics is slow or is not returning information, the resource DLL will wait for the full interval of the health-check timeout threshold before determining that the primary replica is unresponsive. If the primary replica is unresponsive or if the sp_server_diagnostics returns a failure level equal to or in excess of the configured failure level, an automatic failover is initiated.

    In addition to the above there is a further layer of logic to prevent another scenario:

    1. The SQL Server primary replica becomes extremely busy, so busy that the operating system or SQL Server is saturated and cannot reply to the WSFC resource DLL within the configured period of time (default 30 seconds).
    2. The Windows Failover Cluster tries to stop SQL Server on the busy node, but is unable to communicate because the server is so busy. WSFC assumes the node has become isolated from the network, starts the failover process and starts SQL Server on another node. SQL Server is now running on another host.
    3. Eventually the condition causing the original host to be extremely busy clears, and client connections might continue to be processed on the first node (a very bad thing, because we now have two "Primaries").
    4. To prevent this, the primary node must periodically connect to the WSFC resource DLL and obtain a lease. This is controlled by the parameter LeaseTimeout. The primary AlwaysOn node must renew this lease, otherwise it will take the database offline.

    Therefore there are two important parameters – HealthCheckTimeout and LeaseTimeout.

    Some customers have encountered problems with unexplained AlwaysOn failovers during activities such as initializing new Log Shipping or AlwaysOn Nodes via network or network backup while SQL Server is very busy. It is strongly recommended to use good quality 10G network cards, run Windows 2012 or Windows 2012 R2 and avoid using 3rd party network teaming utilities like HP Network Configuration Utility (NCU). In rare cases increasing both of these parameters may be needed.
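
    To review or change these thresholds, the availability group can be queried and altered from PowerShell. A minimal sketch, assuming Invoke-Sqlcmd and the FailoverClusters module are available and that the availability group and its cluster resource are both named SAPAG (hypothetical names; values are in milliseconds):

    # Show the current health-check timeout and failure condition level for the AG
    Invoke-Sqlcmd -ServerInstance "SAPPRD" -Query "SELECT name, health_check_timeout, failure_condition_level FROM sys.availability_groups WHERE name = 'SAPAG'"

    # Raise the health-check timeout from the 30-second default to 60 seconds
    Invoke-Sqlcmd -ServerInstance "SAPPRD" -Query "ALTER AVAILABILITY GROUP [SAPAG] SET (HEALTH_CHECK_TIMEOUT = 60000)"

    # LeaseTimeout is a private property of the AG cluster resource (run on a cluster node)
    Get-ClusterResource "SAPAG" | Set-ClusterParameter -Name LeaseTimeout -Value 40000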

    Additional Blog: SQL Server 2012 AlwaysOn – Part 7 – Details behind an AlwaysOn Availability Group

     

    8.        FusionIO Format Settings

    FusionIO and other in-server SSD devices are now very common and are strongly recommended for customers that require a high performance SQL Server infrastructure. The use of FusionIO and SSD is further recommended and detailed in Note 1612283 – Hardware Configuration Standards and Guidance and Infrastructure Recommendations for SAP on SQL Server: “in-memory” Performance.

    FusionIO devices are usually used for holding SQL Server transaction logs and tempdb. If the transaction log and tempdb datafiles and log files are placed on FusionIO we recommend:

    1. Format the FusionIO card for maximum WRITE. This will reduce the usable space significantly.
    2. The FusionIO physical-level format should be 4k (not 8k; it is a proprietary format size).
    3. Make sure the server BIOS is set for maximum fan blow-out, as FusionIO will slow down if it becomes hot (the device throttles IO as temperature increases).
    4. Update the FusionIO firmware and server BIOS to the latest versions.
    5. Format the Windows disks with an NTFS Allocation Unit Size of 64k.

    9.        VHDX/VHD Format Settings

    The Windows NTFS file system Allocation Unit Size defaults to 4096 bytes. Smaller Allocation Unit Sizes (AUS) are more efficient for storing many small files; larger AUS values such as 64k are more efficient for larger files.

    The file system holding the VHDX files for SQL Server virtual machines running on Hyper-V may benefit from a 64 kilobyte NTFS AUS size.

    The NTFS AUS of the file system inside the VHDX file must be 64 kilobytes.

    The AUS size can be checked with the command below:

    fsutil fsinfo ntfsinfo <Drive letter>:
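
    When formatting a new data volume, the 64k allocation unit size can be set up front with the Storage module cmdlets. A small sketch (drive letter and label are examples only, and formatting destroys any existing data on the volume):

    # Format drive D: with a 64 KB NTFS allocation unit size
    Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQLDATA"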

    10.        Do Not Install SAPGUI on SAP Servers

    Windows Servers have the ability to run many desktop PC applications such as SAPGUI and Internet Explorer however it is strongly recommended not to install this software on SAP servers, particularly production servers.

    1. To improve the reliability of an operating system it is recommended to install as few software packages as possible. This will not only improve reliability and performance, but will also make debugging any issues considerably simpler.
    2. SAPGUI is in practice almost impossible to remove completely. The SAPGUI installation installs DLLs into the Windows directory.
    3. “A server is a server, a PC is a PC”. Customers are encouraged to restrict access to production servers by implementing a server hardening procedure. SAP servers should not be used as administration consoles, and there should be no need to directly connect to a server. Almost all administration can be done remotely.


    Links

    How It Works: SQL Server AlwaysOn Lease Timeout

    http://blogs.msdn.com/b/psssql/archive/2012/09/07/how-it-works-sql-server-alwayson-lease-timeout.aspx

    Flexible Failover Policy for Automatic Failover of an Availability Group (SQL Server)

    http://technet.microsoft.com/en-us/library/hh710061.aspx

    Configure HealthCheckTimeout Property Settings

    http://technet.microsoft.com/en-us/library/ff878665.aspx


    How To : Use Powershell and TFS together

    The absolute basics

    Where does a newbie to Windows PowerShell start—particularly in regards to TFS? There are a few obvious places. I’m hardly the first person to trip across the natural peanut-butter-and-chocolate nature of TFS and Windows PowerShell together. In fact, the TFS Power Tools contain a set of cmdlets for version control and a few other functions.


    There is one issue when downloading them, however. The “typical” installation of the Power Tools leaves out the Windows PowerShell cmdlets! So make sure you choose “custom” and select those Windows PowerShell cmdlets manually.

    After they’re installed, you also might need to manually add them to Windows PowerShell before you can start using them. If you try Get-Help for one of the cmdlets and see nothing but an error message, you know you’ll need to do so (and not simply use Update-Help, as the error message implies).

    Fortunately, that’s simple. Using the following command will fix the issue:

    Add-PSSnapin Microsoft.TeamFoundation.PowerShell

    See the before and after:


    A better way to review what’s in the Power Tools and to get the full list of cmdlets installed by the TFS Power Tools is to use:

    Get-Command -module Microsoft.TeamFoundation.PowerShell

    This method doesn’t depend on the developers including “TFS” in all the cmdlet names. But as it happens, they did follow the Cmdlet Development Guidelines, so both commands return the same results.

    Something else I realized when working with the TFS PowerShell cmdlets: for administrative tasks, like those I’m most interested in, you’ll want to launch Windows PowerShell as an administrator. And as long-time Windows PowerShell users already know, if you want to enable the execution of remote scripts, make sure that you set your script execution policy to RemoteSigned. For more information, see How Can I Write and Run a Windows PowerShell Script?.
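
    For example (a one-time setting, run from an elevated prompt; RemoteSigned lets locally created scripts run while still requiring downloaded scripts to be signed):

    Set-ExecutionPolicy RemoteSigned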

    Of all the cmdlets provided with the TFS Power Tools, one of my personal favorites is Get-TfsServer, which lets me get the instance ID of my server, among other useful things.  My least favorite thing about the cmdlets in the Power Tools? There is little to no useful information for TFS cmdlets in Get-Help. Awkward! (There’s a community bug about this if you want to add your comments or vote on it.)

    A different favorite: Get-TfsItemHistory. The following example not only demonstrates the power of the cmdlets, but also some of their limitations:

    Get-TfsItemHistory -HistoryItem . -Recurse -StopAfter 5 |
        ForEach-Object { Get-TfsChangeset -ChangesetNumber $_.ChangesetId } |
        Select-Object -ExpandProperty Changes |
        Select-Object -ExpandProperty Item

    This snippet gets the last five changesets in or under the current directory, and then it gets the list of files that were changed in those changesets. Sadly, this example also highlights one of the shortcomings of the Power Tools cmdlets: Get-TfsItemHistory cannot be directly piped to Get-TfsChangeset because the former outputs objects with ChangesetId properties, and the latter expects a ChangesetNumber parameter.
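
    If you prefer a pure pipeline, one possible workaround is to rename the property on the way through with a calculated property. This is only a sketch and assumes Get-TfsChangeset binds ChangesetNumber from the pipeline by property name, which is worth verifying against your Power Tools version:

    Get-TfsItemHistory -HistoryItem . -Recurse -StopAfter 5 |
        Select-Object @{ Name = 'ChangesetNumber'; Expression = { $_.ChangesetId } } |
        Get-TfsChangeset |
        Select-Object -ExpandProperty Changes |
        Select-Object -ExpandProperty Item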

    One of the nice things is that raw TFS API objects are being returned, and the snap-ins define custom Windows PowerShell formatting rules for these objects. In the previous example, the objects are instances of VersionControl.Client.Item, but the formatting approximates that seen with Get-ChildItem.

    So the cmdlets included in the TFS Power Tools are a good place to start if you’re just getting started with TFS and Windows PowerShell, but they’re somewhat limited in scope. Most of them are simply piping results of the tf.exe commands that are already available in TFS. You’ll probably find yourself wanting to do more than just work with these.

     

    Resource – Office 365 PowerShell Cmdlets

    Before you can start working with the SharePoint Online cmdlets you must first download those cmdlets. Having the cmdlets as a separate download (separate from SharePoint on-premises that is) allows you to use any machine to run the cmdlets.


     

    All we have to do is make sure we have PowerShell V3 installed along with the .NET Framework v4 or better (required by PowerShell V3). With these prerequisites in place simply download and install the cmdlets from Microsoft: http://www.microsoft.com/en-us/download/details.aspx?id=35588.

    Once installed open the SharePoint Online Management Shell by clicking Start > All Programs > SharePoint Online Management Shell > SharePoint Online Management Shell.

    Just like with the SharePoint Management Shell for on-premises deployments the SharePoint Online Management Shell is just a standard PowerShell window. You can see this by looking at the target attribute of the shortcut properties:

    C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoExit -Command "Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking;"

    As you can see from the shortcut, a PowerShell module is loaded: Microsoft.Online.SharePoint.PowerShell. Unlike with SharePoint on-premises, this is not a snap-in but a module, which is basically the new, better way of loading cmdlets. The nice thing about this is that, like with the snap-in, you can load the module in any PowerShell window and are not limited to using the SharePoint Online Management Shell.

    (The -DisableNameChecking parameter of the Import-Module cmdlet simply tells PowerShell to not bother checking for valid verbs used by the loaded cmdlets and avoids warnings that are generated by the fact that the module does use an invalid verb – specifically, Upgrade). Note that unlike with the snap-in, there’s no need to specify the threading options because the cmdlets don’t use any unmanaged resources which need disposal.

    Getting Connected

    Now that you’ve got the SharePoint Online Management Shell installed you are now ready to connect to your tenant administration site. This initial connection is necessary to establish a connection context which stores the URL of the tenant administration site and the credentials used to connect to the site. To establish the connection use the Connect-SPOService cmdlet:

    Connect-SPOService -Url https://contoso-admin.sharepoint.com -Credential gary@contoso.com

     

    Running this cmdlet basically just stores a Microsoft.SharePoint.Client.ClientContext object in an internal static variable (or a sub-classed version of it at least). Future cmdlet calls then use this object to connect to the site, thereby negating the need to constantly provide the URL and credentials. (The downside of this object being internal is that we can’t extend the cmdlets to add our own, unless we want to use reflection which would be unsupported). To clear this internal variable (and make the session secure against other code that may attempt to use it) you can run the Disconnect-SPOService cmdlet. This cmdlet takes no parameters.

    One tip to help make loading the module and then connecting to the site a tad bit easier would be to encapsulate the commands into a single helper method. In the following example I created a simple helper method named Connect-SPOSite which takes in the user and the tenant administration site to connect to, however, I default those values so that I only have to provide the password when I wish to get connected. I then put this method in my profile file (which you can edit by typing “ise $profile.CurrentUsersAllHosts”):

    function Connect-SPOSite() {
        param (
            $user = "gary@contoso.com",
            $site = "https://contoso-admin.sharepoint.com"
        )
        if ((Get-Module Microsoft.Online.SharePoint.PowerShell).Count -eq 0) {
            Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking
        }
        $cred = Get-Credential $user
        Connect-SPOService -Url $site -Credential $cred
    }

     

    SPO Cmdlets

    Now that you’re connected you can finally do something interesting. First let’s look at the cmdlets that are available. There are currently only 30 cmdlets available to us and you can see the list of those cmdlets by typing “Get-Command -Module Microsoft.Online.SharePoint.PowerShell”. Note that all of the cmdlets will have a noun which starts with “SPO”. The following is a list of all the available cmdlets:

    • Site Groups
    • Users
      • Add-SPOUser – Add a user to an existing Site Collection Site Group.
      • Get-SPOUser – Get an existing user.
      • Remove-SPOUser – Remove an existing user from the Site Collection or from an existing Site Collection Group.
      • Set-SPOUser – Set whether an existing Site Collection user is a Site Collection administrator or not.
      • Get-SPOExternalUser – Returns external users from the tenant’s folder.
      • Remove-SPOExternalUser – Removes a collection of external users from the tenancy’s folder.
    • Site Collections
      • Get-SPOSite – Retrieve an existing Site Collection.
      • New-SPOSite – Create a new Site Collection.
      • Remove-SPOSite – Move an existing Site Collection to the recycle bin.
      • Repair-SPOSite – If any failed Site Collection scoped health check rules can perform an automatic repair then initiate the repair.
      • Set-SPOSite – Set the Owner, Title, Storage Quota, Storage Quota Warning Level, Resource Quota, Resource Quota Warning Level, Locale ID, and/or whether the Site Collection allows Self Service Upgrade.
      • Test-SPOSite – Run all Site Collection health check rules against the specified Site Collection.
      • Upgrade-SPOSite – Upgrade the Site Collection. This can do a build-to-build (e.g., RTM to SP1) upgrade or a version-to-version (e.g., 2010 to 2013) upgrade. Use the -VersionUpgrade parameter for a version-to-version upgrade.
      • Get-SPODeletedSite – Get a Site Collection from the recycle bin.
      • Remove-SPODeletedSite – Remove a Site Collection from the recycle bin (permanently deletes it).
      • Restore-SPODeletedSite – Restores an item from the recycle bin.
      • Request-SPOUpgradeEvaluationSite  – Creates a copy of the specified Site Collection and performs an upgrade on that copy.
      • Get-SPOWebTemplate – Get a list of all available web templates.
    • Tenants
      • Get-SPOTenant – Retrieves information about the subscription tenant. This includes the Storage Quota size, Storage Quota Allocated (used), Resource Quota size, Resource Quota Allocated (used), Compatibility Range (14-14, 14-15, or 15-15), whether External Services are enabled, and the No Access Redirect URL.
      • Get-SPOTenantLogEntry – Retrieves company logs (as of B2 only BCS logs are available).
      • Get-SPOTenantLogLastAvailableTimeInUtc – Returns the time when the logs are collected.
      • Set-SPOTenant – Sets the Minimum and Maximum Compatibility Level, whether External Services are enabled, and the No Access Redirect URL.
    • Apps
    • Connections

    It’s important to understand that when working with all of the cmdlets which retrieve an object you will only ever be getting a simple data object which has no ability to act upon the source object. For example, the Get-SPOSite cmdlet returns an SPOSite object which has no methods and, though some properties do have a setter, they are completely useless and the object and its properties are not used by any other cmdlet (such as Set-SPOSite). This also means that there is no ability to access child objects (such as SPWeb or SPList items, to name just a couple).

    The other thing to note is the lack of cmdlets for items at a lower scope than the Site Collection. Specifically there is no Get-SPOWeb or Get-SPOList cmdlet or anything of the sort. This can potentially be quite limiting for most real-world uses of PowerShell and, in my opinion, limits the usefulness of these new cmdlets to just the initial setup of a subscription rather than the long-term maintenance of the subscription.

    In the following examples I’ll walk through some examples of just a few of the more common cmdlets so that you can get an idea of the general usage of them.

    Get a Site Collection

    To see the list of Site Collections associated with a subscription or to see the details for a specific Site Collection use the Get-SPOSite cmdlet. This cmdlet has two parameter sets:

    Get-SPOSite [[-Identity] <SpoSitePipeBind>] [-Limit <string>] [-Detailed] [<CommonParameters>]

    Get-SPOSite [-Filter <string>] [-Limit <string>] [-Detailed] [<CommonParameters>]

    The parameter that you’ll want to pay the most attention to is the -Detailed parameter. If this optional switch parameter is omitted then the SPOSite objects that will be returned will only have their properties partially set. Now you might think that this is in order to reduce the traffic between the server and the client, however, all the properties are still sent over the wire, they simply have default values for everything other than a couple core properties (so I would assume the only performance improvement would be in the query on the server). You can see the difference in the values that are returned by looking at a Site Collection with and without the details:

    PS C:\> Get-SPOSite https://contoso.sharepoint.com/ | select *

    LastContentModifiedDate   : 1/1/0001 12:00:00 AM
    Status                    : Active
    ResourceUsageCurrent      : 0
    ResourceUsageAverage      : 0
    StorageUsageCurrent       : 0
    LockIssue                 :
    WebsCount                 : 0
    CompatibilityLevel        : 0
    Url                       :
    https://contoso.sharepoint.com/
    LocaleId                  : 1033
    LockState                 : Unlock
    Owner                     :
    StorageQuota              : 1000
    StorageQuotaWarningLevel  : 0
    ResourceQuota             : 300
    ResourceQuotaWarningLevel : 255
    Template                  : EHS#1
    Title                     :
    AllowSelfServiceUpgrade   : False

    PS C:\> Get-SPOSite https://contoso.sharepoint.com/ -Detailed | select *

    LastContentModifiedDate   : 11/2/2012 4:58:50 AM
    Status                    : Active
    ResourceUsageCurrent      : 0
    ResourceUsageAverage      : 0
    StorageUsageCurrent       : 1
    LockIssue                 :
    WebsCount                 : 1
    CompatibilityLevel        : 15
    Url                       :
    https://contoso.sharepoint.com/
    LocaleId                  : 1033
    LockState                 : Unlock
    Owner                     : s-1-5-21-3176901541-3072848581-1985638908-189897
    StorageQuota              : 1000
    StorageQuotaWarningLevel  : 0
    ResourceQuota             : 300
    ResourceQuotaWarningLevel : 255
    Template                  : STS#0
    Title                     : Contoso Team Site
    AllowSelfServiceUpgrade   : True

    Create a Site Collection

    When we’re ready to create a Site Collection we can use the New-SPOSite cmdlet. This cmdlet is very similar to the New-SPSite cmdlet that we have for on-premises deployments. The following shows the syntax for the cmdlet:

    New-SPOSite [-Url] <UrlCmdletPipeBind> -Owner <string> -StorageQuota <long> [-Title <string>] [-Template <string>] [-LocaleId <uint32>] [-CompatibilityLevel <int>] [-ResourceQuota <double>] [-TimeZoneId <int>] [-NoWait] [<CommonParameters>]

    The following example demonstrates how we would call the cmdlet to create a new Site Collection called “Test”:

    New-SPOSite -Url https://contoso.sharepoint.com/sites/Test -Title "Test" -Owner "gary@contoso.com" -Template "STS#0" -TimeZoneId 10 -StorageQuota 100

     

    Note that the cmdlet also takes in a -NoWait parameter; this parameter tells the cmdlet to return immediately and not wait for the creation of the Site Collection to complete. If not specified then the cmdlet will poll the environment until it indicates that the Site Collection has been created. Using the -NoWait parameter is useful, however, when creating batches of Site Collections thereby allowing the operations to run asynchronously.
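
    As an illustration of that scenario, here is a rough sketch of creating Site Collections in bulk from a CSV file (sites.csv is a hypothetical input file with Url, Title and Owner columns):

    # Each request returns immediately thanks to -NoWait; creation continues in the background
    Import-Csv .\sites.csv | ForEach-Object {
        New-SPOSite -Url $_.Url -Title $_.Title -Owner $_.Owner -Template "STS#0" -TimeZoneId 10 -StorageQuota 100 -NoWait
    }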

    One issue you might bump into is in knowing which templates are available for your use. In the preceding example we are using the “STS#0” template, however, there are other templates available for our use and we can discover them using the Get-SPOWebTemplate cmdlet, as shown below:

    PS C:\> Get-SPOWebTemplate

    Name                     Title                         LocaleId  CompatibilityLevel
    ----                     -----                         --------  ------------------
    STS#0                    Team Site                         1033                  15
    BLOG#0                   Blog                              1033                  15
    BDR#0                    Document Center                   1033                  15
    DEV#0                    Developer Site                    1033                  15
    DOCMARKETPLACESITE#0     Academic Library                  1033                  15
    OFFILE#1                 Records Center                    1033                  15
    EHS#1                    Team Site – SharePoint Onl…     1033                  15
    BICenterSite#0           Business Intelligence Center      1033                  15
    SRCHCEN#0                Enterprise Search Center          1033                  15
    BLANKINTERNETCONTAINER#0 Publishing Portal                 1033                  15
    ENTERWIKI#0              Enterprise Wiki                   1033                  15
    PROJECTSITE#0            Project Site                      1033                  15
    COMMUNITY#0              Community Site                    1033                  15
    COMMUNITYPORTAL#0        Community Portal                  1033                  15
    SRCHCENTERLITE#0         Basic Search Center               1033                  15
    visprus#0                Visio Process Repository          1033                  15

    Give Access to a Site Collection

    Once your Site Collection has been created you may wish to grant users access to the Site Collection. First you may want to create a new SharePoint group (if an appropriate one is not already present) and then you may want to add users to that group (or an existing one). To accomplish these tasks you use the New-SPOSiteGroup cmdlet and the Add-SPOUser cmdlet, respectively.

    Looking at the New-SPOSiteGroup cmdlet you can see that it takes only three parameters, the name of the group to create, the permissions to add to the group, and the Site Collection within which to create the group:

    New-SPOSiteGroup [-Site] <SpoSitePipeBind> [-Group] <string> [-PermissionLevels] <string[]> [<CommonParameters>]

    In the following example I’m creating a new group named “Designers” and giving it the “Design” permission:

    $site = Get-SPOSite https://contoso.sharepoint.com/sites/Test -Detailed

    $group = New-SPOSiteGroup -Site $site -Group "Designers" -PermissionLevels "Design"

    (Note that I’m saving the Site Collection to a variable just to keep the commands a little shorter; you could just as easily provide the string URL directly.)

    Once the group is created we can then use the Add-SPOUser cmdlet to add a user to the group. Like the New-SPOSiteGroup cmdlet this cmdlet takes three parameters:

    Add-SPOUser [-Site] <SpoSitePipeBind> [-LoginName] <string> [-Group] <string> [<CommonParameters>]

    In the following example I’m adding a new user to the previously created group:

    Add-SPOUser -Site $site -Group $group.LoginName -LoginName "tessa@contoso.com"

    Delete and Recover a Site Collection

    If you’ve created a Site Collection that you now wish to delete you can easily accomplish this by using the Remove-SPOSite cmdlet. When this cmdlet finishes the Site Collection will have been moved to the recycle bin and not actually deleted.

    If you wish to permanently delete the Site Collection (and thus remove it from the recycle bin) then you must use the Remove-SPODeletedSite cmdlet. So to do a permanent delete it’s actually a two step process, as shown in the example below where I first move the “Test” Site Collection to the recycle bin and then delete it from the recycle bin:

    Remove-SPOSite https://contoso.sharepoint.com/sites/test -Confirm:$false

    Remove-SPODeletedSite -Identity https://contoso.sharepoint.com/sites/test -Confirm:$false

     

    If you decide that you’d actually like to restore the Site Collection from the recycle bin you can simply use the Restore-SPODeletedSite cmdlet:

    Restore-SPODeletedSite https://contoso.sharepoint.com/sites/test

    Both the Remove-SPOSite and the Restore-SPODeletedSite cmdlets accept a -NoWait parameter which you can provide to tell the cmdlet to return immediately.

    Parting Thoughts

    There are obviously many other cmdlets available to explore (per the previous list), however, I hope that in the simple samples shown in this article you will find that working with the cmdlets is quite easy and fairly intuitive.

    The key thing to remember is that you are working in a stateless environment, so changes to an object such as SPOSite will not affect the actual Site Collection in any way, and cmdlets like Set-SPOSite will not honor changes made to the object's properties: they use nothing more than the URL property to know which Site Collection you are updating.
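
    In other words, to change a property you pass the new value directly to Set-SPOSite rather than editing the retrieved object; for example (URL and values are illustrative):

    # Editing properties on an SPOSite object does nothing; pass the new values to the cmdlet instead
    Set-SPOSite -Identity https://contoso.sharepoint.com/sites/Test -Title "Test Site" -StorageQuota 500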

    Though the existence of these cmdlets is definitely a good start and absolutely better than nothing, I have to say that I’m extraordinarily displeased with the number of available cmdlets and with how the module was implemented.

    My biggest gripe is that the module is not extensible in any way so if I wish to add cmdlets for the management of SPWeb objects or SPList objects I’d have to create a whole new framework which would require an additional login as I wouldn’t be able to leverage the context object created by Connect-SPOService cmdlet.

    This results in a severely limiting product that prevents community and ISV generated solutions from “fitting in” to the existing model. Perhaps one day I’ll create my own set of cmdlets to show Microsoft how it should have been done…perhaps one day I’ll have time for such frivolities :) .

     

    How To : Use Powershell Scripts in Office 365 through the SharePoint CSOM

    When we first started to work with Office 365, I remember being quite concerned at the lack of PowerShell cmdlets – basically all the commands we’re used to using do not exist there. Here’s a gratuitous graph to illustrate the point:


    So yes, nearly 800 PowerShell commands in SP2013 (up from around 530 in SP2010) down to a measly 30 in SharePoint Online. And those 30 mainly cover basic operations with sites, users and permissions – no scripting of, say, Managed Metadata, user profiles, search and so on. It’s true to say that some of these things are now available down at site-collection scope (needed, of course, when you don’t have a true “Central Admin” site), but there are still “tenant-level” settings that you want to script rather than change manually through the UI.

    So what’s a poor developer/administrator to do?

    The answer is to write PowerShell as you always did, but embed CSOM code in there. More examples later, but here’s a small illustration:

    # get the site collection scoped Features collections (e.g. to activate one) – not showing how to obtain $clientContext here..
    $siteFeatures = $clientContext.Site.Features
    $clientContext.Load($siteFeatures)
    $clientContext.ExecuteQuery()

    So we’re using the .NET CSOM, but instead of C# we are using PowerShell’s ability to call any .NET object (indeed, nearly every script will use PowerShell’s New-Object command). All the things we came to love about PowerShell are back on the table:

    • Scripts can be easily amended, no need to recompile (or open Visual Studio)
    • We can debug with PowerGui or PowerShell ISE
    • We can leverage other things PowerShell is good at e.g. easily reading from XML files, using other PowerShell modules and other APIs (including .NET) etc.

    Of course, we can only perform operations where the method exists in the .NET CSOM – that’s the boundary of what we can do.

    Getting started

    Step 1 – understand the landscape

    The first thing to understand is that there are actually 3 different approaches for scripting against Office 365/SharePoint Online, depending on what you need to do. It might just be me, but I think that when you start it’s easy to get confused between them, or not fully appreciate that they all exist. The 3 approaches I’m thinking of are:

    • SharePoint Online cmdlets
    • MSOL cmdlets
    • PowerShell + CSOM

    This post focuses on the last flavor. I also wrote a short companion post about the overall landscape and with some details/examples on the other flavors, at Using SharePoint Online and MSOL cmdlets in PowerShell with Office 365

    Step 2 – prepare the machine from which you will run scripts against SharePoint Online

    Option 1 – if you will NOT run scripts from a SP2013 box (e.g. a SP2013 VM):

    You need to obtain the SharePoint DLLs which comprise the .NET CSOM, and copy them to a folder on your machine – your scripts will reference these DLLs.

    1. Go to any SharePoint 2013 server, and copy any DLL which starts with Microsoft.SharePoint.Client*.dll from the C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI folder.
    2. Store them in a folder on your machine e.g. C:\Lib – make a note of this location.

    CSOM DLLs

    Option 2 – if you WILL run scripts from a SP2013 box (e.g. a SP2013 VM):

    In this case, there is no need to copy the DLLs – your scripts will reference them in the original SharePoint install location (C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI).

    The top of your script – referencing DLLs and authentication

    Each .ps1 file which calls the SharePoint CSOM needs to deal with two things before you can use the API – loading the CSOM types, and authenticating/obtaining a ClientContext object. So, you’ll need this at the top of your script:

    # replace these details (also consider using Get-Credential to enter the password securely as the script runs)..
    $username = "SomeUser@SomeOrg.onmicrosoft.com"
    $password = "SomePassword"
    $securePassword = ConvertTo-SecureString $password -AsPlainText -Force
    # the path here may need to change if you used e.g. C:\Lib..
    Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"
    Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"
    # note that you might need some other references (depending on what your script does), for example:
    Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Taxonomy.dll"
    # connect/authenticate to SharePoint Online and get a ClientContext object..
    $clientContext = New-Object Microsoft.SharePoint.Client.ClientContext($url)
    $credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($username, $securePassword)
    $clientContext.Credentials = $credentials
    if (!$clientContext.ServerObjectIsNull.Value)
    {
        Write-Host "Connected to SharePoint Online site: '$url'" -ForegroundColor Green
    }

    In the scripts which follow, we’ll include this “top of script” stuff by dot-sourcing TopOfScript.ps1 in every script below – you could follow a similar approach (perhaps with a different name!) or simply paste that stuff into every script you create. If you enter a valid set of credentials and URL, running the script above should see you ready to rumble:

    PS CSOM got context

    Script examples

    Activating a Feature in SPO

    Something you might want to do at some point is enable or disable a Feature using script. The script below, like the others that follow it, all reference my TopOfScript.ps1 script above:

    . .\TopOfScript.ps1
    [bool]$enable = $true
    [bool]$force = $false
    # using the Minimal Download Strategy Feature here..
    $featureId = [GUID]("87294C72-F260-42f3-A41B-981A2FFCE37A")
    # ..and working with the web-scoped Features – use $clientContext.Site.Features for site-scoped Features
    $webFeatures = $clientContext.Web.Features
    $clientContext.Load($webFeatures)
    $clientContext.ExecuteQuery()
    if ($enable)
    {
        $webFeatures.Add($featureId, $force, [Microsoft.SharePoint.Client.FeatureDefinitionScope]::None)
    }
    else
    {
        $webFeatures.Remove($featureId, $force)
    }
    try
    {
        $clientContext.ExecuteQuery()
        if ($enable)
        {
            Write-Host "Feature '$featureId' successfully activated.."
        }
        else
        {
            Write-Host "Feature '$featureId' successfully deactivated.."
        }
    }
    catch
    {
        Write-Error "An error occurred whilst activating/deactivating the Feature. Error detail: $($_)"
    }

    PS CSOM activate feature

    Enable side-loading (for app deployment)

    Along very similar lines (because it also involves activating a Feature), is the idea of enabling “side-loading” on a site. By default, if you’re developing a SharePoint app it can only be F5 deployed from Visual Studio to a site created from the Developer Site template, but by enabling “side-loading” you can do it on (say) a team site too. Since the Feature isn’t visible (in the UI), you’ll need a script like this:

    . .\TopOfScript.ps1
    [bool]$enable = $true
    [bool]$force = $false
    # this is the side-loading Feature ID..
    $featureId = [GUID]("AE3A1339-61F5-4f8f-81A7-ABD2DA956A7D")
    # ..and this one is site-scoped, so using $clientContext.Site.Features..
    $siteFeatures = $clientContext.Site.Features
    $clientContext.Load($siteFeatures)
    $clientContext.ExecuteQuery()
    if ($enable)
    {
        $siteFeatures.Add($featureId, $force, [Microsoft.SharePoint.Client.FeatureDefinitionScope]::None)
    }
    else
    {
        $siteFeatures.Remove($featureId, $force)
    }
    try
    {
        $clientContext.ExecuteQuery()
        if ($enable)
        {
            Write-Host "Feature '$featureId' successfully activated.."
        }
        else
        {
            Write-Host "Feature '$featureId' successfully deactivated.."
        }
    }
    catch
    {
        Write-Error "An error occurred whilst activating/deactivating the Feature. Error detail: $($_)"
    }

    PS CSOM enable side loading

    Iterating webs

    Sometimes you might want to loop through all the webs in a site collection, or underneath a particular web:

    . .\TopOfScript.ps1
    $rootWeb = $clientContext.Web
    $childWebs = $rootWeb.Webs
    $clientContext.Load($rootWeb)
    $clientContext.Load($childWebs)
    $clientContext.ExecuteQuery()
    function processWeb($web)
    {
        $lists = $web.Lists
        $clientContext.Load($web)
        $clientContext.ExecuteQuery()
        Write-Host "Web URL is" $web.Url
    }
    foreach ($childWeb in $childWebs)
    {
        processWeb($childWeb)
    }

    PS CSOM iterate webs

    (Worth noting that you also see SharePoint-hosted app webs in the image above, since these are just subwebs, albeit ones which get accessed on the app domain URL rather than the actual host site’s web application URL.)

    Iterating webs, then lists, and updating a property on each list

    Or how about extending the sample above to not only iterate webs, but also the lists in each – the property I’m updating on each list is the EnableVersioning property, but you could easily use any other property or method in the same way:

    . .\TopOfScript.ps1
    $enableVersioning = $true
    $rootWeb = $clientContext.Web
    $childWebs = $rootWeb.Webs
    $clientContext.Load($rootWeb)
    $clientContext.Load($childWebs)
    $clientContext.ExecuteQuery()
    function processWeb($web)
    {
        $lists = $web.Lists
        $clientContext.Load($web)
        $clientContext.Load($lists)
        $clientContext.ExecuteQuery()
        Write-Host "Processing web with URL " $web.Url
        foreach ($list in $web.Lists)
        {
            Write-Host "-- " $list.Title
            # leave the "Master Page Gallery" and "Site Pages" lists alone, since these have versioning enabled by default..
            if ($list.Title -ne "Master Page Gallery" -and $list.Title -ne "Site Pages")
            {
                Write-Host "---- Versioning enabled: " $list.EnableVersioning
                $list.EnableVersioning = $enableVersioning
                $list.Update()
                $clientContext.Load($list)
                $clientContext.ExecuteQuery()
                Write-Host "---- Versioning now enabled: " $list.EnableVersioning
            }
        }
    }
    foreach ($childWeb in $childWebs)
    {
        processWeb($childWeb)
    }

    PS CSOM iterate lists enable versioning

    Import search schema XML

    In SharePoint 2013 and Office 365, many aspects of search configuration (such as Managed Properties and Crawled Properties, Query Rules, Result Sources and Result Types) can be exported and imported between environments as an XML file. The sample below shows the import operation handled with PS + CSOM:

    . .\TopOfScript.ps1
    # need some extra types bringing in for this script..
    Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Search.dll"
    # TODO: replace this path with yours..
    $pathToSearchSchemaXmlFile = "C:\COB\Cloud\PS_CSOM\XML\COB_TenantSearchConfiguration.xml"
    # we can work with search config at the tenancy or site collection level:
    #$configScope = "SPSiteSubscription"
    $configScope = "SPSite"
    $searchConfigurationPortability = New-Object Microsoft.SharePoint.Client.Search.Portability.SearchConfigurationPortability($clientContext)
    $owner = New-Object Microsoft.SharePoint.Client.Search.Administration.SearchObjectOwner($clientContext, $configScope)
    [xml]$searchConfigXml = Get-Content $pathToSearchSchemaXmlFile
    $searchConfigurationPortability.ImportSearchConfiguration($owner, $searchConfigXml.OuterXml)
    $clientContext.ExecuteQuery()
    Write-Host "Search configuration imported" -ForegroundColor Green

    PS CSOM import search schema

    Summary

    As you can hopefully see, there’s lots you can accomplish with the PowerShell and CSOM combination. Anything that can be done with CSOM API can be wrapped into a script, and you can build up a library of useful PowerShell snippets just like the old days. There are some interesting things that you CANNOT do with CSOM (such as automating the process of uploading/deploying a sandboxed WSP to Office 365), but there ARE approaches for solving even these problems, and I’ll most likely cover this (and our experiences) in future posts.

    A final idea on the PowerShell + CSOM front is the idea that you can have “hybrid” scripts which can deal with both SharePoint Online and on-premises SharePoint. For example, on my current project everything we build must be deployable to both SPO and on-premises, and our scripts take a “DeploymentTarget” parameter where the values can be “Online” or “OnPremises”. There are some differences (i.e. branching) in the scripts, but for many operations the same commands can be run.
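
    A skeleton of that pattern might look like the following; the parameter name matches what we use, but the branch contents are simplified for illustration and not our actual deployment script:

    param (
        [ValidateSet("Online", "OnPremises")]
        [string]$DeploymentTarget = "Online",
        [string]$Url = "https://contoso.sharepoint.com"
    )

    Add-Type -Path "C:\Lib\Microsoft.SharePoint.Client.dll"
    Add-Type -Path "C:\Lib\Microsoft.SharePoint.Client.Runtime.dll"

    $clientContext = New-Object Microsoft.SharePoint.Client.ClientContext($Url)

    if ($DeploymentTarget -eq "Online") {
        # SharePoint Online needs SharePointOnlineCredentials
        $securePassword = Read-Host "Password" -AsSecureString
        $clientContext.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials("someuser@contoso.onmicrosoft.com", $securePassword)
    }
    else {
        # on-premises, the current Windows identity is used
        $clientContext.Credentials = [System.Net.CredentialCache]::DefaultNetworkCredentials
    }

    # from here on the same CSOM calls work against either target
    $web = $clientContext.Web
    $clientContext.Load($web)
    $clientContext.ExecuteQuery()
    Write-Host "Connected to web:" $web.Title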

    Client-side PowerShell for SharePoint Online and Office 365

    SharePoint PowerShell is a PowerShell API for SharePoint 2010, 2013 and Online. Very useful for Office 365 and private clouds where you don’t have access to the physical server.


    The API uses the Managed .NET Client-Side Object Model (CSOM) of SharePoint 2013. It’s a library of PowerShell scripts, and at its core it talks to the CSOM DLLs.

    Examples :

    Import-Module .\spps.psm1 
    
    Initialize-SPPS -siteURL "https://example.sharepoint.com/" -online $true -username "sitecollectionadmin@example.onmicrosoft.com" -password "password"
    Example
    # Include SPPS
    Import-Module .\spps.psm1 
    
    # Setup SPPS
    Initialize-SPPS -siteURL "https://example.sharepoint.com/" -online $true -username "sitecollectionadmin@example.onmicrosoft.com" -password "password"
    
    # Activate Publishing Site Feature
    Activate-Feature -featureId "f6924d36-2fa8-4f0b-b16d-06b7250180fa" -force $false -featureDefinitionScope "Site"
    
    #Activate Publishing Web Feature
    Activate-Feature -featureId "94c94ca6-b32f-4da9-a9e3-1f3d343d7ecb" -force $false -featureDefinitionScope "Web"

    Features

    • Site Collection
      • Test connection
    • Site
      • Manage subsites
      • Manage permissions
    • Lists and Document Libraries
      • Create list and document library
      • Manage fields
      • Manage list and list item permissions
      • Upload files to document library (including folders)
      • Add items to a list with CSV
      • Add and remove list items
      • File check-in and check-out
    • Master pages
      • Set system master page
      • Set custom master page
    • Features
      • Activate features
    • Web Parts
      • Add Web Parts to page
    • Users and Groups
      • Set site permissions
      • Set list permissions
      • Set document permissions
      • Create SharePoint groups
      • Add users and groups to SharePoint groups
    • Solutions
      • Upload sandboxed solutions
      • Activate sandboxed solutions

      Contact me at tomas.floyd@outlook.com for this and more Azure,SharePoint & Office 365 Tools, Web Parts and Apps

    Create a new Search Service Application in SharePoint 2013 using PowerShell

    The search architecture in SharePoint 2013 has changed quite a bit when compared to SharePoint 2010. In fact the Search Service in SharePoint 2013 is completely overhauled. It is a combination of FAST Search and SharePoint Search components.


    As you can see the query and crawl topologies are merged into a single topology, simply called “Search topology”. Provisioning of the search service application creates 4 databases:

    • SP2013_Enterprise_Search – This is a search administration database. It contains configuration and topology information
    • SP2013_Enterprise_Search_AnalyticsReportingStore – This database stores the result of usage analysis
    • SP2013_Enterprise_Search_CrawlStore – The crawl database contains detailed tracking and historical information about crawled items
    • SP2013_Enterprise_Search_LinksStore – Stores the information extracted by the content processing component and also stores click-through information

    # Create a new Search Service Application in SharePoint 2013

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    Settings    $IndexLocation = “C:\Data\Search15Index” #Location must be empty, will be deleted during the process!     $SearchAppPoolName = “Search App Pool”     $SearchAppPoolAccountName = “Contoso\administrator”     $SearchServerName = (Get-ChildItem env:computername).value     $SearchServiceName = “Search15”     $SearchServiceProxyName = “Search15 Proxy”     $DatabaseName = “Search15_ADminDB”     Write-Host -ForegroundColor Yellow “Checking if Search Application Pool exists”     $SPAppPool = Get-SPServiceApplicationPool -Identity $SearchAppPoolName -ErrorAction SilentlyContinue

    if (!$SPAppPool)    {         Write-Host -ForegroundColor Green “Creating Search Application Pool”         $spAppPool = New-SPServiceApplicationPool -Name $SearchAppPoolName -Account $SearchAppPoolAccountName -Verbose     }

    Start Services search service instance    Write-host “Start Search Service instances….”     Start-SPEnterpriseSearchServiceInstance $SearchServerName -ErrorAction SilentlyContinue     Start-SPEnterpriseSearchQueryAndSiteSettingsServiceInstance $SearchServerName -ErrorAction SilentlyContinue

    Write-Host -ForegroundColor Yellow “Checking if Search Service Application exists”    $ServiceApplication = Get-SPEnterpriseSearchServiceApplication -Identity $SearchServiceName -ErrorAction SilentlyContinue

    if (!$ServiceApplication)    {         Write-Host -ForegroundColor Green “Creating Search Service Application”         $ServiceApplication = New-SPEnterpriseSearchServiceApplication -Partitioned -Name $SearchServiceName -ApplicationPool $spAppPool.Name  -DatabaseName $DatabaseName     }

    Write-Host -ForegroundColor Yellow “Checking if Search Service Application Proxy exists”    $Proxy = Get-SPEnterpriseSearchServiceApplicationProxy -Identity $SearchServiceProxyName -ErrorAction SilentlyContinue

    if (!$Proxy)    {         Write-Host -ForegroundColor Green “Creating Search Service Application Proxy”         New-SPEnterpriseSearchServiceApplicationProxy -Partitioned -Name $SearchServiceProxyName -SearchApplication $ServiceApplication     }

    $ServiceApplication.ActiveTopology     Write-Host $ServiceApplication.ActiveTopology

    Clone the default Topology (which is empty) and create a new one and then activate it    Write-Host “Configuring Search Component Topology….”     $clone = $ServiceApplication.ActiveTopology.Clone()     $SSI = Get-SPEnterpriseSearchServiceInstance -local     New-SPEnterpriseSearchAdminComponent –SearchTopology $clone -SearchServiceInstance $SSI     New-SPEnterpriseSearchContentProcessingComponent –SearchTopology $clone -SearchServiceInstance $SSI     New-SPEnterpriseSearchAnalyticsProcessingComponent –SearchTopology $clone -SearchServiceInstance $SSI     New-SPEnterpriseSearchCrawlComponent –SearchTopology $clone -SearchServiceInstance $SSI

    Remove-Item -Recurse -Force -LiteralPath $IndexLocation -ErrorAction SilentlyContinue
    mkdir -Path $IndexLocation -Force

    New-SPEnterpriseSearchIndexComponent -SearchTopology $clone -SearchServiceInstance $SSI -RootDirectory $IndexLocation
    New-SPEnterpriseSearchQueryProcessingComponent -SearchTopology $clone -SearchServiceInstance $SSI
    $clone.Activate()

    Write-Host "Your search service application $SearchServiceName is now ready"
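
    To check that the components of the newly activated topology come online, you can query their status; this is a small sketch reusing the $ServiceApplication variable from the script above:

    # Component health for the new topology; all components should eventually report Active
    Get-SPEnterpriseSearchStatus -SearchApplication $ServiceApplication -Text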

    Update

    To configure failover server(s) for Search DBs, use the following PowerShell:

    Thanks to Marcel Jeanneau for sharing this!

    # Admin database
    $ssa = Get-SPEnterpriseSearchServiceApplication "Search Service Application"
    Set-SPEnterpriseSearchServiceApplication -Identity $ssa -FailoverDatabaseServer <failoverserveralias\instance>

    # Crawl database
    $CrawlDatabase0 = ([array]($ssa | Get-SPEnterpriseSearchCrawlDatabase))[0]
    Set-SPEnterpriseSearchCrawlDatabase -Identity $CrawlDatabase0 -SearchApplication $ssa -FailoverDatabaseServer <failoverserveralias\instance>

    # Links database
    $LinksDatabase0 = ([array]($ssa | Get-SPEnterpriseSearchLinksDatabase))[0]
    Set-SPEnterpriseSearchLinksDatabase -Identity $LinksDatabase0 -SearchApplication $ssa -FailoverDatabaseServer <failoverserveralias\instance>

    # Analytics reporting database
    $AnalyticsDB = Get-SPDatabase -Identity <analytics reporting database name or GUID>
    $AnalyticsDB.AddFailoverServiceInstance("<failoverserveralias\instance>")
    $AnalyticsDB.Update()

    You can always change the default content access account using the following command:

    $password = Read-Host -AsSecureString
    Set-SPEnterpriseSearchServiceApplication -Identity "SSA name" -DefaultContentAccessAccountName Contoso\account -DefaultContentAccessAccountPassword $password

    Look out for my PowerShell Web Part and Google Analytics Web Part and App, which are under development and will be available soon for purchase!


    Troubleshooting WCF Services during Runtime with WMI

    One of the coolest features of WCF when it comes to troubleshooting is the WCF message logging and tracing feature. With message logging you can write all the messages your WCF service receives and returns to a log file. With tracing, you can log any trace message emitted by the WCF infrastructure, as well as traces emitted from your service code.

    The issue with message logs and tracing is that you usually turn them off in production, or at the very least reduce the amount of data they output, mainly to conserve disk space and reduce the latency involved in writing the logs to disk. Usually this isn’t a problem, until you find yourself needing to turn them back on, for example when you detect an issue with your service and need the log files to track down the origin of the problem.

    Unfortunately, changing the configuration of your service requires restarting it, which might result in loss of data, the service becoming unavailable for a couple of seconds, and possibly the problem resolving on its own if the strange behavior was caused by a faulty state of the service.

    There is, however, a way to change the logging configuration of the service at runtime, without needing to restart it, with the help of the Windows Management Instrumentation (WMI) environment.

    In short, WMI provides you with a way to view information about running services in your network. You can view a service’s process information, service information, and endpoint configuration, and even change some of the service’s configuration at runtime, without needing to restart the service.
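
    For example, once the WMI provider is enabled (step 3 in the walkthrough below), a quick PowerShell sketch to enumerate the WCF services currently exposing WMI information on a machine might look like this:

    # List all WCF services registered with the WMI provider on this machine
    Get-WmiObject Service -Namespace "root\servicemodel" -ComputerName localhost | Select-Object Name, ProcessId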

    Little has been written about the WMI support in WCF. The basics are documented on MSDN, with instructions on what you need to set in your configuration to make the WMI provider available. The MSDN article also provides a link to download the WMI Administrative Tools, which you can use to manage services with WMI. However, that tool requires some work on your end before getting to the configuration you need to change, in addition to requiring you to run IE as an administrator with backwards compatibility set to IE 9, which makes the entire process a bit tedious. Instead, I found it easier to use PowerShell to write six lines of script which do the job.

    The following steps demonstrate how to create a WCF service with minimal message logging and tracing configuration, start it, test it, and then use PowerShell with WMI to change the logging configuration in runtime.

    1. Open Visual Studio 2012 and create a new project using the WCF Service Application template.

    After the project is created, the service code is shown. Notice that in the GetDataUsingDataContract method, an exception is thrown when the composite parameter is null.

    2. In Solution Explorer, right-click the Web.config file and then click Edit WCF Configuration.

    3. In the Service Configuration Editor window, click Diagnostics, and enable the WMI Provider, MessageLogging, and Tracing.

    By default, enabling message logging will log all the messages at the transport layer and any malformed messages. Enabling tracing will log all activities and any trace message with severity Warning and up (Warning, Error, and Critical). Although those settings are useful during development, in production we probably want to change them so we get smaller log files with only the most relevant information.

    4. Under MessageLogging, click the link next to Log Level, uncheck Transport Messages, and then click OK.

    The above setting will only log malformed messages, which are messages that do not fit any of the service’s endpoints, and are therefore rejected by the service.

    5. Under Tracing, click the link next to Trace Level, uncheck Activity Tracing, and then click OK.

    The above setting will prevent every operation from being logged, except those that output a trace message of Warning severity and up. You can read more about the different types of trace messages on MSDN. http://msdn.microsoft.com/en-us/library/ms733025(v=vs.110).aspx

    By default, message logging only logs the headers of a message. To also log the body of a message, we need to change the message logging configuration. Unfortunately, we cannot change that setting in runtime with WMI, so we will set it now.

    6. In the configuration tree, expand Diagnostics, click Message Logging, and set the LogEntireMessage property to True.

    7. Press Ctrl+S to save the changes, close the configuration editor window, and return to Visual Studio.

    The trace listener we are using buffers the output and will only write to the log files when the buffer is full. Since this is a demonstration, we would like to see the output immediately, and therefore we need to change this behavior.

    8. In Solution Explorer, open the Web.config file, locate the <system.diagnostics> section, and place the following XML before the </system.diagnostics> closing tag: <trace autoflush="true"/>

    Now let us run the service, test it, and check the created log files.

    9. In Solution Explorer, click Service1.svc, and then press Ctrl+F5 to start the WCF Test Client without debugging.

    10. In the WCF Test Client window, double-click the GetDataUsingDataContract node, and then click Invoke. Repeat this step 2-3 times.

    Note: If a Security Warning dialog appears, click OK.

    11. In the Request area, open the drop-down next to the composite parameter, and set it to (null).

    12. Click Invoke and wait for the exception to show. Notice that the exception is general (“The server was unable to process the request due to an internal error.”) and does not provide any meaningful information about the true exception. Click Close to close the dialog.

    Let us now check the content of the two log files. We should be able to see the traced exception, but the message wouldn’t have been logged.

    13. Keep the WCF Test Client tool running and return to Visual Studio. Right-click the project and then click Open Folder in File Explorer.

    14. In the File Explorer window, double-click the web_tracelog.svclog file. The file will open in the Service Trace Viewer tool.

    15. In the Service Trace Viewer tool, click the 000000000000 activity in red, and then click the row starting with “Handling an exception”. In the Formatted tab, scroll down to view the exception information.

    As you can see, the trace file contains the entire exception information, including the message and the stack trace.

    Note: The configuration evaluation warning message which appears first on the list means that the service we are hosting does not have any specific configuration, and therefore is using a set of default endpoints. The two exceptions that follow are ones thrown by WCF after receiving two requests that did not match any endpoint. Those requests originated from the WCF Test Client tool when it attempted to locate the service’s metadata.

    Next, we want to verify no message was logged for the above argument exception.

    16. Return to the File Explorer window, select the web_messages.svclog file, and drag it to the Service Trace Viewer tool. Drop the file anywhere in the tool.

    There are now two new rows for the malformed messages sent by the WCF Test Client metadata fetching. There is no logged message for the faulted service operation.

    Imagine this is the state you now have in your production environment. You have a trace file that shows the service is experiencing problems, but you only see the exception. To properly debug such issues we need more information about the request itself, and any other information which might have been traced while processing the request.

    To get all that information, we need to turn on activity tracing and include messages from the transport level in our logs.

    If we open the Web.config file and change it manually, this would cause the Web application to restart, as discussed before. So instead, we will use WMI to change the configuration settings in runtime.

    17. Keep the Service Trace Viewer tool open, and open a PowerShell window as an Administrator.

    18. To get the WMI object for the service, type the following commands in the PowerShell window and press Enter:

    $wmiService = Get-WmiObject Service -filter "Name='Service1'" -Namespace "root\servicemodel" -ComputerName localhost
    $processId = $wmiService.ProcessId
    $wmiAppDomain = Get-WmiObject AppDomainInfo -filter "ProcessId=$processId" -Namespace "root\servicemodel" -ComputerName localhost

    Note: The above script assumes the name of the service is 'Service1'. If you have changed the name of the service class, change the script and run it again. If you want to change the configuration of a remote service, replace the localhost value in the ComputerName parameter with your server name.

    19. To turn on transport layer message logging, type the following command and press Enter: $wmiAppDomain.LogMessagesAtTransportLevel = $true

    20. To turn on activity tracing, type the following command and press Enter: $wmiAppDomain.TraceLevel = "Warning, ActivityTracing"

    21. Lastly, to save the changes you made to the service configuration, type the following command and press Enter: $wmiAppDomain.Put()

    22. Switch back to the WCF Test Client. In the Request area, open the drop-down next to the composite parameter, and set it to a new CompositeType object. Click Invoke 2-3 times to generate several successful calls to the service.

    23. In the Request area, open the drop-down next to the composite parameter, and set it to (null).

    24. Click Invoke and wait for the exception to show. Click Close to close the dialog.

    25. Switch back to the Service Trace Viewer tool and press F5 to refresh the activities list.

    You will notice that now there is a separate set of logs for each request handled by the service. You can read more on how to use the Service Trace Viewer tool to view traces and troubleshoot WCF services on MSDN. http://msdn.microsoft.com/en-us/library/aa751795(v=vs.110).aspx

    26. From the activity list, select the last row in red that starts with “Process action”.

    You will notice that now you can see the request message, the exception thrown in the service operation, and the response message, all in the same place. In addition, the set of traces is shown for each activity separately, making it easy to identify a specific request and its related traces.

    27. On the right pane, select the first “Message Log Trace” row, click the Message tab, and observe the body of the message.

    Now that we have the logged messages, we can select the request message and try to figure out the cause of the exception. As you can see, the composite parameter is empty (nil).

    If this was a production environment, you would probably want to restore the message logging and tracing to their original settings at this point. To do this, return to the PowerShell window and re-run the commands from before with their previous values:

    $wmiAppDomain.LogMessagesAtTransportLevel = $false

    $wmiAppDomain.TraceLevel = "Warning"

    $wmiAppDomain.Put()

    Before we conclude, now that your service is manageable through WMI, you can use other commands to get information about the service and its components. For example, the following command will return the service endpoints’ information:

    Get-WmiObject Endpoint -filter "ProcessId=$processId" -Namespace "root\servicemodel" -ComputerName localhost
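
    To make the output easier to read, you can select just a few properties; the property names below (ContractName, ListenUri) are assumptions about the WCF WMI Endpoint class, so check Get-Member on your system first:

    # Compact view of each endpoint exposed by the service
    Get-WmiObject Endpoint -filter "ProcessId=$processId" -Namespace "root\servicemodel" -ComputerName localhost |
        Select-Object Name, ContractName, ListenUri | Format-Table -AutoSize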

    Looking for the SharePoint Developer Dashboard? Look no more!

    This tool is useful for measuring the behaviour and performance of individual pages.

    It can be used to monitor page load times and overall page performance.

    It has three states:

    • On,
    • Off,
    • On demand.

    Activating the Developer Dashboard

    Developer Dashboard is a utility that is available in all SharePoint 2010 versions, and can be enabled in a few different ways:

    • PowerShell
    • STSADM.exe
    • SharePoint Object Model (API’s)

    Activate the Developer Dashboard using STSADM.EXE

    The Developer Dashboard state can be toggled with an STSADM command as follows:

    • ON:

    STSADM -o setproperty -pn developer-dashboard -pv on

    • OFF:

    STSADM -o setproperty -pn developer-dashboard -pv off

    • ON DEMAND:

    STSADM -o setproperty -pn developer-dashboard -pv ondemand

    Activate the Developer Dashboard using PowerShell:

    $DevDashboardSettings = [Microsoft.SharePoint.Administration.SPWebService]::ContentService.DeveloperDashboardSettings
    $DevDashboardSettings.DisplayLevel = 'OnDemand'
    $DevDashboardSettings.RequiredPermissions = 'EmptyMask'
    $DevDashboardSettings.TraceEnabled = $true
    $DevDashboardSettings.Update()
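
    To switch the dashboard off again later, the same settings object can be reused; a small sketch mirroring the script above:

    $DevDashboardSettings = [Microsoft.SharePoint.Administration.SPWebService]::ContentService.DeveloperDashboardSettings
    $DevDashboardSettings.DisplayLevel = 'Off'
    $DevDashboardSettings.Update()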

    Activate the Developer Dashboard using the SharePoint Object Model

    using Microsoft.SharePoint.Administration;

    SPWebService svc = SPContext.Current.Site.WebApplication.WebService;
    svc.DeveloperDashboardSettings.DisplayLevel = SPDeveloperDashboardLevel.OnDemand;
    svc.DeveloperDashboardSettings.Update();

    Where is it?

    Picture2

    You can see the Developer Dashboard button in the top right.

    Developer Dashboard 3 Border Colors

    • A Green border generally means the page is loading quickly enough not to be a real problem.
    • A Yellow border indicates that there is a slight delay worth keeping an eye on.
    • A Red border means you definitely need to look into the page immediately!

    Picture3

    You see this Developer Dashboard has a yellow border color.

    AutoSPInstallerGUI – Great new tool with GUI for automated SharePoint Installation

    http://autospinstallergui.codeplex.com/

    Project Description
    Automated SharePoint 2010/2013 PowerShell-based installation script – Now with user-friendly GUI!!


    Introducing AutoSPInstaller v3 with numerous enhancements including:

    • Granular SQL server assignment and aliasing for (almost) every service/web app (for control freaks)
    • Centralized, remote installation to all farm servers
    • Ability to specify any XML input file, by passing it as an argument to AutoSPInstallerLaunch.bat
    • Several tweaks & fixes

    AutoSPInstaller now works reliably with SharePoint 2013, and of course SharePoint 2010 including Service Pack 1 (and SP2) for SP2010! It takes advantage of some of the cmdlet updates in SharePoint 2010 SP1, while remaining backward-compatible with non-SP1 deployments.

    Newer versions often include updates to the input file XML schema, so make sure you compare any of your existing XML files to the newest AutoSPInstallerInput.XML. See below for highlights of changes in v3.0.x.

    This project consists of PowerShell scripts, an XML input file, and a standard windows batch file (to kick off the process) which together provide a quick and near-unattended installation and initial configuration (Service Apps, My Sites) of Microsoft SharePoint Server 2010/2013. Works on Windows 2008 (though I hardly test on that OS these days), 2008 R2 and Windows 2012 (x64 only of course).

    Perfect for repeated Virtual Machine-based installs/tear-downs, etc., but also great for production installs where you want to guarantee consistency and minimize data entry glitches. The immediate value is for installing and configuring the first/only server in a farm, but also supports using either server-specific input files or a single, all-encompassing input file for running the script on all farm servers (with parameters – e.g. for the service apps – set according to your desired topology).

    “But doesn’t SharePoint 2010 have a nice wizard now that does all this for me??” – Yes, and it’s a huge improvement over what was available in MOSS 2007. However if you’ve ever seen the ‘DBA nightmare’ left behind on your SQL server after the Farm Configuration Wizard has completed (GUID’ed databases with inconsistent naming, etc.):

    …then you’ll see the value in having consistently-named but automatically-created databases:

    DatabaseList-Clean

    The scripts (Franken-scripts, really…) leverage previously-available resources (as PowerShell has now taken its place as the automation platform for SharePoint) such as: Zach Rosenfield’s blog, Jos Verlinde’s script for creating a Farm, Gary Lapointe’s Enterprise Search script functions and other miscellaneous tidbits in the wild.

    The scripted process will:

    • Re-launch itself in an elevated process to deal with User Access Control
    • Check whether the target server is running Windows 2008 or 2008 R2
    • Prompt you to enter all (or most; still a work in progress) service accounts, passwords and the farm passphrase, unless you opt to just specify them in the AutoSPInstallerInput.xml
    • Validate connectivity and permissions to your SQL instance/alias
    • Validate the farm passphrase (for complexity), as well as the service account/password combinations specified in the input XML file
    • Automatically download and install platform-specific pre-requisites (e.g. IIS, .Net Framework) using the SP2010 Prerequisiteinstaller.exe. You can also pre-download all the prerequisites/hotfixes using this script, then specify <OfflineInstall>true</OfflineInstall> in your AutoSPInstallerInput.xml instead of having Prerequisiteinstaller try to download fixes at script runtime.
    • Optionally disable some unnecessary Windows services, CRL checking and the dreaded IE Enhanced Security Configuration
    • Install the SP2010 binaries using an (optionally, server-specific) config.xml for input
    • Optionally install the Office Web Applications (OWA) binaries using config-OWA.xml for input
    • Create the Farm (Config & Central Admin content databases, Central Admin site, help collections, etc.)
    • Optionally configure and start many SharePoint services and service applications; currently the script can provision:
      • User Profile Service Application
      • User Profile Synchronization Service
      • Metadata Service Application
      • SharePoint Foundation User Code Service
      • State Service Application
      • Usage and Health Service Application
      • PowerPivot Service Application (removed due to complexity/misunderstandings around order of installation etc.)
      • Secure Store Service
      • Enterprise Search Service Application
      • Web Analytics Service Application
      • Outgoing Email
      • Business Data Connectivity Service Application
      • Excel Service Application
      • Access Service Application
      • PerformancePoint Service Application
      • Visio Graphics Service Application
      • Word Automation (Conversion) Service Application
      • The Office Web Applications service apps:
        • PowerPoint Service Application
        • Word Viewing Service Application
        • Excel Service Application (if not already provisioned by virtue of having an Enterprise license)
    • Create the main Portal web app and site collection (will try to provision and/or assign a certificate, too – all you need is an https://-based URL in the input XML)
    • Create/configure your My Sites web app and site collection (will also try to provision and/or assign a certificate if you have an https://-based URL in the Input XML)
    • Configure paths and options for both IIS and SharePoint (ULS) logging according to your preferences
    • (NEW) Discover other target servers in your farm (based on values in AutoSPInstallerInput.xml) and remote into each of them to perform the installation process (if <RemoteInstall> is true)
    • Configure PDF indexing and icon display within SharePoint – effectively resolving what many consider to be a long-standing omission in the product in just a few seconds
    • Optionally install ForeFront Protection 2010 for SharePoint if the binaries are found in the correct path (see below)
    • Launch IE to display Central Administration, Portal and My Sites, and view the results of your hard work (just in time for your return from lunch)
    • Log all activity to a file on the current user’s desktop, and pop open the log file for review when finished.

    There are several input parameters to define in the input XML file (which illustrates how much stuff you really have to plan & gather during a regular SharePoint install). However this is a one-time-per-install effort, and the trade-off includes hours saved and better spent elsewhere (see lunch above) and an avoidance of the risks involved (typos, missed settings etc.) during manual installations.

    New in v3:

    • Centralized, remote install of every SharePoint server in your farm using PowerShell remoting
    • Support for parallel binary installations, whether remote install is enabled or not (useful for speeding up multi-server farm installs)
    • Ability to specify a different SQL server for each web application and service application, plus support for creating an alias for each (except Search, currently)
    • Screen output and log both now display the elapsed time to install SharePoint and Office Web App binaries
    • Specify an arbitrary XML input file by passing the XML file name as an argument, or just dragging it onto AutoSPInstallerLaunch.bat
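
    For example, from a PowerShell prompt (the input file name below is hypothetical, and the path assumes you extracted the package to C:\SP as in the folder layout shown later):

    # Launch AutoSPInstaller with a specific input file instead of the default AutoSPInstallerInput.xml
    & "C:\SP\AutoSPInstaller\AutoSPInstallerLaunch.bat" "C:\SP\AutoSPInstaller\AutoSPInstallerInput-ProdFarm.xml"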

    The full v3 change log can be found within the CodePlex source code changesets for the project. Also, see my post at NothingButSharePoint.com that provides an overview of the new features & fixes.

    New in v2.5:
    The ability to use a single AutoSPInstallerInput.XML file for your entire farm, and simply include the names of servers (comma-delimited) on which you want particular service instances or service applications installed. This works by using the Provision="" and Start="" attributes; for example, to provision the managed metadata service on your 2 app servers, you would specify:

    <ManagedMetadataServiceApp Provision="SPAPPSRV1, SPAPPSRV2"

    Further, the old way of specifying <ManagedMetadataServiceApp Provision="true"… still works, if you want to continue using a different XML input file for each server.

    See the release notes associated with the original 2.5 changeset here for a more complete list of changes.

    New in v2:

    • MAJOR code and XML schema refactoring effort by Andrew Woodward of 21apps to enable (among other things) easier editing and extending via a custom functions script file.
    • Both the launch batch file and the User Profile Service App creation (as farm account) self-elevate so no more need to right-click, Run as Administrator to successfully run AutoSPInstaller on a server with User Access Control (UAC) enabled!
    • Enterprise Search now properly sets both the service account and the crawl (content access) account
    • Much better multi-server farm support. Services can be tweaked to start on the servers you wish, and service applications won’t be erroneously re-created on subsequent servers, etc.
    • Portal super user and super reader accounts can now be configured per best practices
    • Overall the install experience and results are more in line with community best practices; as always, this is a community-inspired and driven effort!

    In addition to the scripts, you should create an installation source (local or shared) containing the entire extracted contents of the SP201x install package. The zip package will by default create most of this folder structure when you extract it. When you’re done, your folder structure should look something like this (Note: updated for v3):

    \SP\AutoSPInstaller\AutoSPInstallerLaunch.bat
    \SP\AutoSPInstaller\AutoSPInstallerInput.xml
    \SP\AutoSPInstaller\AutoSPInstallerMain.ps1
    \SP\AutoSPInstaller\AutoSPInstallerFunctions.ps1
    \SP\AutoSPInstaller\AutoSPInstallerFunctionsCustom.ps1
    \SP\AutoSPInstaller\AutoSPInstallerConfigureRemoteTarget.ps1
    \SP\AutoSPInstaller\config.xml
    \SP\201x\SharePoint\<installation files & folders>
    \SP\201x\SharePoint\PreRequisiteInstallerFiles\
    \SP\201x\SharePoint\Updates\ (extract Service Pack + Cumulative Updates here. NOTE not all updates support slipstreaming!)
    \SP\2013\ProjectServer\ (optional; copy/extract the contents of the Project Server 2013 DVD/ISO here)
    \SP\2013\ProjectServer\Updates (optional, for slipstreaming Service Packs and Public/Cumulative Updates. NOTE not all updates support slipstreaming!)

    \SP\201x\LanguagePacks\xx-xx\
    (optional)
    \SP\201x\LanguagePacks\xx-xx\Updates\ (optional; extract Language Pack Service Pack / Cumulative Updates here)
    OR
    \SP\201x\LanguagePacks\<ServerLanguagePack_XX-XX.exe> (optional)
    \SP\2010\OfficeWebApps\ (optional; only required/supported with SP2010)
    \SP\2010\PDF\ (optional; only required/supported with SP2010)
    \SP\2010\ForeFront\<ForeFront Protection 2010 for SharePoint install files> (optional, only required/supported with SP2010)

    Note that x in the paths above is a 0 or a 3 depending on whether you’re installing SP2010 or SP2013. Spaces in the path to AutoSPInstallerLaunch.bat will cause the script to blow up, so avoid them.

    Creating users in the Authentication system

    Introduction

    In this blog post, I will give you an overview of how to create users in Windows Azure Pack (WAP) and have them sign in. As you might already know, the Authentication and Authorization processes are separated into their own entities, making the stack flexible enough to plug in your own custom Authentication system (e.g. AD FS).

    In an Express installation, the authentication is performed at the Admin and Tenant Authentication Sites (where the users enter their credentials) and the authorization is performed at the Service Management API layer. Hence, information about a user needs to be added at both these locations for users to be able to both sign in and get access to their subscriptions.

    This blog will give you information on how to create a user in the Tenant Authentication Site and in the Service Management API layer.

    Note: If you have other Identity Providers plugged into your system, you should create users appropriately in that system apart from creating the user in the Service Management API layer. The section on creating users in the Tenant Authentication site will not apply to you.

    Creating users in the Tenant Authentication Site

    If you have a custom Identity Provider plugged into your WAP stack, you should follow the appropriate steps to add the user into that identity system. This section is applicable only if you use the out-of-the-box Tenant Authentication Site.

    The Tenant Authentication Site uses an out-of-the-box ASP.NET Membership Provider to provide identities. Therefore, you can use the standard ASP.NET Membership APIs to create users in the database. You can find more info on Membership Provider here:  http://msdn.microsoft.com/en-us/library/system.web.security.membershipprovider.aspx

    The information required by the ASP.NET Membership API is specified in the App.Config. This includes specifying the Connection String to the Membership Database and some information that describes the configuration of the ASP.Net Membership Provider. Replace the Connection String in the code below to point to your database and use the appropriate authentication method.

    <!-- Connection string to the membership database; replace with your own server and database -->
    <connectionStrings>
      <add name="WapMembershipDatabase" connectionString="Data Source=<your SQL server>;Initial Catalog=<your membership database>;Integrated Security=True" />
    </connectionStrings>

    <!-- ASP.NET Membership Provider configuration; adjust the provider name and type to match your environment -->
    <system.web>
      <membership defaultProvider="WapMembershipProvider" hashAlgorithmType="SHA256">
        <providers>
          <clear />
          <add name="WapMembershipProvider"
               type="System.Web.Security.SqlMembershipProvider"
               applicationName="/"
               passwordCompatMode="Framework40"
               connectionStringName="WapMembershipDatabase"
               passwordFormat="Hashed" />
        </providers>
      </membership>
    </system.web>

    Note: If you have been using the Preview version of the Windows Azure Pack, you have to update your user creation logic to use SHA-256 hashing for your passwords (specified by the ‘hashAlgorithmType’ value in the App.Config).

    Once this is done you have to call the CreateUser() method to create the user in the Membership Database. Note that I am specifying the email address as the username as expected by the ASP.Net Membership Provider.

    Membership.CreateUser(emailAddress, password, emailAddress);

    Creating users in the Service Management API

    This is the second step that enables authorization of the user. Windows Azure Pack provides you with PowerShell cmdlets that facilitate user creation in the API layer. Apart from that, you can also use the Admin API Client interfaces that are available as part of the sample code found at http://www.microsoft.com/en-us/download/details.aspx?id=41146

    Both methods involve getting an identity token for the Administrator and posting a create-user call to the Service Management API layer.

    PowerShell

    You can use the Get-MgmtSvcToken cmdlet to get the token from the Windows Authentication Site. If you are using other Identity Providers, you must obtain the token appropriately.

    $token = Get-MgmtSvcToken -Type 'Windows' -AuthenticationSite 'https://myenvironment:30072' -ClientRealm 'http://azureservices/AdminSite'

    Once you have the identity token, you can use the Add-MgmtSvcUser cmdlet to create a Tenant user.

    Add-MgmtSvcUser -AdminUri 'https://myenvironment:30004' -Token $token -Name 'user@address.com' -Email 'user@address.com' -State 'Active'

    Note: If you are using this snippet in a test environment with self-signed certificates, don’t forget to use the -DisableCertificateValidation parameter. You shouldn’t need this in production environments that have certificates from a trusted CA.
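
    Putting the two cmdlets together, a minimal sketch for a test environment might look like this (the URLs and account are the example values used above; the -DisableCertificateValidation switch on Get-MgmtSvcToken is assumed to be available in your build):

    # Get an admin identity token from the Windows Authentication Site (test environment with self-signed certificates)
    $token = Get-MgmtSvcToken -Type 'Windows' -AuthenticationSite 'https://myenvironment:30072' -ClientRealm 'http://azureservices/AdminSite' -DisableCertificateValidation

    # Create the tenant user in the Service Management API layer
    Add-MgmtSvcUser -AdminUri 'https://myenvironment:30004' -Token $token -Name 'user@address.com' -Email 'user@address.com' -State 'Active' -DisableCertificateValidation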

    The Admin API Client Sample provides you with an easy interface to perform all the Admin actions for the Windows Azure Pack. As mentioned above, you can download the API Client from the Windows Azure Pack: Service Management API Samples page. The following example uses a method found as part of the API Client solution. Apart from using the API Client, you can also make a raw HTTP call directly to the API layer using the reference at How to Create a Windows Azure Pack Tenant User.

    Use the App.Config file to specify the application settings (Alternatively, you can specify these within the main method).

    <appSettings>
      <add key="windowsAuthEndpoint" value="https://myenvironment:30072" />
      <add key="adminDomainName" value="<your admin domain>" />
      <add key="adminUsername" value="<your admin username>" />
      <add key="adminPassword" value="<your admin password>" />
      <add key="adminApiEndpoint" value="https://myenvironment:30004" />
    </appSettings>

    Read the values from the App.Config and use the snippet below to create a user in the API layer.

    Note: The TokenIssuer.GetWindowsAuthToken() method is present in the API Clients solution that can be downloaded from the Windows Azure Pack: Service Management API Samples page.

    string windowsAuthEndpoint = ConfigurationManager.AppSettings["windowsAuthEndpoint"];
    string adminDomainName = ConfigurationManager.AppSettings["adminDomainName"];
    string adminUsername = ConfigurationManager.AppSettings["adminUsername"];
    string adminPassword = ConfigurationManager.AppSettings["adminPassword"];
    string adminApiEndpoint = ConfigurationManager.AppSettings["adminApiEndpoint"];

    // The tenant user to create; the email address doubles as the username
    string emailAddress = "user@address.com";

    var token = TokenIssuer.GetWindowsAuthToken(windowsAuthEndpoint, adminDomainName, adminUsername, adminPassword);
    using (var myAdminClient = new AdminManagementClient(new Uri(adminApiEndpoint), token))
    {
        var userInfo = new User()
        {
            Name = emailAddress,
            Email = emailAddress,
            State = UserState.Active,
        };
        return myAdminClient.CreateUserAsync(userInfo).Result;
    }

    In summary, creating users in WAP involves two steps:

    1. Creating users in the Authentication system – requires username, password and other information required to identify the user
    2. Creating users in the Service Management API layer – requires the username that will be provided by the Authentication system

    You can download the sample at http://go.microsoft.com/fwlink/?LinkId=324039