
A Look At : DevOps and DNS – What Every Developer Should Know

Over the years I have had the pleasure of working alongside many really smart, switched-on people in the development community. I’ve learnt many intermediate and advanced programming skills from them. Yet when it comes to understanding the very basics of how the internet functions using DNS, most of these same experienced developers haven’t got a clue.

I wrote this post to hopefully pay back some of the awesome karma they have earned helping me over the years, by teaching them something in return. Let’s learn about DNS.

DNS is a huge part of the inner workings of the internet. Web developers spend a considerable number of man-hours a year ensuring the sites they build are fast and respond well to user interaction – setting up expensive CDNs, recompressing images, minifying script files and much more – but what a lot of us don’t understand is that DNS server configuration can make a big difference to the speed of your site. Hopefully by the end of this post you’ll feel empowered to get the most out of this part of your website’s configuration.

What I will cover in this post: why DNS matters to you, how the internet’s DNS actually works, the common DNS record types, how to set up a domain from scratch, and how to investigate common problems.

Why does DNS matter to you?

Well it’s simple – if you are a developer it matters to you because:

  • You own a domain name, and up until now your webhost or registrar has taken care of your DNS for you – but you need to know what’s going on in case something bad happens…
  • Maybe the webhost you have allows you to manage your own DNS using a web interface, but you haven’t a clue what you are doing.
  • The DNS that your webhost or ISP offers you is probably not the fastest – if your website grows over time, you probably want to set up your own DNS or manage it through a dedicated service such as DNSMadeEasy, ZoneEdit or DynDNS.

First up: How the internet works (DNS)

If you already know how this works feel free to skip ahead.


In really simple terms, when you enter a URL and hit enter, apart from magical unicorns rendering the requested page in your browser window, the interwebs works kinda like this:

  • You want to visit a domain name, so your PC first checks its internal DNS cache to see if it’s looked it up recently – if so, it uses this cached record.
  • Your PC then asks your DNS server (probably configured by your router or ISP when you first started your PC) for the IP address of the server hosting the domain name you want to visit.
  • Your ISP’s DNS server looks up the root DNS servers for the world to find out who takes care of the DNS configuration for the domain you want to visit.
  • Your ISP’s DNS server then asks this authoritative DNS server for the IP information of the domain name you want, fetches it, caches it, and then returns it to your PC.
  • Your browser connects to this IP address and asks for a web page.

There are a number of special-case scenarios that can change the behaviour above, but I’m not going to cover everything in this post.

What DNS does do:

  • Converts hostnames to IP addresses.
  • Stores mail delivery information for a domain.
  • Stores miscellaneous information against a domain name (TXT records).

What DNS doesn’t/cannot do:

  • Redirect users to a different server/site.
  • Configure which port the client is connecting to (not entirely true; SRV records are used for protocol/port mappings for services).

Tools for the Job

One of the coolest things about the tools you’ll need for this blog post: no matter which operating system you are using, you almost certainly have everything you need to query and test the DNS configuration of your website installed right now, without even knowing it.

The Swiss Army Knife of DNS inspection is the command line tool NSLOOKUP. This is installed by default in nearly every OS you’ll ever need it on.

NSLOOKUP on Windows

NSLOOKUP on Unix/Linux/Mac OSX

Another cool thing: the usage is the same on most platforms as well.

To run NSLOOKUP simply open a terminal/command prompt and type

nslookup

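For example, launching it on a Windows machine looks something like this (the server name and address shown here are illustrative – you’ll see your own network’s DNS server):

C:\> nslookup
Default Server:  dns01.mycompany.internal
Address:  192.168.1.1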

You’ll notice in the example above that the first thing NSLOOKUP tells you upon launch is the current DNS server that it will use for its lookups.

By default NSLOOKUP will use your current machine’s DNS settings for its lookups. This can sometimes give you different results from the rest of the world, as the internal DNS server at your workplace or ISP may return different results – for example, routing your office mail to the internal mail server IP rather than the external internet/DMZ IP address.

Let’s change this to use Google’s global DNS server to get a better global view (what others see when they surf the web outside my network) on our DNS queries, by typing:

server 8.8.8.8

Now if I query this blog’s domain name “diaryofaninja.com.” (ensure you place the additional period on the end of this query to stop any internal DNS suffixes from being appended) I should get back the A record for my domain. (A records are the default query type used by NSLOOKUP – I discuss DNS record types further below.)

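The full exchange looks something like this (the returned IP address here is illustrative):

> server 8.8.8.8
Default Server:  dns.google
Address:  8.8.8.8

> diaryofaninja.com.
Server:  dns.google
Address:  8.8.8.8

Non-authoritative answer:
Name:    diaryofaninja.com
Address:  203.0.113.10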

An overview of common DNS record types

Below is a simple overview of all the common types of DNS records and some example scenarios.

All records usually share the following common properties:

Value – this is usually the contents of the record. If it is an A record, this is the IP address for that hostname.

TTL – this is the “Time To Live” in seconds for a DNS record, and it basically means that DNS clients or servers accessing the requested record should not cache it any longer than this value. A value of 3600 means the returned record’s value can be cached for an hour. (These values are usually the reason IT people talk about DNS changes taking “24-48 hours”: TTLs are often set quite high on hostnames that rarely change, so the records perform well by staying in cache.)

SOA (Start of Authority) records

SOA (Start of Authority) records sit at the root of your domain’s DNS registration. When you register a domain, your registrar creates delegation records in the parent domain’s DNS servers (in the case of a .com domain, in the DNS servers for the .com root zone) listing the hostnames or IP addresses of your domain’s DNS servers; your own DNS server then holds the SOA record marking the start of your zone. Together these records tell the internet’s root DNS servers (the mother-ship DNS servers) where to ask for the rest of your domain’s DNS configuration (such as A, MX and TXT records). When a client (a web browser, a mail server, an FTP client etc.) wants to connect to part of your website, it asks its locally configured DNS server for the record – that server in turn looks up the delegation and SOA for your domain so it knows which DNS server to ask about it.

Consider these records the answer to the question “which DNS server stores all the information about the website I want to look up?”.

Hostname (A and CNAME) records

A records store information about a hostname record for your domain name. These list the IP address that a client should talk to when using a certain hostname.

If you typed the address http://mywebsite.examplecompany.com into your web browser, this would refer to the A record “mywebsite” on the domain “examplecompany.com”.

If you have multiple A records with the same hostname, clients will receive a list of all the records. The order of this list rotates each time you query the DNS server – this is called round-robin DNS, and it’s a simple way to spread load across multiple servers.
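You can watch round-robin happen in NSLOOKUP – query a hostname that has multiple A records twice and the order of the returned addresses will typically rotate (the hostname and addresses below are made up for illustration):

> www.examplecompany.com.
Name:    www.examplecompany.com
Addresses:  203.0.113.10
          203.0.113.11

> www.examplecompany.com.
Name:    www.examplecompany.com
Addresses:  203.0.113.11
          203.0.113.10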

AAAA records are the same as A records, only they store the 128-bit IPv6 address of a server instead of the IPv4 address – as the world shifts to IPv6 these records will gain more relevance, and if your webhost supports IPv6 it’s worth setting these records up now so that any visitors using IPv6 can access your website.

A CNAME record (Canonical Name) is basically an alias for an A record. It tells whoever is asking that the DNS information for the requested hostname is stored in another record somewhere else on the internet. This other record might not even be on the same domain name or the same DNS server. CNAMEs are very powerful as they allow you to simplify your domain’s DNS records by centralising the information somewhere else. ISPs and webhosts commonly use CNAMEs to centralise the DNS configuration storage for things like mail or web servers, by allowing you to keep all the configuration details on a parent domain name.

It is important to note that the root record for a domain name (i.e. the empty A record for mydomain.com) cannot be a CNAME. The simple hard-and-fast reason why is that a CNAME cannot live on the same node in a DNS zone as any other type of record – the very nature of a CNAME record defines that all configuration for that node is stored somewhere else, and given that you store other information at the root of your domain besides your A record (MX records for mail etc.), this would break every other record’s functionality. This is mentioned specifically in the RFC for DNS (RFC 1034), section 3.6.2.

A common example of CNAME usage: most webhosting companies’ web servers have a hostname such as web0234.mywebhost.com.

When setting up your website, your webhost might for instance make the “www.yourwebsite.com” record a CNAME with the value “web0234.mywebhost.com”, so that when trying to access “www.yourwebsite.com” DNS clients look up the IP address for “web0234.mywebhost.com”. This makes the webhost’s life easier if the IP address of this web server changes, as they only have to update a single DNS record instead of updating all their clients’ DNS records.

To reiterate this to make it crystal clear:
CNAMEs are not a redirection. They are a reference pointer for a hostname. All they tell DNS clients is that the configuration information for the hostname being queried is the same as can be found by querying the other hostname.
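You can see this clearly in NSLOOKUP – querying the alias returns the canonical record it points at, not a redirect (the names and address below are illustrative):

> www.yourwebsite.com.
Non-authoritative answer:
Name:    web0234.mywebhost.com
Address:  203.0.113.20
Aliases:  www.yourwebsite.com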

Illustration – Visiting a website

In the case that you want to visit http://www.google.com your computer does the following:

  1. Using the local machine’s DNS client your operating system talks to the locally configured DNS server for your local network/ISP’s network.
  2. This DNS server in turn looks up the DNS server for google.com by first looking up the delegation (NS records) for google.com and then connecting to the DNS server listed.
  3. Your local DNS server then asks the DNS server for google.com for the A record listed for www – the google.com DNS server will return an IP address for http://www.google.com. Your ISP or local network’s DNS server, along with returning it to you, will then cache this record for as long as the TTL (time to live) property of the record says.
  4. Your browser then connects to the returned IP address on port 80 and asks for the web page.

All of the above happens in milliseconds – but you can understand that if the google.com DNS server is slow in responding this negatively affects your browsing experience.

A records and CNAME records have a TTL (Time To Live) property to indicate how long they can be cached for.

Mail (MX)

MX records are the internet’s way of telling mail where to be delivered. They list the hostname or IP address of the mail server that handles mail for a given domain name. If a mail server is looking to deliver mail to “examplecompany.com”, it will look up the MX record for this domain.

MX records have both a TTL (Time To Live) and a Priority (a weighting to give the order in which they should be looked up).

Illustration – Sending an e-mail to a friend

In the case that you send an email to your friend at myfriend@otherexamplecompany.com your local SMTP mail server (usually at your ISP) does the following:

  1. Your mail server connects to its local network/ISP’s DNS server and asks for the MX record for otherexamplecompany.com.
  2. Your local DNS server or ISP’s DNS server looks up the delegation (NS records) for otherexamplecompany.com and then connects to the DNS server listed.
  3. It asks for the MX records for this domain and is returned a list of hostnames.
  4. It grabs the first hostname from the list (ordered ascending by Priority), runs a second query for the IP address of this mail server, and returns this IP address to your mail server.
  5. Your mail server then connects to this IP address on the SMTP TCP port 25 and delivers your mail.

Text Records (TXT)

TXT records are a powerful addition to the DNS standard that allow the storage of miscellaneous information for a hostname. Many web developers, system admins and the like use TXT records for the storage of information such as SPF records and DKIM public keys.

TXT records have a TTL (Time To Live) property to indicate how long they can be cached for.
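For example, you can view the SPF policy a domain publishes in its TXT records like so (the record value below is just a typical illustrative SPF string):

> set type=txt
> examplecompany.com.
examplecompany.com      text = "v=spf1 mx include:_spf.examplemailhost.com -all"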

Name Server Records (NS)

Name Server Records are placed in your domain’s DNS when you wish to store the configuration of part of your domain’s DNS on a separate DNS server. This can be very handy if you want to give control of a subdomain to another person/entity.

For example: my site is http://www.widgetsareus.com and I manage all of the DNS for this domain, but I would like support.widgetsareus.com and any child subdomains to be managed by the company we outsource our customer support to – so I set up an NS record for support.widgetsareus.com pointing at our support partner’s DNS servers.
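Querying the delegated subdomain then returns the partner’s name servers (the hostnames below are hypothetical):

> set type=ns
> support.widgetsareus.com.
support.widgetsareus.com        nameserver = ns1.supportpartner.com
support.widgetsareus.com        nameserver = ns2.supportpartner.com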

Setting up a domain from scratch

If you are setting up a domain you’ve just purchased from scratch you’ll need to do the following:

Setup your website (A records)

  1. Setup a DNS server to store the configuration for yourdomain.com
    This might be at your webhost, or might be a third party service such as DNSMadeEasy, ZoneEdit or DynDNS.
  2. Set the name server (NS) records for your domain name at your domain registrar to point at the above DNS server’s hostname or IP address.
  3. Create a new root record pointing at your webserver’s IP address (this is simply an A record with an empty hostname) in your domain name’s DNS zone.
  4. Create a new www A record that points at your webserver’s IP address in your domain’s DNS server.
  5. Setup your webserver’s website to listen for the host-header of your domain name (IIS calls this a “binding”).
  6. Test your DNS as below.
  7. Try and access your site in a web browser.

Testing your website’s A record

In a command prompt/terminal type NSLOOKUP

Enter “yourdomainname.com.” (including the extra period on the end) and hit enter

Check that the returned record value/IP address is that of your web server.

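A successful test looks something like this (the server and addresses shown are illustrative):

> yourdomainname.com.
Server:  dns.google
Address:  8.8.8.8

Non-authoritative answer:
Name:    yourdomainname.com
Address:  203.0.113.10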

Remember to do the same for “www.yourdomainname.com.” if you also use www. in your domain name.

Setup your website’s mail (MX record)

  1. Setup a DNS server to store the configuration for yourdomain.com  (Follow steps 1 and 2 above from your website if you haven’t already).
  2. Create a new MX record that points at your mail server’s IP address or hostname.
  3. Set up your mail server to listen for and receive mail for yourdomain.com.
  4. Test that all of the above is set up correctly using NSLOOKUP as per below.
  5. Try to send and receive email to and from your domain name.
  6. Set up SPF records to verify your mail server’s ability to send mail on behalf of your domain name.

Testing your website’s MX record

In a command prompt/terminal type NSLOOKUP

Enter “set type=mx” and hit enter. This sets the query type to MX records.

Enter “yourdomainname.com.” (including the extra period on the end) and hit enter

Check that the returned record values/IP addresses are those of your mail server.

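A healthy result looks something like this (mail server hostnames and priorities are illustrative):

> set type=mx
> yourdomainname.com.
Non-authoritative answer:
yourdomainname.com      MX preference = 10, mail exchanger = mail1.yourdomainname.com
yourdomainname.com      MX preference = 20, mail exchanger = mail2.yourdomainname.com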

Investigating Common Problems

How do I check which DNS server is authoritative for my domain name?

You’ve set up your website’s DNS, everything is fine; then one day, everyone visiting your site is directed to a site that isn’t yours!

To check which DNS server is authoritative for your domain name, first open a command prompt or terminal.

Type “NSLOOKUP” and hit enter

Type “set type=ns” and hit enter. This sets the query type to NS (NameServer) records.

Type “yourdomainname.com.” and hit enter (make sure you put the extra dot on the end.)

Confirm that the name servers returned are yours.

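For example (the name servers shown are illustrative):

> set type=ns
> yourdomainname.com.
Non-authoritative answer:
yourdomainname.com      nameserver = ns1.yourdnshost.com
yourdomainname.com      nameserver = ns2.yourdnshost.com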

How do I check what IP address my site is currently pointing at?

In a command prompt/terminal launch NSLOOKUP

Enter “yourdomainname.com.” (including the extra period on the end) and hit enter

Check that the returned record value/IP address is that of your web server.


Remember to do the same for “www.yourdomainname.com.” if you also use www. in your domain name.

What is split DNS?

Split DNS is when you run two separate DNS zones for the same domain name – one on your external DNS servers (for everyone else to see) and one internally, for staff or local users to see.

This allows you to do things like:

  • Ensure local users talk to your mail server (or any other internal server) using the internal IP address, and internet users talk to your mail server’s external DMZ IP address.
  • Block access to certain sites by giving incorrect or different DNS results for those sites’ domain names. This is often how net-nanny-style filtering software works.

For some users my site seems to be served from a different address – how do I check “what the world sees” vs. “what I see”?

Many things can occur that result in some people seeing different DNS results to others:

  • Your ISP/company’s DNS server may have an older cached record than the current live record
  • Your local computer may be caching the DNS record you are requesting
  • Your local DNS server may be fetching the records for your domain from a different authoritative DNS server than the rest of the world.

How do you investigate these things?

The easiest way to investigate these things is to query an external DNS server that you know is good for the records you want, to get a better idea of how the rest of the world sees things.

Really good servers that are easy to remember are the ones run by Google. The primary and secondary servers for Google’s Public DNS are “8.8.8.8” and “8.8.4.4” respectively.

You can use whatever DNS servers you think are more likely to see the correct values.

To do this, open a command prompt/console.

Type “NSLOOKUP” and hit enter

Type “server 8.8.8.8” and hit enter. This sets the DNS server we will query to the Google Public DNS server’s address.

Type “yourdomainname.com.” and check the resulting record values.


How To : Design the Physical Architecture to Support Collaborative Development and ALM of SharePoint Foundation 2010 Application

Introduction

This article explains the physical architecture which best fits collaborative development and ALM of a SharePoint Foundation 2010 application, which servers and tools are needed, and how they play key roles in the ALM of SharePoint Foundation 2010. The purpose of this article is to provide an overall understanding of how the various servers and farms are connected to each other in SharePoint Foundation.

Background

A basic understanding of the different server operating systems and SharePoint Foundation 2010 is required.

Solution

Application Life-cycle Management (ALM) is the co-ordination of development life-cycle activities—including requirements, modeling, development, build, and testing. Recently, ALM has expanded beyond the application and the software development life cycle to also include business solution governance, infrastructure management, operations, and support.

You can use ALM to help align your organization in the context of a software solution in business, development, and operations. With an application development platform that supports ALM, you can provide integration between the various tools used and activities performed within each of these capabilities.

There are four main types of staging environments, plus the standalone developer’s environment, which play a key role in the ALM of a SharePoint 2010 application:

  1. Development SharePoint Farm
  2. Team Foundation Server
  3. Integration/Testing Farm
  4. Production Farm
    +
    Developer’s Workstation

The figure below is a physical architecture which depicts how each server is interconnected to support collaborative development and ALM for a SharePoint Foundation 2010 application:


Development SharePoint Farm

A SharePoint farm is fundamentally a collection of SharePoint role servers that provide for the base infrastructure required to house SharePoint sites. The farm level is the highest level of SharePoint architecture, providing a distinct operational boundary for a SharePoint environment. Each farm in an environment is a self-encompassing unit made up of one or more servers, such as web servers, service application servers, and SharePoint database servers.

A SharePoint development farm is needed because developers in an organization that makes heavy use of SharePoint often need environments to test new applications, web parts, solutions, and other SharePoint customizations. These developers often need a sandbox area where farm-level features and solutions can be tested.

I have considered a two-tier topology for the SharePoint Foundation 2010 farm. However, the right choice depends entirely on the needs of your application. If your application is a relatively small intranet application, you can choose a single-tier topology; if you are going to integrate another search server with Foundation, you can choose a three-tier topology with an application server as the middle tier (remember that SharePoint Foundation 2010 doesn’t include enterprise search). It may make sense to deploy one or more development farms so that developers have the opportunity to run their tests and develop software for SharePoint independent of the existing production environment.

There are basically two types of servers included in a two-tier development farm of SharePoint Foundation 2010:

  1. Web server
  2. Content database server

In the above figure, there are three front-end web servers and one SharePoint content database server. However, you can choose a single front-end web server connected to the content database server, based on your application’s needs and the architecture of the production environment. All web servers share the same content database. This is called a two-tier deployment farm, where the SharePoint server components and the content database are installed on separate servers. As mentioned before, you can choose a one-tier, two-tier or three-tier deployment topology based on your application architecture and the topology of the production architecture.

Each web server has SharePoint Foundation 2010 and the SharePoint extension for TFS 2010 installed on it. The extension is needed to connect to Team Foundation Server for source control, build management and project management.

Advantages of the Development SharePoint Farm:

  1. A single place where the SharePoint admin can integrate all the final artifacts from multiple developers.
  2. Developers can sync the latest SharePoint site to their standalone developer workstations.
  3. Admins can easily approve artifacts and migrate them to the integration server (see the deployment sketch after this list).
  4. It is a unit-testing environment for developers where they can test dependent functionality or farm-level features.
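To make the approve-and-migrate step concrete, here is a minimal sketch of how a developer’s solution package is typically pushed onto a farm from the SharePoint 2010 Management Shell. The .wsp file name and web application URL are hypothetical placeholders:

# Upload the solution package to the farm’s solution store
Add-SPSolution -LiteralPath "C:\Drops\CompanyWebParts.wsp"

# Deploy it to a web application (this example deploys assemblies to the GAC)
Install-SPSolution -Identity CompanyWebParts.wsp -WebApplication http://devfarm-intranet -GACDeployment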

Team Foundation Server

Team Foundation Server plays a key role in ALM, providing source control, build management and work item tracking. You can have TFS installed on the same server as the content database, but if you are going to use TFS build management it is advisable to run a separate Team Foundation Server, because it uses the CPU intensively when processing builds.

As per the above figure, there are separate Team Foundation Servers connected to the SharePoint farm as well as the standalone development workstations, so that they can provide source control for customized content as well as developers’ artifacts and resources.

Advantages of TFS
  1. Source control for SharePoint artifacts and customization
  2. Build management for SharePoint
  3. Work item and bug tracking tool for SharePoint
  4. Admin console for all management activity
  5. Easy integration with SharePoint Foundation server and VS 2010
  6. Easy check-in & check-out
  7. Web based console to manage ALM activity

Developer’s Workstation

As per the above figure, the developers’ environment includes two developer workstations. In practice, you can have as many workstations as your development team requires.

Developer workstations should run Windows 7 or Windows Vista with a standalone SharePoint Foundation server and a local content database, so that one developer’s work doesn’t affect another’s and each developer can debug artifacts locally.

The developer workstation will have the following installed:

  1. Windows 7 or Windows Vista 64-bit OS
  2. Standalone SharePoint Foundation 2010 server
  3. SharePoint Designer 2010
  4. Visual Studio 2010 (connected to TFS)

The developer workstation should be connected to Team Foundation Server 2010 so that when a developer completes an artifact, he can check it in to TFS and other developers can take the latest code from TFS if needed. This way, parallel development can happen without developers affecting each other’s work.

Integration/Testing Farm

Any production SharePoint environment should have a test environment in which new SharePoint web parts, solutions, service packs, patches, and add-ons can be tested. It is critical to deploy test farms, because many SharePoint add-ons can disrupt or corrupt the formatting or structure of a production environment, and testing these new solutions on separate site collections or web applications is not enough because the solutions often install directly onto the SharePoint servers themselves. If there is an issue, it will be reflected across the entire farm.

The integration or testing server farm should be similar to the existing production environment, with the same add-ons and solutions installed, and should ideally include restores of production site collections to make it as similar as possible to production. All changes and new products or solutions should subsequently be tested in this environment first.

The integration/testing servers will host the final SharePoint sites and site collections as per the business requirements. QA will test all the business functionality here, and the customer can also perform their user acceptance test before going live to the production server.

After the user acceptance test has passed, all of the sites and site collections are deployed to the production server.

Advantages of the integration/testing server:

  1. A clean environment with the same physical architecture as production
  2. QA can test all dependent business functionality in one place
  3. The customer can participate in UAT
  4. Easy deployment/migration from the integration/testing server to the production server

Production Farm

The final stage is rolling your farm into a production environment. At this stage, you will have incorporated the necessary solution and infrastructure adjustments that were identified during the user acceptance test stage. These servers are generally on the customer’s premises; the development and testing teams do not have control over them.

There are various 3rd party tools available in the market for SharePoint data protection, administration, migration, compliance and integration.


Summary

In this way, you can design a physical architecture where the development SharePoint farm and developers’ workstations are integrated with TFS 2010. TFS and the content database are connected to the testing farm, where all the artifacts and content are integrated for QA and UAT. Finally, after UAT, everything is deployed to the production farm.

You can use VMs (virtual machines) for all the servers and workstations for an effective infrastructure, because if a server crashes for some reason, you can quickly create a new VM for the needed OS from images.

Note: In the above figure, the integration/testing farm and the production farm are shown as single servers just for clarity; in reality each will be as large as the development farm, with multiple front-end web servers and a content database server. All of the servers run Windows Server 2008 R2 64-bit. See Microsoft’s hardware and software requirements documentation for SharePoint Foundation 2010 for more information.

A Look At : Federated Authentication

More and more organisations are looking to collaborate with partners and customers in their ecosystem to help them achieve mutual goals. SharePoint is a great tool for enabling this collaboration but many organisations are reluctant to create and maintain identities for users from other organisations just to allow access to their own SharePoint farm. It’s hardly surprising; identity management is complex and expensive.

You have to pay for servers to host your identity provider (Microsoft Active Directory if you are using Windows); you have to keep it secure; you have to back it up and ensure that it is always available, and you have to pay for someone to maintain and administer it. Identity management becomes even more complicated when your organisation wants to give external users access to SharePoint; you have to ensure that they can only access SharePoint and can’t gain access to other systems; you have to buy additional client access licenses (CALs) for each external user because by adding them to your Active Directory you are making them an internal user.

 


Microsoft, Google and others all offer identity providers (also known as IdPs or claims providers) that are free to use, and by federating with a third party IdP you shift the ownership and management of identities on to them. You may even find that the partner or customer you are looking to collaborate with offers their own IdP (most likely Active Directory Federation Services if they themselves run Windows). Of course, you have to trust whichever IdP you choose; they will be responsible for authenticating the user instead of you, so you must be confident that they will do a good job. You must also check what pieces of information about a user (also known as claims; for example, name, email address etc.) each IdP offers, to ensure they can tell you enough about a user for your purposes, as they don’t all offer the same claims.

Having introduced support for federated authentication in SharePoint 2010, Microsoft paved the way for us to federate with third party IdPs within SharePoint itself. Unfortunately, configuring SharePoint to do this is fiddly and there is no user interface for doing so (a task made more onerous if you want to federate with multiple IdPs or tweak the configuration at a later date). Fortunately Microsoft has also introduced Azure Access Control Services (ACS) which makes the process of federating with one or more IdPs simple and easy to maintain. ACS is a cloud-based service that enables you to manage the IdPs used by your applications. The following diagram illustrates, at a high-level, the components of ACS.

An ACS namespace is a container for mappings between IdPs and one or more relying parties (the applications that want to use ACS), in our case SharePoint. Associated with each mapping is a rule group which defines how the relying party handles the individual claims associated with an identity. Using rule groups you can choose to hide or expose certain claims to specific relying parties within the namespace.

So by creating an ACS namespace you are in effect creating your own unique IdP that encapsulates the configuration for federating with one or more additional IdPs. A key point to remember is that your ACS namespace can be used by other applications (relying parties) that want to share the same identities, not just SharePoint. 

Once your ACS namespace has been created you need to configure SharePoint to trust it, which most of the time will be a one off task and from that point on you can manage and maintain the IdPs you support from within ACS. The following diagram illustrates, at a high-level, the typical architecture for integrating SharePoint and ACS.

 

In the scenario above the SharePoint web application is using two different claims providers (they are referred to as claims providers in SharePoint rather than IdPs). One is for internal users and trusts an internal AD domain and another is for external users and trusts an ACS namespace.
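To give a flavour of the one-off trust configuration mentioned earlier, here is a minimal SharePoint 2010 Management Shell sketch for registering an ACS namespace as a trusted claims provider. The certificate path, realm, sign-in URL and the choice of email address as the identifier claim are all hypothetical placeholders you would swap for your own namespace’s values:

# Trust the token-signing certificate downloaded from the ACS namespace
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\Certs\acs-signing.cer")
New-SPTrustedRootAuthority -Name "ACS Token Signing" -Certificate $cert

# Map the incoming email address claim so SharePoint can use it to identify users
$email = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" -IncomingClaimTypeDisplayName "Email" -SameAsIncoming

# Register the ACS namespace as a trusted identity provider ("Azure Provider" below)
New-SPTrustedIdentityTokenIssuer -Name "Azure Provider" -Description "Federation via ACS" -Realm "urn:sharepoint:external" -ImportTrustCertificate $cert -ClaimsMappings $email -SignInUrl "https://yournamespace.accesscontrol.windows.net/v2/wsfederation" -IdentifierClaim $email.InputClaimType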

When a user tries to access a site within the web application they will get the default SharePoint Sign In page asking them which provider they want to use.

This page can be customised and branded as required. If the user selects Windows Authentication they will get the standard authentication dialog. If they select Azure Provider (or whatever you happen to have called your claims provider) they will be redirected to your ACS Sign In page.

Again this page can be customised and branded as required. By clicking on one of the IdPs the user will be redirected to the appropriate Sign In page. Once they have been successfully authenticated by the IdP they will be redirected back to SharePoint.

 

Conclusion

By integrating SharePoint with ACS you can simplify the process of giving external users access to SharePoint. It could also save you money in licence fees and administration costs[i].

An important point to bear in mind when planning federated authentication for SharePoint is that in order for Search to be able to index content within SharePoint, you must enable Windows authentication on at least one zone within your web application. Also, if you use a reverse proxy to perform authentication, such as Microsoft Threat Management Gateway, before allowing traffic to hit your SharePoint servers, you will need to disable the authentication checks.

 

[i] The licensing model for external users differs between SharePoint 2010 and SharePoint 2013. With SharePoint 2010, if you expose your farm to external users, either anonymously or not, you have to purchase a separate licence for each server. The licence covers you for any number of external users and you do not need to buy a CAL for each user. With SharePoint 2013, Microsoft did away with the server licence for external users, and you still don’t need to buy CALs for them.