EUC Weekly Digest – June 24, 2017

Last Modified: Jun 24, 2017 @ 7:40 am

Here are some EUC items I found interesting last week. For more immediate updates, follow me at http://twitter.com/cstalhood.

For a list of updates at carlstalhood.com, see the Detailed Change Log.

 


NetScaler Essential Concepts: Part 2 – Certificates/SSL, Authentication, HTTP, VPN Networking, PXE, GSLB

Last Modified: Jun 25, 2017 @ 5:33 pm

HTTP Encryption

SSL Protocol, and Keys

SSL/TLS Protocol – SSL (Secure Sockets Layer) and TLS (Transport Layer Security) are two names for encrypted HTTP. SSL is the older, more well-known name, and TLS is the newer, less well-known name. The names can usually be used interchangeably, although pedantic people will insist on TLS instead of SSL. In NetScaler, you’ll mostly see the term SSL instead of TLS.

  • Port 443 – Web Servers that support SSL/TLS protocol listen for encrypted HTTP on TCP 443. Enabling SSL/TLS on a web server requires creation of keys/certificate, and binding them to a TCP 443 listener.
  • https protocol – Users enter https://FQDN into a web browser to connect to a web server using SSL/TLS protocol on TCP 443. Technically, https = HTTP on top of SSL/TLS.
  • Disable http? – You can optionally disable clear-text HTTP on TCP 80. Or you can leave the TCP 80 listener and configure it to redirect connections to TCP 443 (https).
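The optional TCP 80 redirect can be sketched with Python's standard library. This is a minimal illustration, not a NetScaler feature; the hostname and path are hypothetical, and an ephemeral port stands in for port 80:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectHandler(BaseHTTPRequestHandler):
    """Minimal clear-text listener that redirects every request to https."""

    def do_GET(self):
        # Redirect to the same host and path, but on the https listener.
        host = self.headers.get("Host", "www.example.com").split(":")[0]
        self.send_response(301)  # permanent redirect
        self.send_header("Location", "https://" + host + self.path)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet

# Bind to an ephemeral port (a real deployment would listen on TCP 80).
server = HTTPServer(("127.0.0.1", 0), RedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/store", headers={"Host": "www.example.com"})
resp = conn.getresponse()
print(resp.status, resp.getheader("Location"))  # 301 https://www.example.com/store
server.shutdown()
```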

SSL/TLS version – There are several versions of the SSL/TLS protocol. TLS v1.2 is the newest, but TLS v1.3 is coming soon. PCI compliance dictates that TLS v1.0 and v1.1 should be disabled, but not all software is compatible with TLS v1.2 yet. SSL v3 is the older protocol, and must be disabled on all SSL vServers. All versions of TLS are newer than SSL v3.

  • SSLLabs.com can check your SSL Listener to make sure it adheres to the latest SSL security standards.

SSL Session – HTTP is transmitted on top of SSL/TLS. First, the SSL Client and SSL Server create an encrypted SSL Session. Then HTTP is transmitted across this SSL Session.

  • SSL Handshake is expensive – Establishing the SSL session (SSL Handshake) is an expensive (CPU) operation, and modern web servers and web browsers try to minimize how often it occurs, preferably without compromising security.
  • Bulk encryption – The traffic on top of the established SSL Session is bulk encrypted, which is far less impactful than initial SSL Session establishment.
  • SSL Specs – The NetScaler data sheets provide different numbers for SSL Transactions/sec (initial session establishment) and SSL Throughput (bulk encryption).

Public/Private key pair – The SSL vServer needs a public/private key pair. The Public Key and Private Key are linked together. Data encrypted by the Public Key can only be decrypted by the matching Private Key. Data encrypted by the Private Key can only be decrypted by the matching Public Key.

  • Private key – The Private Key is called private because it needs to remain private. You must make sure that the Private Key is never revealed to any unauthorized individual. If that were to occur, the unauthorized person could use the Private Key to emulate the web server, and unsuspecting users might submit private data, including passwords, to the hacker.
  • Public key – The Public Key is called public because it doesn’t matter who has it. The public key is worthless without also having access to the private key.

Key size – When you create a public/private key pair, you specify a key size in bits (e.g. 2048 bits). The higher the bit size, the harder it is to crack. However, larger key sizes mean exponentially more processing power required. 2048 is the current recommended key size, even for Certificate Authorities. 2048 balances security with performance.

Private/Public Key vs Session Key – The Public/Private key pair is only used during initial SSL session establishment. The bulk of the encryption is performed using a new Session Key instead of the Public/Private key pair. The Session Key is generated by the SSL Client and sent to the SSL Server so both parties know the Session Key. Traffic encrypted by the Session Key can only be decrypted by the Session Key.

  • Session Keys are Symmetric Encryption – Since only one Session Key is used to both encrypt and decrypt, this is called Symmetric Encryption.
  • Public/Private Keys are Asymmetric Encryption – In a public/private key pair, different keys are used for encryption and decryption. This is called Asymmetric Encryption.

How the SSL Session Key is generated

  1. The SSL Client initiates the SSL Connection with the SSL Server.
  2. The SSL Server sends its Public Key to the SSL Client, usually in a certificate.
  3. The SSL Client generates a random SSL Session Key.
  4. The SSL Client encrypts the Session Key using the SSL Server’s Public Key, and sends the encrypted Session Key to the SSL Server.
  5. The SSL Server uses its Private Key to decrypt the SSL Session Key.
  6. Now both sides know the Session Key and can use that Session Key to perform bulk encryption.
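The steps above can be sketched with a toy RSA key pair. The numbers are the classic textbook values and are far too small for real use (real keys are 2048 bits); this is an illustration of the math, not a real handshake:

```python
import secrets

# Toy RSA key pair (illustration only -- real keys are 2048+ bits).
# Public key: (n, e).  Private key: d.
p, q = 61, 53
n = p * q                  # 3233, published in the server's certificate
e = 17                     # public exponent
d = 2753                   # private exponent: (e * d) % ((p-1)*(q-1)) == 1

# Step 3: the SSL Client generates a random Session Key (small here).
session_key = secrets.randbelow(n - 2) + 2

# Step 4: the client encrypts the Session Key with the server's Public Key.
encrypted_key = pow(session_key, e, n)

# Step 5: the server decrypts it with its Private Key.
decrypted_key = pow(encrypted_key, d, n)

# Step 6: both sides now share the Session Key for symmetric bulk encryption.
assert decrypted_key == session_key
```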

Session Key size – the Session Key is much shorter than the private/public keys. For example, the Session Key can be 256 or 384 bits, while the public/private keys are 2048 bits. Because of their shorter size, Session Key operations are much faster than public/private key operations. The length of the Session Key depends on the negotiated cipher, as detailed later.

Renegotiation – SSL Clients and SSL Servers will sometimes want to redo the SSL Handshake while in the middle of an SSL Session. This is called Renegotiation.

Forward Secrecy:

  • Without Forward Secrecy, if you take a packet trace, you can use the SSL Server’s Private Key to decrypt the Session Keys in the packet trace, and then use the decrypted Session Keys to decrypt the rest of the packet trace.
  • With Forward Secrecy, an ephemeral key pair is generated every time a new SSL Session is established. Thus every session has a different private key. It’s not possible to later recover the per-session private key and use it to decrypt the Session Key.
  • DH vs RSA – the Diffie-Hellman (DH) Key Exchange algorithm enables Forward Secrecy. RSA Key Exchange does not. ECDHE is the modern version of DH Key Exchange that is preferred by security professionals.
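Diffie-Hellman key exchange, the basis of Forward Secrecy, can be sketched with toy numbers (real deployments use far larger parameters, or elliptic curves as in ECDHE):

```python
import secrets

# Public DH parameters, known to everyone (toy sizes for illustration).
p, g = 23, 5

# Each side picks a fresh private value per session -- this is what makes
# the exchange "ephemeral": the values are discarded when the session ends.
a = secrets.randbelow(p - 2) + 1   # client's per-session secret
b = secrets.randbelow(p - 2) + 1   # server's per-session secret

A = pow(g, a, p)   # client sends A in the clear
B = pow(g, b, p)   # server sends B in the clear

# Both sides derive the same shared secret.  An eavesdropper who captured
# A, B, p, and g (and even the server's long-term private key) cannot.
client_secret = pow(B, a, p)
server_secret = pow(A, b, p)
assert client_secret == server_secret
```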

Certificates

Public Key in Server Certificate – The SSL Server’s Public Key is included in the SSL Server’s Certificate, which is transmitted to the SSL Client when the SSL Client initially connects. The certificate contains the public key, plus additional fields like: Subject, Issuer, Validity Dates, CA Signature, Certificate Revocation Location, etc.

  • Keys and certificates are different things – The public/private key pair, and the certificate, are two different things. First, you create a public/private key pair. Then you create a certificate that contains the public key. Windows hides this operation from you, so many Windows admins are not aware that the keys are separate from the certificate. NetScaler, however, is UNIX-based and stores the keys and certificate in separate files.

Certificates provide a form of authentication – The SSL Client needs to authenticate the SSL-enabled web server so that the SSL Client only sends confidential data to trusted web servers. There are several fields in the SSL Server’s certificate that clients use to verify web server authenticity. Each of these is detailed later in this section.

  • FQDN – the hostname entered in the browser’s address bar must match the FQDN in the server’s certificate.
  • CA Signature – the server’s certificate must be signed by a trusted Certificate Authority that has verified the owner of the website.
  • Validity Dates – the certificate must not be expired.
  • Revocation – the certificate must not be revoked.

Types of certificates – there are different types of certificates for different use cases. All certificates are essentially the same, but some of the certificate fields control how a certificate can be used:

  • Server Certificates – when linked to a private key, these certificates enable encrypted HTTP.
  • CA Certificates – used by an SSL client to verify the CA signatures on a Server Certificate.
  • Client Certificates – used by clients to authenticate the client machine or user to the web server. Requires a private key on the client side.
  • SAML Certificate – self-signed certificate exchanged with a different organization to authenticate SAML Messages (federation).
  • Code Signing Certificate – developers use this certificate to sign the applications they developed.

Digital Signatures – Signatures are used to verify that a file has not been modified in any way. A Hashing Algorithm (e.g. SHA256) produces a hash of the file. A Private Key encrypts the hash. When a machine receives the signed file, the receiving machine generates its own hash of the file. Then it decrypts the file’s signature using the Public Key that is linked with the Private Key that encrypted it, and compares the hash in the file with the hash that the receiving machine generated. If a third party used its Private Key to encrypt the hash, then the receiving machine needs a copy of the third party’s Public Key to decrypt the hash.

  • Third party Signatures – A file can be hashed and signed by the same organization that created the file. Or the file can be hashed and signed by a different organization (third party). When a third party signs a file that it did not generate, the assumption is that the 3rd party verified the authenticity of the file. Only the third party’s Public Key can be used to verify the signature.
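The hash-then-sign flow can be sketched with SHA-256 and the same kind of toy RSA key used for illustration earlier (textbook numbers, nowhere near real-world strength):

```python
import hashlib

# Toy RSA key pair of the signer (illustration only -- real keys are 2048 bits).
n, e, d = 3233, 17, 2753

document = b"Example file contents"

# Signer: hash the file, then encrypt the hash with the Private Key.
# (Reduced mod n only because this toy key is tiny.)
digest = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
signature = pow(digest, d, n)

# Receiver: re-hash the file and decrypt the signature with the signer's
# Public Key.  Matching values prove the file was not modified.
check = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
assert pow(signature, e, n) == check
```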

Certificate Signature – Certificates are usually digitally signed by a trusted third party, formally known as the Certificate Authority. This third party verifies that the organization that produced the certificate actually owns the server that is hosting the certificate. The name of the Certificate Authority is contained in the Issuer field of the certificate.

  • CAs use the CA’s Private Key to sign a certificate – The CA generates a hash of the certificate and signs the hash using the CA’s Private Key. Later, the client machine will use the CA’s Public Key to verify the hash.
    • Self-signed certificate – Instead of using a third party’s private key to encrypt the hash, it’s also possible for a certificate to use its own key pair to sign its own certificate. In this case, the Issuer of the certificate and the Subject of the certificate are the same. Most browsers will not accept self-signed certificates.
  • CA Chain – The CA Certificate (with private key) that signed the Server Certificate can itself be signed by a different CA Issuer with its own private key. In this case, the Issuer of the CA’s certificate is a different CA certificate. For example: Intermediate CA 2 signs your Server Certificate. Intermediate CA 1 signs Intermediate CA 2’s certificate. Root CA signs Intermediate CA 1’s certificate.
    • Root CA certificates are self-signed – As you go up the CA certificate chain, you’ll eventually reach a Root CA certificate that is signed by itself, and not by a different CA certificate. Thus all Root CA Certificates are self-signed.
  • Root CA certs are installed on SSL clients. CAs arrange with Microsoft, Google, Mozilla, Apple, etc. to include their Root CA Certificates on client machines. The Root CA certificates contain the Public Keys that clients use to verify CA signatures on certificates.
  • Intermediate CA certificates – In the CA Signature Chain, between the Server Certificate and the Root CA Certificate are Intermediate CA Certificates. These Intermediate CA certificates are not installed on client machines. Instead, they must be transmitted to the SSL Client by the SSL Server. On a NetScaler, you link Intermediate CA Certificates to the Server Certificate, which causes the NetScaler to transmit multiple certificates during the SSL Handshake. On IIS, you simply install the Intermediate Certificate in the web server computer’s Intermediate CA certificate store.
  • The Root CA certificate must never be transmitted by the SSL Server; only the Intermediate CA certificates must be transmitted. The Root CA certificate is already installed on the client machine through an out-of-band operation (e.g. included with the operating system). If the Root CA certificate could be transmitted by the SSL Server, then the entire third party trust model is broken. (SSL Labs will report this as “Chain issues: Contains anchor”.)
  • Internal Certificate Authority – you could build your own internal Certificate Authority and use it to sign your certificates. This can be a Microsoft CA server. Or you can use your NetScaler as a CA.
    • Internal CA Root Cert certificate distribution – Since your CA’s Root Certificate is not installed on the client devices, your internally-signed certificates won’t be trusted. You must find some out-of-band mechanism for distributing the CA Root Certificate. Typically this distribution is performed using Group Policy.
  • Self-signed certificates, and internally-signed certificates, are two different things. If you build your own Certificate Authority, and use it to sign your certificates, that is not self-signed, because the Issuer and Subject are different. Self-signed only occurs when the Issuer and the Subject are the same.
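The chain-walking logic described above can be sketched with certificates reduced to plain dictionaries. All names here are hypothetical, and real validation also checks signatures, dates, and revocation:

```python
# Certificates reduced to their Subject/Issuer fields (hypothetical names).
server_cert  = {"subject": "www.example.com",   "issuer": "Intermediate CA 2"}
intermediate2 = {"subject": "Intermediate CA 2", "issuer": "Intermediate CA 1"}
intermediate1 = {"subject": "Intermediate CA 1", "issuer": "Example Root CA"}

# Root CA certs pre-installed on the client (out-of-band distribution).
trusted_roots = {"Example Root CA"}

# What the SSL Server transmits: server cert plus linked intermediates,
# never the root (sending it is the "Contains anchor" chain issue).
transmitted = [server_cert, intermediate2, intermediate1]

def chain_is_trusted(certs, roots):
    """Walk Issuer -> Subject until a pre-installed root is reached."""
    by_subject = {c["subject"]: c for c in certs}
    cert = certs[0]
    while cert["issuer"] != cert["subject"]:        # not self-signed yet
        if cert["issuer"] in roots:
            return True                             # chain ends at a trusted root
        cert = by_subject.get(cert["issuer"])
        if cert is None:
            return False                            # broken chain: missing intermediate
    return cert["subject"] in roots                 # self-signed: trusted only if a root

print(chain_is_trusted(transmitted, trusted_roots))  # True
```

Note that `chain_is_trusted([server_cert], trusted_roots)` returns False: without the transmitted intermediates, the client cannot connect the server certificate to a pre-installed root.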

To create a certificate – The process to create a certificate is as follows:

  1. Create keyfile – Use OpenSSL or similar to create a public/private key pair file.
  2. Create CSR – Use OpenSSL or similar to create a Certificate Signing Request (CSR). The CSR contains the public key, and several other fields, including: server’s DNS name, and name of the Organization that owns the server.
  3. Send CSR to CA – The CSR is sent to a Certificate Authority (CA) to get a signature. Public CAs usually charge a fee for this service.
  4. CA verifies ownership – The CA verifies that the Organization Name specified in the CSR actually owns the server’s DNS name. One method is to email somebody at the same organization. More stringent verifications include background checks against the company’s DUNS number. Higher verification usually requires a higher fee.
  5. CA signs the certificate – If owner verification is successful, then the CA signs the certificate and sends it back to the administrator. The CA can use multiple chained CA Certificates to sign your Server certificate.
  6. Complete certificate request – The administrator links the signed certificate with the keyfile. In IIS, this is called “Complete Certificate Request”. In NetScaler, when installing a cert-key pair, you browse to both the certificate file, and the key file.
  7. Create a TCP/SSL 443 listener and bind the certificate to it – Configure the web server to use the certificate. In IIS, add an https binding to the Default Web Site and select the certificate. In NetScaler, create an SSL vServer, and bind the certificate to it.

Keys and Certificates are stored in separate files – On UNIX/Linux, certificates/keys are stored in two files – a key file (.key extension), and a certificate file (.pem, .cer, .cert extension). Both are Base64 encoded, which is also known as PEM. PEM is the native format that NetScaler uses.

  • Private Keys on Windows – On Windows, the keys are stored in a hidden portion of the file system that is not intended to be accessed by users. Instead, you can export a certificate and keys to a password-encrypted PFX file, which is also known as PKCS#12. Newer versions of NetScaler can directly import PFX files, while older versions of NetScaler require the PFX files to be converted to PEM format first.
  • Private key linked to certificate? – On Windows, when you double-click a certificate in the IIS Console, or the Certificate MMC snap-in, the bottom of the first General page indicates if there’s a private key linked to this certificate. Only certificates with private keys can be used with IIS website hosting. Certificates without private keys are usually CA certificates (root or intermediate).
  • DER Format – On Windows, if your certificate doesn’t have a private key associated with it, then you can save the certificate to a file in DER format, or in Base64 (PEM) format. Newer versions of NetScaler can directly import DER files, while older versions of NetScaler require you to specify DER format manually while importing the certificate file.
  • Certificate/key file storage on NetScaler – On NetScaler, certificate files and key files are stored in /nsconfig/ssl. The NetScaler configuration for SSL certificates points to files in this directory.

Private Keys should be encrypted – the key file that contains the Private Key should be encrypted, usually with a password. On NetScaler, when creating a key pair, you specify PEM Encoding, and set it to 3DES (Triple DES) encryption. You enter a permanent password, which is used to encrypt the private key.

  • When converting a PFX file to PEM, the new private key in the PEM file should be encrypted. Specify a 3DES password when performing this conversion.
  • Hardware Security Module (HSM) – Another storage option for private keys is a Hardware Security Module (HSM). HSMs are physical devices that can’t be physically compromised. The NetScaler FIPS appliances include a HSM module inside the appliance. Or, you can connect a NetScaler to a network-addressable HSM appliance. The HSM performs all private key operations so that the private key never leaves the HSM device.
  • Smart Card – Another storage option for private keys is a Smart Card. Smart Cards require the user to enter a PIN to unlock the smart card to use the private key, similar to an HSM. Smart Cards are typically used with Client Certificates, which are detailed later.

Certificate Fields

Subject field – One of the fields in the Server Certificate is called Subject. On the far right of the subject is CN=, which means Common Name, which might be familiar to LDAP (e.g. Active Directory) administrators. The Common Name is the DNS name of the web server.

  • User enters FQDN in browser address bar – User opens a browser and enters https://FQDN in the browser’s address bar to connect to the web server.
  • FQDN in browser’s address bar is matched to Certificate Subject Name – The Server Certificate is downloaded from the web server. The Common Name of the certificate is then checked to make sure it matches what the user entered in the browser’s address bar. If they don’t match, then a certificate error is displayed.

Subject Alternative Names – Another related Certificate field is called Subject Alternative Names (SAN). The Subject field (Common Name) only supports a single DNS name. SAN can support as many DNS names as desired.

  • Public CAs charge extra for each additional SAN name. And there are restrictions on the names that can be added to the SAN field.
  • Common Name must be added to SAN list – When you create a CSR and specify a Common Name, Public CAs always add this Common Name to the SAN list. Chrome recently started requiring the FQDN in the browser’s address bar to be in the Server Certificate’s SAN list. For Public CA-issued certificates, this is not a problem, since they automatically add the name to the SAN list. However, if you issued certificates from an internal CA, then the FQDN might be missing from the SAN list.

Wildcard certificate – The Certificate Common Name can be a wildcard (e.g. *.company.com) instead of a single DNS name. This wildcard matches all FQDNs that end in .company.com. The wildcard only matches one word and no periods to the left of .company.com. It will match www.company.com, but it will not match www.gslb.company.com, because there are two words instead of one. It will also not match company.com, because it requires one word where the * is located.

  • Wildcard certificates cost more from Public CAs than single name certificates and SAN certificates.
  • Wildcard certificates are less secure than single name certificates, because you typically use the same wildcard certificate on multiple web servers, and if one of the servers is compromised, then all are compromised.
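The single-label matching rule described above can be sketched as a small function (the hostnames are illustrative):

```python
def wildcard_matches(pattern, fqdn):
    """Return True if an SSL wildcard pattern covers the given FQDN.

    The * covers exactly one DNS label, so *.company.com matches
    www.company.com but not www.gslb.company.com, and not company.com.
    """
    p_labels = pattern.lower().split(".")
    f_labels = fqdn.lower().split(".")
    if len(p_labels) != len(f_labels):
        return False  # * never spans more (or fewer) labels
    for p, f in zip(p_labels, f_labels):
        if p != "*" and p != f:
            return False
    return True

print(wildcard_matches("*.company.com", "www.company.com"))       # True
print(wildcard_matches("*.company.com", "www.gslb.company.com"))  # False
print(wildcard_matches("*.company.com", "company.com"))           # False
```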

CA Signature verification – In addition to verifying the certificate Common Name, the browser also verifies that it trusts the CA signatures. On every client machine is a pre-installed list of CA root certificates. These root certificates are used to verify the CA signatures on the certificate. The signature is created with the CA’s private key, and the signature is verified using the CA’s public key, which is in the CA’s root certificate.

  • Certificate chain – The SSL server’s certificate might be signed by more than one CA certificate, which are linked together in a chain. The SSL cert is signed by one or more intermediate certificates, which are signed by a root certificate.
  • Intermediate certificates – The NetScaler must be configured to send both the SSL cert and the intermediate certificates (usually only one intermediate certificate). The root certificate is already on the client machine so the NetScaler must not send the root. To send both SSL cert and intermediate, you install the intermediate certificate on the NetScaler, and Link them together.

Validity Dates – Another field in the certificate is expiration date. If the date is expired, then the client’s browser will show a certificate error. When you purchase a certificate, it’s only valid between 1 year and 5 years, with CAs charging more for longer terms.

  • Expiration Warning – NetScaler MAS can alert you when a certificate is about to expire. The CA will also remind you to renew it.
  • Renew with existing keys? – When renewing the certificate, you can create a new SSL key pair, or you can use the existing key pair. If you create a new key pair, then you need to create a new CSR and submit it to the CA. If you intend to use the existing keys, then simply download the updated certificate from the CA after paying for the later expiration date.

Certificate Revocation – certificates can be revoked by a CA. The list of revoked certificates is stored at a CA-maintained URL. Inside each SSL certificate is a field called CRL Distribution Points, which contains the URL to the Certificate Revocation List (CRL). Client browsers will connect to the CRL to verify that the SSL server’s certificate has not been revoked. Revoking is usually necessary when the web server’s private key has been compromised.

  • CAs might revoke a certificate when you rekey – if you rekey a certificate, the certificate with the former keys might be revoked. This can be problematic for wildcard certificates that are installed on multiple machines, since you only have a limited time to replace all of them. Pay attention to the CA’s order form to determine how long before the prior certificate is revoked.
  • Online Certificate Status Protocol – An alternative form of revocation checking is Online Certificate Status Protocol. The address for the OCSP server can be found in the certificate’s Authority Information Access field.

Client Certificate – Another type of certificate is Client Certificate. This is a certificate with private key, just like a Server Certificate. Client Certificates are installed on client machines, and usually are not portable (the private key can’t be exported). Client machines use the Client Certificate to authenticate with web servers. This client certificate authentication can simply verify the presence of the certificate on a corporate managed device. Or the user’s username can be extracted from the client certificate and used to authenticate the user.

  • Smart cards have client certificates with private key installed on them. The client certificate on the smart card authenticates the user to the web server (or NetScaler). Users enter a PIN to unlock the smart card so the client certificate’s private key can be used. Smart Cards eliminate the need to enter a password to authenticate with a web server.
  • Virtual Smart Cards – There are also virtual smart cards, which use hardware features of the client device to protect the client certificate. The client’s TPM (Trusted Platform Module) encrypts the private key. If the client certificate were moved to a different device, then the new device’s TPM wouldn’t be able to decrypt the private key. Windows Hello for Business (formerly Passport for Work) is an example of this technology.

SSL Ciphers

Cipher Suites are negotiated – During the SSL Handshake, the SSL Client and the SSL Server negotiate which cipher algorithms to use. A detailed explanation of ciphers would require advanced mathematics, but here are some talking points:

  • Recommended list of ciphers – Security professionals (e.g. OWASP, NIST) publish a list of the recommended, adequately secure, cipher suites. These ciphers are chosen because of their very low likelihood of being brute force decrypted within a reasonable amount of time.
  • Higher security means higher cost. For SSL, this means more CPU on both the web server (NetScaler), and SSL Client. You could go with high bit-size ciphers, but they require exponentially more hardware.
  • Each cipher suite is a combination of cipher technologies. There’s a cipher for key exchange. There’s a cipher for bulk encryption. And there’s a cipher for message authentication. So when somebody says “cipher”, they really mean a suite of ciphers.

Ephemeral keys and Forward Secrecy – ECDHE ciphers are ephemeral, meaning that if you took a network trace, even with the web server’s private key it still isn’t possible to decrypt the traffic. This is also called Forward Secrecy.

  • DHE = Ephemeral Diffie Hellman, which provides Forward Secrecy.
  • EC = Elliptic Curve, which is a formula that allows smaller key sizes, and thus faster computation.
  • How to trace when Forward Secrecy is enabled? – Since Forward Secrecy packet traces can’t be decrypted, when you want to trace SSL traffic for troubleshooting, you must first disable all ECDHE cipher suites.

Ciphers in priority order – The SSL server is configured with a list of supported cipher suites in priority order. The top cipher suite is preferred over lower cipher suites.

  • The highest common cipher between SSL Server and SSL Client is chosen – When the SSL Client starts an SSL connection to an SSL Server, the SSL Client transmits the list of cipher suites that it supports. The SSL Server then chooses the highest matching cipher suite. If neither side supports any of the same cipher suites, then the SSL connection is rejected.
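The server-preference selection described above can be sketched as follows (the suite names are a representative sample, not a recommended list):

```python
def negotiate(server_priority, client_offered):
    """Pick the first cipher suite in the server's priority list that the
    client also offered; None means the SSL connection is rejected."""
    offered = set(client_offered)
    for suite in server_priority:
        if suite in offered:
            return suite
    return None

# Hypothetical priority-ordered server list and a client offer.
server_suites = [
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_RSA_WITH_AES_256_CBC_SHA",
]
client_suites = [
    "TLS_RSA_WITH_AES_256_CBC_SHA",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
]

# The server's order wins, not the client's.
print(negotiate(server_suites, client_suites))
# TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256

# No common suite: the handshake fails.
print(negotiate(server_suites, ["TLS_RSA_WITH_RC4_128_SHA"]))  # None
```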

Citrix-recommended cipher suites – See https://www.citrix.com/blogs/2016/06/09/scoring-an-a-at-ssllabs-com-with-citrix-netscaler-2016-update/ for the list of cipher suites that Citrix currently recommends. Every SSL vServer created on the NetScaler should be configured with this list of cipher suites in the order listed.

  • GCM ciphers seem to be preferred over CBC ciphers.
  • EC (Elliptic Curve) ciphers seem to be preferred over non-EC ciphers.
  • Ephemeral ciphers seem to be preferred over non-Ephemeral ciphers.

Authentication

NetScaler Authentication Overview

NetScaler supports several different authentication mechanisms.

Single Sign-on – Once a user has been authenticated by NetScaler, NetScaler can usually perform Single Sign-on to the back-end resource (webpage, Citrix StoreFront, etc.).

  • Different authentication methods for client and server – The authentication method used to authenticate the user doesn’t have to match the authentication method used for Single Sign-on. For example, NetScaler can use LDAP+RADIUS on the client side, and convert it to Kerberos on the server side.

LDAP for Active Directory authentication – LDAP is just one of the authentication mechanisms supported by Active Directory. Another common authentication mechanism is Kerberos. Kerberos and LDAP are completely different technologies.

  • HTML logon page – LDAP credentials are typically entered in a HTML Form logon page.

RADIUS is typically used for two-factor authentication (e.g. passcodes).

  • Enable RADIUS in the two-factor product – To integrate a two-factor authentication product with NetScaler, you typically enable RADIUS protocol in the two-factor authentication product.

RADIUS is typically combined with LDAP.

  • A single HTML logon page can request the AD password and two-factor passcode at the same time.
  • NetScaler logs the user into Active Directory using LDAP.
  • Two-factor passcodes are verified using RADIUS.

SAML is primarily used to federate authentication between two or more organizations. The resources the user is trying to access (web sites) are in one organization, but authentication is handled by a different organization.

  • With SAML, the user’s password stays at the authenticating organization. The resource organization never sees the user’s password.
  • Trust between the two organizations is provided by certificates. The two organizations share certificates with each other to verify each other’s identity (signatures).

Kerberos is another method of authenticating with Active Directory.

  • Kerberos and LDAP authentication are completely different authentication methods.
  • To perform Kerberos authentication, the client machine must be able to communicate with a Domain Controller. This means Kerberos usually doesn’t work on the Internet, at least not without a VPN connection.
  • If NetScaler uses Kerberos to Single Sign-on to the back-end web servers, then NetScaler needs connectivity to internal DNS and internal Domain Controllers.

OAUTH allows a user to delegate credentials to a program. The program can log into a different Service using the user’s credentials. For example, Outlook with ShareFile Plug-in can automatically log into ShareFile without prompting the user for ShareFile credentials every time Outlook communicates with ShareFile.

  • OpenID Connect is an authentication mechanism for OAUTH. It’s very similar to SAML, except it uses JSON Web Tokens (JWT) instead of SAML Tokens (XML-based).

NSIP is Source IP – By default, NetScaler uses its NSIP (management IP) as the Source IP when communicating with authentication servers.

  • To use SNIP, load balance – You can force authentication to use SNIP by load balancing the authentication servers, even if there’s only one authentication server.

LDAP (Lightweight Directory Access Protocol)

LDAP authentication process:

  1. HTML Form to gather credentials – NetScaler prompts user to enter a username and password, typically from an HTML Form.
  2. Connect to LDAP Server – NetScaler connects to the LDAP Server on TCP 389 or TCP 636, depending on whether encryption is enabled.
  3. NetScaler logs into LDAP using a Bind account. This LDAP Bind account is an Active Directory service account whose password never expires. The only permission it needs is the ability to search the LDAP directory, which is provided by membership in the Domain Users group.
  4. NetScaler sends an LDAP Query to the LDAP Server. The LDAP Query asks the LDAP Server to find the username that was entered in step 1. The LDAP Server returns the user’s full Distinguished Name (DN), which is the full path to the user’s account in the directory.
    • NetScaler’s LDAP Search Filter controls the LDAP Query that is sent in step 4. A common usage is to only return users that are members of a specific AD Group.
  5. Login as user’s DN – NetScaler reconnects to the LDAP Server but this time logs in as the user’s DN (from step 4), and the user’s password (from step 1).
  6. Extract attributes – After authentication, NetScaler can be configured to extract attributes from the user’s LDAP account. A common configuration is to extract the user’s group membership. Another is to get the user’s userPrincipalName so it can be used during Single Sign-on to back-end web servers.
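The bind/search/re-bind sequence above can be simulated against an in-memory stand-in for the directory. Every name, DN, and password here is hypothetical, and a real deployment would use the LDAP protocol rather than dictionary lookups:

```python
# Hypothetical in-memory stand-in for an LDAP directory.
DIRECTORY = {
    "cn=svc-ns,ou=Service,dc=corp,dc=local": {
        "password": "BindPw1",
        "sAMAccountName": "svc-ns",
    },
    "cn=Alice,ou=Users,dc=corp,dc=local": {
        "password": "UserPw2",
        "sAMAccountName": "alice",
        "userPrincipalName": "alice@corp.local",
        "memberOf": ["cn=Citrix-Users,ou=Groups,dc=corp,dc=local"],
    },
}

def bind(dn, password):
    """Simulate an LDAP simple bind: True if the DN/password pair is valid."""
    entry = DIRECTORY.get(dn)
    return entry is not None and entry["password"] == password

def search(attribute, value):
    """Simulate an LDAP query: return the DN whose attribute matches."""
    for dn, entry in DIRECTORY.items():
        if entry.get(attribute) == value:
            return dn
    return None

# Step 3: bind as the service account.
assert bind("cn=svc-ns,ou=Service,dc=corp,dc=local", "BindPw1")

# Step 4: find the DN for the username the user typed into the logon form.
user_dn = search("sAMAccountName", "alice")

# Step 5: re-bind as the user's DN with the user's password.
assert bind(user_dn, "UserPw2")

# Step 6: extract attributes after authentication.
print(DIRECTORY[user_dn]["userPrincipalName"])  # alice@corp.local
```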

Password expiration requires LDAP encryption – If the user’s password has expired, then NetScaler can prompt the user to change the password. However, the user’s password can only be changed if the LDAP connection is encrypted, which means certificates must be installed on the LDAP servers (Active Directory Domain Controllers).

  • NetScaler does not inform the user how long before the user’s password expires.

LDAP Communication Protocols:

  • Clear text LDAP connects to the LDAP Server on TCP 389.
  • Two encrypted LDAP protocols – If the LDAP server has a certificate, then you can use one of two different protocols to connect to the LDAP Server. NetScaler supports both encrypted connection methods.
    • LDAPS – this is encrypted LDAP. It’s a different port than LDAP, just like HTTPS is a different port than HTTP. LDAPS is typically TCP 636 on the LDAP Server.
    • LDAP TLS (StartTLS) – it starts as a clear-text connection to the LDAP Server on TCP 389. Then both sides of the connection negotiate encryption parameters, and switch to encrypted communication on the same TCP 389 connection.

RADIUS (Remote Authentication Dial-In User Service)

RADIUS authentication process:

  1. HTML Form to gather credentials – NetScaler prompts user to enter username and passcode, typically from an HTML Form.
  2. Send login request to RADIUS server – NetScaler sends a login request (Access-Request) to the RADIUS Server on UDP 1812. Since it’s UDP, there’s no transport-level acknowledgment from the Server.
    1. Passcode encryption using shared secret – The user’s passcode in the RADIUS packet is encrypted using a shared secret key that is configured on the NetScaler and RADIUS Server. The secret key entered on NetScaler must match the secret key configured on the RADIUS Server. Each RADIUS Client usually has a different secret key.
    2. RADIUS Attributes – The RADIUS Client (NetScaler) adds RADIUS attributes to the packet to help the RADIUS Server identify how the user is connecting.
  3. RADIUS Server:
    1. RADIUS Clients configured on RADIUS Server – The RADIUS Server first verifies that the RADIUS Client (NetScaler) is authorized to perform authentication. The NAS IP (NetScaler Source IP) of the RADIUS Access-Request packet is compared to the list of RADIUS Clients configured on the RADIUS Server. If there’s no match, RADIUS does not respond.
    2. Shared secret – The RADIUS server finds the RADIUS Client and looks up the shared secret key. The secret key decrypts the passcode in the RADIUS Access-Request packet.
    3. Verify RADIUS Attributes – RADIUS Server uses the RADIUS Attributes in the Access-Request packet to determine if authentication should be allowed or not.
    4. Authenticate the user – RADIUS authenticates the user. Most RADIUS Server products have a local database of usernames and passwords. Some can authenticate with other authentication providers, like Active Directory.
    5. Access-Accept and Attributes – RADIUS sends back an Access-Accept message. This response message can include RADIUS Attributes, like a user’s group membership.
  4. RADIUS Challenge – RADIUS Servers can send back an Access-Challenge, which asks the user for more information. NetScaler displays the RADIUS-provided Challenge message to the user, and sends back to the RADIUS Server whatever the user entered.
    1. SMS authentication uses RADIUS Challenge. The RADIUS server might send a one-time passcode to the user’s phone via SMS. RADIUS Challenge then prompts the user to enter that passcode.
  5. Extract RADIUS Attributes – NetScaler can be configured to extract the returned RADIUS Attributes and use them for authorization (e.g. AAA Groups).
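The passcode encryption in step 2 is defined in RFC 2865: the password is padded to 16-byte blocks, and each block is XORed with MD5(secret + previous block), seeded with the Request Authenticator. A self-contained sketch (the secret and passcode are made-up demo values):

```python
import hashlib
import os

def radius_obfuscate(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """RFC 2865 User-Password hiding: pad to 16-byte blocks, XOR each block
    with MD5(secret + previous ciphertext block), seeded by the Authenticator."""
    password += b"\x00" * (-len(password) % 16)
    out, prev = b"", authenticator
    for i in range(0, len(password), 16):
        key = hashlib.md5(secret + prev).digest()
        block = bytes(a ^ b for a, b in zip(password[i:i + 16], key))
        out += block
        prev = block
    return out

def radius_deobfuscate(hidden: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """What the RADIUS Server does in step 3.2 after looking up the secret.
    (Stripping trailing nulls is a simplification for this demo.)"""
    out, prev = b"", authenticator
    for i in range(0, len(hidden), 16):
        key = hashlib.md5(secret + prev).digest()
        out += bytes(a ^ b for a, b in zip(hidden[i:i + 16], key))
        prev = hidden[i:i + 16]
    return out.rstrip(b"\x00")

auth = os.urandom(16)  # the Request Authenticator from the Access-Request header
hidden = radius_obfuscate(b"123456", b"shared-secret", auth)
print(radius_deobfuscate(hidden, b"shared-secret", auth))  # b'123456'
```

This is why the secret key entered on NetScaler must exactly match the key on the RADIUS Server: a mismatched key produces garbage when the server reverses the XOR.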

RADIUS Client – RADIUS will not work unless you ask the RADIUS administrator to add NetScaler NSIP (or SNIP if load balancing) as a RADIUS Client.

  • Secret key – The RADIUS administrator then gives you the secret key that was configured for the RADIUS Client. You enter this secret key in NetScaler when configuring RADIUS authentication.

SAML (Security Assertion Markup Language)

SAML uses HTTP Redirects to perform its authentication process. This means that Web Clients that don’t support Redirects (e.g. Citrix Receiver) won’t work with SAML.

SAML SP – The resource (webpage) the user is trying to access is called the SAML SP (Service Provider). No passwords are stored here.

SAML iDP – The authentication provider is called the SAML iDP (Identity Provider). This is where the usernames and passwords are stored.

SAML SP Authentication Process:

  1. User tries to access a NetScaler VIP (NetScaler Gateway, or Load Balancing/AAA) that is configured for SAML SP Authentication.
  2. NetScaler creates a SAML Authentication Request and signs it using its certificate (with private key).
  3. NetScaler sends to the user’s browser the SAML Authentication Request, and an HTTP Redirect (usually 302) that tells the user’s browser to go to the SAML iDP’s authentication Sign-on URL.
  4. The user’s browser redirects to the iDP’s Sign-on URL and gives it the SAML Authentication Request that was provided by the NetScaler.
  5. The SAML iDP verifies that the SAML Authentication Request was signed by the NetScaler’s certificate.
  6. The SAML iDP authenticates the user. This can be a webpage that asks for two-factor authentication. Or it can be Kerberos Single Sign-on. It can be pretty much anything.
  7. The SAML iDP creates a SAML Token (aka SAML Assertion) containing SAML Attributes. At least one of the attributes is Name ID, which usually matches the user’s email address. The SAML iDP can be configured to send additional attributes (e.g. group membership).
  8. The SAML iDP signs the SAML Token using its certificate (with private key).
  9. The SAML iDP sends the SAML Token to the user’s browser and asks the browser to submit it (via a Redirect, or an auto-submitting POST form) to the SAML SP’s Assertion Consumer Service (ACS) URL, which is different from the original URL that the user requested.
  10. The user’s browser redirects to the ACS URL and submits the SAML Token.
  11. The SAML SP verifies that the SAML Token was signed by the SAML iDP’s certificate.
  12. The SAML SP extracts the Name ID (email address) from the SAML Token. Note that the SAML SP does not have the user’s password; it only has the user’s email address.
  13. The SAML SP sends back to the user’s browser a cookie that indicates that the user has now been authenticated.
  14. The SAML SP sends back to the user’s browser a 302 Redirect, which redirects the browser to the original webpage that the user was trying to access in step 1.
  15. The user’s browser submits the cookie to the website. The website uses the cookie to recognize that the user has already been authenticated, and lets the user in.
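In step 3, the SAML Authentication Request typically travels inside the redirect URL itself (the SAML HTTP-Redirect binding): the XML is DEFLATE-compressed, base64-encoded, and URL-encoded into a `SAMLRequest` query parameter. A minimal sketch, with signing omitted and made-up URL/XML values:

```python
import base64
import urllib.parse
import zlib

def redirect_binding_url(idp_sso_url: str, authn_request_xml: str) -> str:
    """SAML HTTP-Redirect binding (step 3): raw-DEFLATE, base64, then
    URL-encode the AuthnRequest into the iDP's Sign-on URL."""
    co = zlib.compressobj(9, zlib.DEFLATED, -15)  # -15 = raw DEFLATE, no zlib header
    raw = co.compress(authn_request_xml.encode()) + co.flush()
    saml_request = base64.b64encode(raw).decode()
    return idp_sso_url + "?" + urllib.parse.urlencode({"SAMLRequest": saml_request})

def decode_saml_request(url: str) -> str:
    """What the iDP does in steps 4-5, before verifying the signature."""
    qs = urllib.parse.parse_qs(urllib.parse.urlsplit(url).query)
    raw = base64.b64decode(qs["SAMLRequest"][0])
    return zlib.decompress(raw, wbits=-15).decode()

xml = '<samlp:AuthnRequest ID="_abc123"/>'
url = redirect_binding_url("https://idp.example/sso", xml)
print(decode_saml_request(url) == xml)  # True
```

The SAML Token returned in step 9 travels the same way in the opposite direction (typically as a base64-encoded `SAMLResponse` form field POSTed to the ACS URL).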

Multiple SAML SPs to one SAML iDP – The SAML iDP could support connections from many different SAML SPs, so the SAML iDP needs some method of determining which SAML SP sent the SAML Authentication Request. One method is to have a unique Sign On URL for each SAML SP. Another method is to require the SAML SP to include an Issuer Name in the SAML Authentication Request. In either case, the SAML iDP looks up the SAML SP’s information to find the SAML SP’s certificate (without private key), and other SP-specific information.

SAML Certificates:

  • SP Certificate – NetScaler uses a certificate to sign the SAML Authentication Request. This NetScaler certificate must be copied to the iDP.
  • iDP Certificate – The iDP uses a certificate to sign the SAML Token. The iDP certificate must be installed on the NetScaler.

Shadow Accounts – The SAML iDP sends the user’s email address to the SAML SP. The SAML SP has its own authentication directory. The email address provided by the SAML iDP is matched with a user account at the SAML SP. Even though the user’s password is never seen by the SAML SP, you still need to create user accounts for each user at the SAML SP. These SAML SP user accounts can have fake passwords. The SAML SP user accounts are sometimes called shadow accounts.

SAML Token can be converted to other authentication methods – It is often desirable to convert a SAML Token to another form of authentication. For example, the SAML SP can use the Name ID from the SAML Token to perform a database lookup to retrieve the user’s authorizations.

  • No passwords – Not having access to passwords limits the authentication options at the SAML SP. Without a password, you can’t authenticate to Active Directory using LDAP or NTLM. It’s also not possible to request a Kerberos ticket for the user since NetScaler doesn’t have the user’s password.
  • SAML Token and SSON to Citrix XenDesktop – Kerberos can use User Certificates to authenticate to Active Directory without a password. Citrix Federated Authentication Service takes advantage of this by requesting a User Certificate for each SAML Name ID, and then using the User Certificate to login to XenDesktop using Kerberos. Each SAML Name ID needs a matching shadow account.

Kerberos

Kerberos authentication process (simplified):

  1. User tries to access a web page that is configured with Negotiate authentication.
  2. To authenticate to the web page, the user must provide a Kerberos Service ticket. The Kerberos Service ticket is requested from a Domain Controller.
    1. The Kerberos Service ticket is limited to the specific Service that the user is trying to access. In Kerberos parlance, the resource the user is trying to access is called the Service Principal Name (SPN). User asks a Domain Controller to give it a ticket for the SPN.
    2. Web Site SPNs are usually named something like HTTP/www.company.com. It looks like a URL, but actually it’s not. There’s only one slash, and there’s no colon. The text before the slash is the service type. The text after the slash is the DNS name of the server running the service that the user is trying to access.
  3. If the user has not already been authenticated with a Domain Controller, then the Domain Controller will prompt the user for username and password. The Domain Controller returns a Ticket Granting Ticket (TGT).
  4. The user presents the TGT to a Domain Controller and asks for a Service Ticket for the Target SPN. The Domain Controller returns a Service Ticket.
  5. The user presents the Service Ticket to the web page the user originally tried to access in step 1. The web page verifies the ticket and lets the user in.
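As noted above, an SPN only looks like a URL. A trivial sketch of how it actually splits – one slash, service class on the left, server DNS name on the right:

```python
def parse_spn(spn: str):
    """An SPN like HTTP/www.company.com is not a URL: there's one slash,
    no colon, and no scheme. Split into service class and host."""
    service_class, _, host = spn.partition("/")
    return service_class, host

print(parse_spn("HTTP/www.company.com"))  # ('HTTP', 'www.company.com')
```

Note that for web services the service class is HTTP even when the site is reached over https; the SPN identifies the service, not the transport.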

The Service and the Domain Controller do not communicate directly with each other. Instead, the Kerberos Client talks to both of them to get and exchange Tickets.

Kerberos Delegation – The Kerberos Service Ticket only works with the Service listed in the Ticket. If that Service needs to talk to another Service on the user’s behalf, this is called Delegation. By default, Kerberos will not allow this Delegation. You can selectively enable Delegation by configuring Kerberos Constrained Delegation in Active Directory.

    • In Active Directory Users & Computers, edit the AD Computer Account for the First Service. On the Delegation tab, specify the Second Service. This allows the first Service to delegate user credentials to the Second Service. Delegation will not be allowed from the First Service to any other Service.

Kerberos Impersonation – If NetScaler has the user’s password (maybe from LDAP authentication), then NetScaler can simply use those credentials to request a Kerberos Service Ticket for the user from a Domain Controller. This is called Kerberos Impersonation.

Kerberos Constrained Delegation – If the NetScaler does not have the user’s password, then NetScaler uses its own AD service account to request a Kerberos Service Ticket for the back-end service. The service account then delegates the user’s account to the back-end service. In other words, this is Kerberos Constrained Delegation. On NetScaler, the service account is called a KCD Account.

  • The KCD Account is just a regular user account in AD.
  • Use setspn.exe to assign a Kerberos SPN to the user account. This action unlocks the Delegation tab in Active Directory Users & Computers.
  • Then use Active Directory Users & Computers to authorize Kerberos Delegation to back-end Services.

Negotiate – Kerberos and NTLM – Web Servers are configured with an authentication protocol called Negotiate (SPNEGO). This means Web Servers prefer that users log in using Kerberos Tickets. If the client machine is not able to provide a Kerberos ticket (usually because the client machine can’t communicate with a Domain Controller), then the Web Server will instead try to do NTLM authentication.

  • NTLM is a challenge-based authentication method. NTLM sends a challenge to the client, and the client uses the user’s Active Directory password to encrypt the challenge. The web server then verifies the encrypted challenge with a Domain Controller.
  • Negotiate on client-side with NTLM Web Server fallback – NetScalers can use Negotiate authentication protocol on the client side (AAA or NetScaler Gateway). Negotiate will prefer Kerberos tickets. If Kerberos tickets are not available, then Negotiate can use NTLM as a fallback mechanism. In the NTLM scenario, NetScaler can be configured to connect to a domain-joined web server for the NTLM challenge process. By using a separate web server for the NTLM challenge, there’s no need to join the NetScaler to the domain.

HTTP

URLs

HTTP URL format: e.g. https://www.corp.com:444/path/page.html?a=1&key=value

  • https:// = the scheme. Essentially, it’s the protocol the browser will use to access the web server. Either http (clear-text) or https (SSL/TLS).
  • www.corp.com = the hostname. It’s the DNS name that the browser will resolve to an IP address. The browser then connects to the IP address using the specified protocol.
  • :444 = port number. If not specified, then it defaults to port 80 or port 443, depending on the protocol. Specifying the port number lets you connect to a non-standard port number, but firewalls might not allow it.
  • /path/page.html = the path to the file that the Browser is requesting.
  • ?a=1&key=value = query parameters to the file. The query clause begins with a ? immediately following the file name. Multiple parameters (key=value pairs) are separated by &. These parameters are typically generated using HTML forms. Query parameters are a method for the HTTP Client to upload a small amount of data to the Web Server. There can be many parameters, or just one.

URLs must be safe encoded (Percent encoded), meaning special characters are replaced by numeric codes (e.g. # is replaced by %23). See https://en.m.wikipedia.org/wiki/Percent-encoding.
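Python’s standard library can take the example URL above apart into exactly these pieces, and demonstrates percent encoding:

```python
from urllib.parse import parse_qs, quote, urlsplit

url = "https://www.corp.com:444/path/page.html?a=1&key=value"
parts = urlsplit(url)
print(parts.scheme)           # https
print(parts.hostname)         # www.corp.com
print(parts.port)             # 444
print(parts.path)             # /path/page.html
print(parse_qs(parts.query))  # {'a': ['1'], 'key': ['value']}

# Percent encoding: special characters become %XX codes
print(quote("file #1.html"))  # file%20%231.html
```

Note that `parse_qs` returns lists of values, because the same parameter name may legally appear more than once in a query string.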

HOST Header – one of the HTTP headers inserted into the HTTP Request Packet is named Host. The value of this header is whatever hostname the user typed into the browser’s address bar. It’s the part of the URL after the scheme (http://), and before the port number (:444) or path (/).

  • Web Servers use the Host Header to determine which website content should be served. This allows a Web Server to host multiple websites on one IP address. In this configuration, if the Host Header is not set correctly, then you might not see the website content you were hoping for. If the Web Server requires the Host Header to be set to a particular DNS Name, and if you enter the IP address into your browser’s address bar, then you won’t see the website.
  • NetScaler Load Balancing Monitors do not include the Host Header by default. If the Web Server requires the Host Header, then you must modify the NetScaler Monitor configuration to specify the Host header.
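For illustration, here is the raw HTTP/1.1 request a client (or a monitor) would need to send; the Host line is what selects the virtual site. The host and path values are made-up examples, and the default-port rule is simplified:

```python
def build_request(method: str, host: str, path: str, port: int = 443) -> str:
    """Minimal raw HTTP/1.1 request. The Host header is mandatory in
    HTTP/1.1 and tells the server which website content to serve.
    (Omitting the port for 80/443 is a simplification of the real rule.)"""
    host_header = host if port in (80, 443) else f"{host}:{port}"
    return (f"{method} {path} HTTP/1.1\r\n"
            f"Host: {host_header}\r\n"
            f"Connection: close\r\n\r\n")

print(build_request("GET", "www.corp.com", "/"))
```

A NetScaler Monitor that omits this header is effectively the same as browsing to the server by IP address: a virtual-hosted server may return the wrong site, or an error.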

HTTP Body – The HTTP Body is the content you are transporting across the network. The HTTP body of an HTTP Request or HTTP Response can contain data, or a file.

  • Data can be HTML Form Parameters, JSON object, XML document, etc. These are detailed below.
  • File can be HTML file, .iso file, .png file, etc.
  • HTTP Body vs HTML Body – HTTP Body and HTML Body are completely different. An HTTP Body can contain an entire HTML file, which has a HTML Header and HTML Body. Or an HTTP Body can contain non-HTML file or data. HTML is just one of the file types that an HTTP Body can transport.

Cookies

Client-side Data Storage – Web Servers sometimes need to store small pieces of data in a user’s web browser. The user’s browser is then required to send the data back to the web server with every HTTP Request.

Set-Cookie – Web Servers add a Set-Cookie header to the HTTP Response. This Response Header contains a list of Cookie Names and Cookie Values.

Cookies are linked to domains – The Web Browser stores the Cookies in a place that is associated with the DNS name (host name) that was used to access the web site. The next time the user submits an HTTP Request to that DNS name, all Cookies associated with that host name are sent in the HTTP Request using the Cookie Request header.

  • Notice that the two headers have different names. HTTP Response has a Set-Cookie header, while HTTP Request has a Cookie header.

Cookie security – Cookies from other domains (other DNS names, other web servers) are not sent. Cookies usually contain sensitive data (e.g. session IDs) and must not be sent to the wrong web server. Hackers will try to steal Cookies so they can impersonate the user.

Cookie lifetimes are either Session Cookies, or Persistent Cookies. Session Cookies are stored in the browser’s memory and are deleted when the browser is closed. Persistent Cookies are stored on disk and available the next time the browser is launched.

  • Expiration date/time – Persistent Cookies are sent from the Web Server with an expiration date/time. This can be an absolute time, or a relative time.

NetScaler Cookie for Load Balancing persistence – NetScaler can use a Cookie to maintain load balancing persistence. The name of the Cookie is configurable. The Cookie lifetime can be Session or Persistent.
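Python’s standard library shows both sides of this exchange – building a Set-Cookie response header on the server, and parsing the Cookie request header that comes back. The cookie names and values are made-up examples:

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie response header
c = SimpleCookie()
c["sessionid"] = "abc123"
c["sessionid"]["path"] = "/"
c["sessionid"]["httponly"] = True   # hide the cookie from JavaScript (theft protection)
c["sessionid"]["max-age"] = 3600    # persistent cookie; omit for a session cookie
print(c.output())  # Set-Cookie: sessionid=abc123; HttpOnly; Max-Age=3600; Path=/

# Client side: parse the Cookie request header the browser sends back.
# Note the header names differ: response = Set-Cookie, request = Cookie.
req = SimpleCookie()
req.load("sessionid=abc123; theme=dark")
print(req["sessionid"].value)  # abc123
```

The Max-Age attribute is the relative-expiration form mentioned above; Expires is the absolute date/time form.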

Web Server Sessions

Web Server Sessions preserve user data for a period of time – When users log into a web site, or if the data entered by a user (e.g. shopping cart) needs to be preserved for a period of time, then a Web Server Session needs to be created for each user.

Web Server Sessions are longer than TCP Connections – Web Server Sessions live much longer than a single TCP Connection, so TCP Connections cannot delineate a session boundary.

Each HTTP Request is singular – There’s nothing built into HTTP to link one HTTP Request to another HTTP Request. Various fields in the HTTP Request can be used to simulate a Web Server Session, but technically, each HTTP Request is completely separate from other HTTP Requests, even if they are from the same user/client.

Server-side Session data, and Client-side Session ID – Web Server Sessions have two components – server-side session data, and some kind of client-side indicator so the web server can link multiple HTTP Requests to the same server-side session.

A Cookie stores the Session ID – On the client-side, a session identifier is usually stored in a Cookie. Every HTTP Request performed by the client includes the Cookie, so the web server can easily associate all of these HTTP Requests with a Server-side session.

Server-side data storage – On the server-side, session data can be stored in several places:

  • Memory of one web server – this method is the easiest, but requires load balancing persistence
  • Multiple web servers accessing a shared memory cache (e.g. memcached)
  • Shared Database – each load balanced web server can pull session data from the database. This is typically slower than memory caches

Load Balancing Persistence and Web Server Sessions – some web servers store session data on the same web server the user initially connected to. If the user connects to a different web server, then the old session data can’t be retrieved, thus causing a new session. When load balancing multiple identical web servers, to ensure the user always connects to the same web server that was initially chosen by the user, configure Persistence on the load balancer.

Persistence Methods – When the user first connects to a VIP, NetScaler uses its load balancing algorithm to select a web server. NetScaler then needs to store the chosen server’s identifier somewhere. Here are common storage methods:

  • Cookie – the chosen server’s identifier is saved on the client in a Cookie. The client includes the persistence Cookie in the next HTTP request, which lets NetScaler send the next HTTP Request to the same web server as before. Pros/Cons:
    • No memory consumption on NetScaler
    • Cookie can expire when the user’s browser is closed
    • Each client gets a different Cookie, even if multiple clients are behind a common proxy.
    • However, not all web clients support Cookies.
  • Source IP – NetScaler records the client’s Source IP into its memory along with the web server it chose using its load balancing algorithm. Pros/Cons:
    • Uses NetScaler Memory
    • If multiple clients are behind a proxy (or common outgoing NAT), then all of these clients go to the same web server. That’s because all of them appear to be coming from one IP address.
    • Works with all web clients.
  • Rule-based persistence – use a NetScaler Policy Expression to extract a portion of the HTTP Request and use that for persistence. Ultimately, it works the same as Source IP, but it helps for proxy scenarios if the proxy includes the Real Client IP in one of the HTTP Request Headers (e.g. X-Forwarded-For).
  • Server Identifier – the HTTP Response from a web server instructs the web client to append a Server ID to every URL request. The NetScaler can match the Server ID in the URL with the web server. XenMobile uses this method.
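The “memory of one web server” storage option above can be sketched in a few lines – an in-memory session table keyed by an unguessable ID that would be sent to the client as a cookie. The field names are made-up examples:

```python
import secrets

# In-memory session store (the "memory of one web server" option).
# Because the data lives only on this server, load balancing persistence
# is required to keep the user on the same server.
SESSIONS = {}

def create_session(username: str) -> str:
    session_id = secrets.token_urlsafe(32)  # unguessable ID, sent to the client as a cookie
    SESSIONS[session_id] = {"user": username, "cart": []}
    return session_id

def lookup_session(session_id: str):
    # Returns None for unknown IDs; the caller should re-authenticate the user
    return SESSIONS.get(session_id)

sid = create_session("jdoe")
lookup_session(sid)["cart"].append("widget")
print(lookup_session(sid))  # {'user': 'jdoe', 'cart': ['widget']}
```

If the next HTTP Request lands on a different server, `lookup_session` returns None there, the cart is lost, and a new session starts – which is exactly the failure that persistence prevents.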

Authentication and Cookie Security – If a web site requires authentication, it would be annoying if the user had to login again with every HTTP Request. Instead, most authenticated web sites return a Cookie, and that Cookie is used to authorize subsequent HTTP Requests from the same user/client/browser.

  • WAF Cookie Protection – Since the Web Session Cookie essentially grants permission, security of this Cookie is paramount. NetScaler Web App Firewall has several protections for Cookies.

HTML Forms

Get Data from user – if the web site developer wants to collect any data from a user (e.g. Search term, account login, shopping cart item quantity, etc.), then the web developer creates HTML code that contains a <form> element.

Form fields – Inside the <form> element are one or more form fields that let users enter data (e.g. Name, Quantity), or let users select an option (drop-down box).

Submit button – The last field is usually a Submit button.

Field names – Each of the fields in the form has its own name.

Client-side validation – When a user clicks Submit, JavaScript on the client side typically checks that the field values were entered correctly. This is more about convenience than security. If a user enters letters into a zip code field, JavaScript can immediately prompt the user for the correct format.

GET and POST – The data is then submitted to the web server using one of two methods: GET or POST.

  • With GET, each of the field names and field values is put in the Query Parameters portion of the URL (e.g. ?field1=value1&field2=value2), which is after the path and file name.
  • With POST, the HTTP Request Method (HTTP Command) is set to POST, and the field names and field values are placed in the Body of the HTTP Request.
  • The POST method is typically more secure. Web Servers often log the full GET request URL, including query parameters, but POST parameters in the body are not normally logged.

Web server validates HTML form data – When the web server receives the HTML form data, the web server must validate the input. Do not rely on client-side Javascript to validate the data. Instead, all HTML form data must be inspected by the web server for SQL Injection, Cross-site Scripting, etc.

  • NetScaler Web App Firewall (WAF) can do this inspection before the form data reaches the web server.
  • WAF can also validate the form fields. For example, NetScaler WAF can ensure that only numeric characters can be entered in a zip code field.
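Server-side validation of the zip code example can be sketched like this – the form dictionary and field name are hypothetical stand-ins for whatever the web framework provides:

```python
import re

def validate_zip(form: dict) -> str:
    """Server-side check of an HTML form field (hypothetical 'zip' field).
    Never rely on client-side JavaScript validation alone."""
    value = form.get("zip", "")
    if not re.fullmatch(r"\d{5}(-\d{4})?", value):
        # Rejects anything that isn't 5 digits (optionally +4),
        # including SQL injection attempts like "75001' OR 1=1"
        raise ValueError("zip must be 5 digits (optionally +4)")
    return value

print(validate_zip({"zip": "75001"}))  # 75001
```

This whitelist approach (define exactly what is allowed) is the same field-format enforcement that NetScaler WAF can apply before the request ever reaches the web server.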

Web App Firewalls for HTML Forms – HTML Forms are the most sensitive feature in any web application. Web Developers must write their web server code in a secure manner. Use features like NetScaler Web App Firewall to provide additional protection.

  • WAF for JSON, XML – Other forms of submitting data to web servers, like JSON objects, XML documents, etc. should also be inspected. NetScaler Web App Firewall can do this too.

AJAX

AJAX – Another method Browsers use to communicate with Web Servers is AJAX (Asynchronous JavaScript and XML).

JavaScript AJAX – JavaScript on the client side can send HTTP Requests to Web Servers. Web Servers send back a response in JSON or XML format.

JSON is JavaScript Object Notation – JavaScript can create objects (similar to hash tables) using JSON notation. Curly braces surround the object. Each element is a “name”:”value” pair. Values can be more JSONs, or arrays of values (including more JSONs), so JSONs can be nested within other JSONs. JSON is very familiar to any JavaScript developer.

JSON vs XML

  • JSON is smaller than XML. XML is marked up with human-readable tags, bloating the size. JSON contains data, curly braces, colons, quotes, and square brackets. That’s it. Very little of it is dedicated to markup so most of it is pure data.
  • Familiarity – Since JavaScript developers already know how to create JSONs, there’s nothing new to learn, unlike XML.

AJAX enables Single Page Applications (SPA) – JavaScript reads the contents of the response and uses it for any purpose. For example, JavaScript can use the data in the response to build a table in the webpage and display it to the user. This allows data on the page to change dynamically without requiring a full page reload.

REST API

Commercial systems have a programmatically-accessible API (Application Programming Interface) that allows programs to control a remote system. Some API commands retrieve information from the system. Other API commands invoke actions (e.g. create object) in the system.

Use HTTP to call API functions – Modern APIs can be activated using the HTTP Protocol. Create a specially-crafted HTTP Request and send it to an HTTP endpoint that is listening for API requests.

SOAP Protocol – Older HTTP-based APIs operate by exchanging XML documents. This is called SOAP (Simple Object Access Protocol). However, XML documents are difficult to construct and parse programmatically, and the XML tags consume bandwidth.

REST API – Another newer HTTP-based architecture is to use all of the HTTP Methods (GET, POST, PUT, DELETE), and exchange JSON documents. JSON is leaner than XML.

  • NetScaler Nitro API is a REST API.

REST is stateless. All information needed to invoke the API must be included in one HTTP Request.

Web Browsers typically only use GET and POST in their HTTP Requests. But there are other HTTP Methods like PUT and DELETE.

Programs other than browsers can use the HTTP protocol – Web Browsers are not the only programs that can make HTTP Requests. Every computer language has some mechanism for creating a HTTP Request and processing the HTTP Response. Non-browser Programs can use the HTTP Methods PUT and DELETE.

REST is HTTP.

  • A REST-capable client is any client that can send HTTP Requests and process HTTP Responses. Some languages/clients have REST-specific functions. Others have only lower level functions for creating raw HTTP Requests.
  • On Linux, use curl to send HTTP Requests to an HTTP-based API.
  • In PowerShell, use Invoke-RestMethod to send an HTTP Request to an HTTP-based API.
  • Inside a browser, use Postman or other REST plug-in to craft HTTP Requests and send them to an HTTP-based API.

To invoke an HTTP REST-based API:

  1. HTTP Request to login – Send an HTTP Request with user credentials. The exact login URL or session creation URL is in the API documentation.
  2. Session Cookie – The REST API server sends back a Session Cookie that can be used for authorization of subsequent REST/HTTP Requests. The REST Client saves the cookie, and adds it to every subsequent REST/HTTP Request.
  3. Read API Documentation – Use the API’s documentation to find the URLs and HTTP Methods (Commands) to invoke the API.
  4. Content-Type – Some REST API Requests require a specific Content-Type to be specified in the HTTP Request Header. Add it to the HTTP Request that you’re creating.
  5. JSON Object in Request – Most REST API Requests require a JSON object to be submitted in the HTTP Body. Use the language’s functions to craft a JSON object that contains the parameters that need to be sent to the API Call.
  6. URL Query portion – Some REST API Requests require parameters to be specified in the query portion of the URL.
  7. Send HTTP Request – Create the full HTTP Request with HTTP Method, URL, URL Parameters, Content-Type Header, Cookie Header, and JSON Body. Send it to the HTTP REST server endpoint.
  8. Process Response, including JSON – The REST API sends back an HTTP 200 success message with a JSON document. Or it sends back an error message, typically with error details in an attached JSON document.
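Steps 3 through 7 amount to assembling one HTTP Request. A sketch using only the standard library – the URL, cookie value, and JSON body fields are made-up placeholders, not a real API:

```python
import json
import urllib.request

# Hypothetical REST call: the endpoint path, cookie, and body fields
# below are invented for illustration (consult the real API's docs)
body = json.dumps({"name": "lb_vserver1", "port": 443}).encode()
req = urllib.request.Request(
    url="https://api.example.com/nitro/v1/config/lbvserver",
    data=body,                                   # step 5: JSON object in the HTTP Body
    method="POST",                               # step 3: HTTP Method from the docs
    headers={
        "Content-Type": "application/json",      # step 4: required Content-Type
        "Cookie": "sessionid=abc123",            # step 2: session cookie from login
    },
)
print(req.get_method())                 # POST
print(req.get_header("Content-type"))   # application/json
# urllib.request.urlopen(req) would send it and return the HTTP Response (step 8)
```

The same request could be sent with `curl` on Linux or `Invoke-RestMethod` in PowerShell; REST is just HTTP, so any HTTP-capable tool works.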

NetScaler VPN Networking

IP Pools

SNIP vs IP Pool (Intranet IPs) – By default, when a NetScaler VPN Tunnel is established, a NetScaler SNIP is used as the Source IP for all traffic that leaves the NetScaler to internal Servers. The internal Servers then simply reply to the SNIP. Instead of all VPN Clients sharing a single SNIP, you can configure IP Pools (aka Intranet IPs), where each VPN Client gets its own IP address.

Use IP Pools if Servers initiate communication to clients – if servers initiate communication to VPN Clients (IP Phones, SCCM, etc.), then each VPN Client needs its own IP address. This won’t work if all VPN Clients are sharing a single SNIP.

Intranet IPs assignment – Intranet IPs can be assigned to the Gateway vServer, which applies to all users connected to that Gateway vServer. Or you can apply a pool of IPs to a AAA Group, which allows you to assign different IP Pools (IP subnets) to different AAA Groups.

IP Pools and Network Firewall – If different Pools for different AAA Groups, then a network firewall can control which destinations can be reached from each of those IP Pools.

Intranet IP Subnet can be brand new – the IP subnets chosen for VPN Clients can be brand new IP Subnets that the NetScaler is not connected to. NetScaler is a router, so there’s no requirement that the IP addresses assigned to the VPN Clients be on one of the NetScaler’s data (VIP/SNIP) interfaces.

Reply traffic from Servers to Intranet IPs – If the Intranet IP Pool is a new IP Subnet, then on the internal network (core router), create a static route with the IP Pool as destination, and a NetScaler SNIP as Next Hop. Any SNIP on the NetScaler can reach any VPN Client IP address.

IP Spillover – if there are no Intranet IPs (IP Pool) available, then a VPN Client can be configured to do one of the following: use the SNIP, or transfer an existing session’s IP. This means that a single user can only have a single Intranet IP from a single client machine.

Split Tunnel

Split Tunnel – by default, all traffic from the VPN Client is sent across the VPN Tunnel. For high security environments, this is usually what you want, so the datacenter security devices can inspect the client traffic. Alternatively, Split Tunnel lets you choose which traffic goes across the tunnel, while all other client traffic goes out the client’s normal network connection (directly to the Internet).

Split Tunnel is enabled in a Session Policy/Profile – the Session Policy/Profile can be bound to the Gateway vServer, which affects all VPN users, or it can be bound to a AAA Group.

Intranet Applications define traffic that goes across the Tunnel – If Split Tunnel is enabled, then you must inform the VPN Client which traffic goes across the Tunnel, and which traffic stays local. Intranet Applications define the subnets and port numbers that go across the Tunnel. The Intranet Applications configuration is downloaded to the VPN Client when the Tunnel is established.

Intranet Applications – Route Summarization – If Split Tunnel is enabled, a typical configuration is to use a summarized address for the Intranet Applications. Ask your network team for the smallest number of network prefixes that matches all internal IP addresses. For example, every private IP address (RFC 1918) can be summarized by three route prefixes. The summarized Intranet Applications can then be assigned to all VPN Clients. Most networking training guides explain route summarization in detail.
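The three RFC 1918 summary prefixes, and the split-tunnel decision they drive, can be sketched in Python (the helper function and sample IPs are illustrative, not NetScaler configuration):

```python
import ipaddress

# The three RFC 1918 prefixes that summarize all private IPv4 space.
# Binding these as Intranet Applications sends all private-destined
# traffic across the tunnel (assumption: the internal network uses
# only RFC 1918 addressing).
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def goes_over_tunnel(dest_ip: str) -> bool:
    """Return True if the destination matches a summarized Intranet Application."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in RFC1918)

print(goes_over_tunnel("10.1.2.3"))   # internal -> across the tunnel
print(goes_over_tunnel("8.8.8.8"))    # public   -> out the client's local connection
```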

Intranet Applications – Specific Destinations – Alternatively, you can define an Intranet Application for every single destination Server IP and Port. Then bind different “specific” Intranet Applications to different users (AAA Groups). Note: this option obviously requires more administrative effort.

Split DNS – If Split Tunnel is enabled, then Split DNS can be set to Remote, Local, or Both. Local means use the DNS servers configured on the Client. Remote means use the DNS Servers defined on the NetScaler. Both will check both sets of DNS Servers.

VPN Authorization

There are three methods of controlling access to internal Servers across the VPN Tunnel – Authorization Policies, Network Firewall (usually with Intranet IPs), and Intranet Applications (Split Tunnel).

Authorization Policies control access no matter how the VPN Tunnel is established. These Policies use NetScaler Policy Expressions to select specific destinations and either Allow or Deny. In NetScaler 11.1 and older, Authorization Policies use Classic Policy Expressions only, which have limited syntax. In NetScaler 12 and newer, Authorization Policies can use Default Syntax Policy Expressions, allowing matching of traffic based on a much broader range of conditions.
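As a rough sketch of how first-match Allow/Deny authorization behaves (the policies, subnets, and helper below are invented for illustration; real Authorization Policies use NetScaler Policy Expressions, not Python):

```python
import ipaddress

# Toy model of VPN authorization: each policy matches a destination
# subnet and optional port, and Allows or Denies. First match wins;
# unmatched traffic is denied. Subnets/ports are illustrative.
policies = [
    ("ALLOW", ipaddress.ip_network("10.10.20.0/24"), 443),  # web farm over HTTPS
    ("DENY",  ipaddress.ip_network("10.0.0.0/8"),    None), # everything else internal
]

def authorize(dest_ip: str, dest_port: int) -> str:
    addr = ipaddress.ip_address(dest_ip)
    for action, net, port in policies:
        if addr in net and (port is None or port == dest_port):
            return action
    return "DENY"   # default deny

print(authorize("10.10.20.5", 443))  # ALLOW
print(authorize("10.10.30.5", 443))  # DENY
```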

Intranet Applications – If Split Tunnel is enabled, then Intranet Applications can be used to limit which traffic goes across the Tunnel. If the Intranet Applications are “specific”, then they essentially perform the same role as Authorization Policies. If the Intranet Applications are “summarized”, then you typically combine them with Authorization Policies.

Network firewall (IP Pools) – If Intranet IPs (IP Pools) are defined, then a network firewall can control access to destinations from the VPN Client IPs. If Intranet IPs are not defined, then the firewall rules apply to the SNIP, which means every VPN Client has the same firewall rules.

VPN Tunnel Summary

In Summary, to send traffic across a VPN Tunnel to internal servers, the following must happen:

  1. If Split Tunnel is enabled, then Intranet Applications identify the traffic that goes over the VPN tunnel, based on Destination IP/Port.
  2. Authorization Policies define what traffic is allowed to exit the VPN Tunnel to the internal network.
  3. Static Routes for internal subnets – to send traffic to a server, the NetScaler needs a route to the destination IP. For VPN, NetScaler is usually connected to both DMZ and Internal, with the default route (default gateway) on the DMZ. To reach remote internal subnets, you need static routes.
  4. Network Firewall must allow the traffic from the VPN Client IP – either SNIP, or Intranet IP (IP Pool).
  5. Reply traffic – If the VPN Client is assigned an IP address (Intranet IPs aka IP Pool), then server reply traffic needs to route to a NetScaler SNIP. On the internal network, create a static route with the IP Pool as destination, and a NetScaler SNIP as Next Hop.

PXE

Network Boot and PXE

Network Boot – Network Boot allows machines to download a bootstrap from a TFTP server while the machine is still in BIOS boot mode. No Operating System needed on the local hard drives.

PXE (Pre-boot Execution Environment) – PXE is a mechanism for Network Boot machines to discover the location of the bootstrap file. PXE is based on the DHCP protocol.

  1. Get IP from DHCP – Before a machine can perform a Network Boot, it needs an IP address, which is provided by DHCP.
  2. Discover TFTP Server address – Then the machine needs to discover the IP address of the TFTP Server, and the name of the bootstrap file that should be downloaded from the TFTP Server. PXE uses DHCP Protocol to perform this discovery.

Network Boot without PXE – You can also Network Boot without PXE by booting from an ISO file or local hard drive that has just enough code on it to get the rest of the bootstrap from the TFTP server. DHCP is still usually used to get an IP address, but the IP address of the TFTP server is usually burned into this locally accessible code.

NICs and PXE – Network cards (NICs) need Network Boot capability built into their BIOS. Almost every NIC, including virtual machine NICs, has this capability. A notable exception is Hyper-V Synthetic NICs; Hyper-V Legacy NIC can Network Boot, but Hyper-V Synthetic NICs cannot.

PXE works as follows:

  1. Boot from network – A machine’s BIOS is configured to boot from the Network.
  2. DHCP Request to get IP – The NIC performs a DHCP Request to get an IP address.
  3. PXE Request to get TFTP info – The NIC performs a PXE Request to get the TFTP IP address and file name.
  4. Download from TFTP – The NIC downloads the bootstrap file from the TFTP server and runs it.
  5. Run the bootstrap – The bootstrap file usually downloads additional files from a server machine (e.g. Citrix Provisioning Server) and runs them.

PXE vs DHCP

DHCP Options 66 and 67 – During step 2, if the NIC receives DHCP Options 66 and 67, which contain the TFTP Server’s IP address and file name, then there’s no need to perform step 3.
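A minimal sketch of how a PXE client reads the DHCP options field to find Options 66 and 67 (the option bytes and parser below are illustrative and ignore pad options; ARDBP32.BIN is a PvS-style bootstrap file name):

```python
# Minimal parser for the DHCP options field (the bytes after the magic
# cookie). Each option is: 1-byte code, 1-byte length, then the value.
# Option 66 carries the TFTP server and 67 the bootstrap file name.
def parse_dhcp_options(data: bytes) -> dict:
    opts, i = {}, 0
    while i < len(data) and data[i] != 255:   # 255 = End option
        code, length = data[i], data[i + 1]
        opts[code] = data[i + 2:i + 2 + length]
        i += 2 + length
    return opts

# Illustrative option bytes: option 66 = "192.0.2.10", option 67 = "ARDBP32.BIN"
raw = bytes([66, 10]) + b"192.0.2.10" + bytes([67, 11]) + b"ARDBP32.BIN" + bytes([255])
opts = parse_dhcp_options(raw)
print(opts[66].decode())  # TFTP server address
print(opts[67].decode())  # bootstrap file name
```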

PXE Request to UDP 4011 – During step 3, the NIC performs another DHCP Request, but this time on port UDP 4011. A PXE Service listening on UDP 4011 responds with DHCP Options 66/67.

DHCP and PXE are different port numbers – A PXE Service can run on the same server that is running a DHCP Service because they listen on different port numbers. But typically, PXE Service and DHCP Service are on different servers.

PXE = DHCP Protocol – PXE Service uses the DHCP Protocol, which means the NIC performs a Layer 2 Broadcast, which does not cross a router. Many routers do not forward UDP 4011 to a remote PXE Service.

PXE Request does not cross routers – Thus Step 3 usually only works if the Network Boot client and the PXE Service are on the same subnet. If they are on different subnets, then use DHCP Options 66/67 instead.

TFTP Redundancy

DHCP Option 66 can only point to a single TFTP Server IP Address. You usually want redundancy.

NetScaler Load Balancing of TFTP – Use NetScaler to load balance two or more TFTP Servers and configure DHCP Option 66 to point to the NetScaler Load Balancing VIP.

DNS Round-Robin – Option 66 can point to a DNS Round Robin-enabled DNS name, where the single DNS name resolves to both TFTP Servers’ IP addresses. This assumes the DHCP Client receives DNS Server information from the DHCP Server.
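DNS Round Robin can be sketched as the DNS server rotating the record order in successive responses (server IPs below are illustrative TEST-NET addresses):

```python
# Sketch of DNS round-robin: the DNS server rotates the order of A
# records in each successive response, so clients spread across the
# TFTP servers. Clients typically use the first record returned.
tftp_servers = ["192.0.2.11", "192.0.2.12"]

def round_robin_responses(records, n):
    """Yield n DNS responses, rotating the record order each time."""
    for i in range(n):
        k = i % len(records)
        yield records[k:] + records[:k]

for resp in round_robin_responses(tftp_servers, 3):
    print(resp[0])   # alternates between the two TFTP servers
```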

Separate Option 66 configured on each DHCP Server – If you have multiple DHCP Servers, each DHCP Server can send back a different Option 66 TFTP Server IP address.

Network Boot Clients and PXE on same subnet – If Network Boot clients and PXE Services are on the same subnet, then each PXE Service can send back a different TFTP Server IP address. Either PXE Service can respond to the PXE Request, so if one is down, the other will respond.

Citrix Provisioning Services and PXE

How Citrix Provisioning Services (PvS) uses PXE:

  • TFTP Server is installed on each PvS server.
  • PXE Service can be enabled on each PvS server.
  • PvS can create a Boot ISO that has the TFTP Server IP addresses built into it.
    • Note: The Boot ISO uses a different TFTP Service than normal PXE. Normal TFTP is UDP 69, while the Boot ISO connects to a TFTP service called the Two-stage Boot TFTP Service, which listens on UDP 6969.
  • PvS can burn boot code into a hard disk attached to a Network Boot machine. This works the same as the Boot ISO method.
  • A DHCP Service (typically Microsoft) can be installed on the PvS servers. DHCP Service and PXE Service are two different things.

NetScaler GSLB

GSLB Overview

GSLB is only useful if a single DNS name can resolve to multiple IP addresses. If this is not the case, then it would be easier to just leave the single DNS name / single IP on regular DNS.

Limitations of DNS Servers when a single DNS name resolves to multiple IP addresses:

  • The DNS Server doesn’t care if the IP address is reachable or not. There’s no monitoring.
  • The DNS Server doesn’t know which IP address is closest to the user. There’s no proximity load balancing.
  • The DNS Server can’t do site persistence, so you could get a different IP address every time you perform the DNS Query.

NetScaler GSLB is a DNS Technology that addresses these limitations. When you enable NetScaler GSLB, you enable your NetScaler appliances to resolve DNS Names. In other words, NetScaler appliances are essentially DNS Servers.

DNS names must be delegated to NetScaler DNS listeners. There are a few methods of doing this delegation:

  • In the existing DNS zone, delegate specific DNS names to NetScaler DNS. Each DNS name needs a separate delegation.
  • In the existing DNS zone, delegate an entire sub-zone (e.g. gslb.company.com) to NetScaler DNS. Then create CNAMEs for each DNS name, aliased to an equivalent DNS name in the sub-zone. For example: www.company.com is CNAME’d to www.gslb.company.com. Since the gslb.company.com sub-zone is delegated to NetScaler DNS, NetScaler will resolve this DNS name.
  • Move the entire existing DNS zone to NetScaler. Note: NetScaler was never designed as a full-fledged DNS Service, so you might find limitations when choosing this option.
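The CNAME-into-delegated-sub-zone method can be sketched as a toy resolver (all zone contents, names, and IPs below are illustrative):

```python
# Toy resolution sketch: the corporate DNS zone CNAMEs names into a
# sub-zone (gslb.company.com) that is delegated to NetScaler DNS.
# NetScaler then answers authoritatively for the sub-zone name.
corporate_zone = {"www.company.com": ("CNAME", "www.gslb.company.com")}
netscaler_gslb = {"www.gslb.company.com": ("A", "203.0.113.10")}

def resolve(name: str) -> str:
    if name in corporate_zone:
        rtype, target = corporate_zone[name]
        if rtype == "CNAME":
            return resolve(target)       # follow the alias into the sub-zone
    # the sub-zone is delegated: NetScaler GSLB provides the answer
    return netscaler_gslb[name][1]

print(resolve("www.company.com"))   # answered by NetScaler via the CNAME
```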

For redundancy, configure multiple NetScaler appliances/pairs for DNS delegation – At least two NetScaler appliances/pairs should be configured for DNS services. Delegate DNS names or subzones to two different NetScaler appliances/pairs.

  • GSLB Configuration must be identical – Since it’s not possible to control which NetScaler appliance/pair resolves the DNS name, all NetScaler appliance/pairs must have identical GSLB configuration, so the DNS responses are always the same no matter which NetScaler appliance/pair resolves the DNS name.
  • GSLB Sync – One NetScaler appliance/pair can replicate its GSLB configuration to another NetScaler appliance/pair. The sync communication protocol is SSH.
  • GSLB Metric Exchange Protocol (MEP) – multiple GSLB-enabled appliances/pairs are configured to communicate with each other using a proprietary Metric Exchange Protocol. This communication is used for several purposes: Dynamic Proximity load balancing, Monitoring, and Persistence. These are detailed below.

Active/Passive – NetScaler GSLB can do active/passive, where the Active IP is given out if it is UP, and the Passive IP is given out if the Active IP is down. NetScaler needs to monitor the Active IP to verify it is reachable.

Active/Active – NetScaler GSLB can do active/active, where it load balances users across multiple Active IPs. A common load balancing algorithm is based on the client’s proximity to the Active IP; give out the Active IP that is closest to the user.

  • Active/Active can be combined with Passive so that if all Active IPs are down, then give out the Passive IP.

Proximity Load Balancing Methods – there are two Proximity Methods:

  • Static Proximity uses a location database stored on the appliance. The Source IP of the DNS Query is looked up in the location database to determine coordinates. The Active IPs are also looked up in the location database to determine coordinates. The coordinates are compared to determine which Active IP is closer to the Source IP. NetScaler has a built-in static proximity location database that you can use.
  • Dynamic Proximity asks each NetScaler (through MEP) to ping the Source IP of the DNS Query to determine which ping is fastest.
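Static Proximity can be sketched as a coordinate lookup plus a distance comparison (the coordinates, IPs, and flat-plane distance below are illustrative; the real location database and distance math differ):

```python
import math

# Static proximity sketch: look up coordinates for the DNS Query's
# Source IP and for each Active IP in a location database, then hand
# out the closest Active IP. All entries are invented for illustration.
location_db = {
    "198.51.100.5": (40.7, -74.0),   # client's LDNS (roughly New York)
    "203.0.113.10": (37.8, -122.4),  # Active IP, US West site
    "203.0.113.20": (40.7, -73.9),   # Active IP, US East site
}

def closest_active_ip(ldns_ip, active_ips):
    origin = location_db[ldns_ip]
    # crude flat-plane distance; real implementations use great-circle math
    return min(active_ips, key=lambda ip: math.dist(origin, location_db[ip]))

print(closest_active_ip("198.51.100.5", ["203.0.113.10", "203.0.113.20"]))
```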

Recursive DNS and Client IP – DNS Clients use Recursive DNS Servers to resolve DNS names. The Source IP of a DNS Query sent to NetScaler is the Recursive DNS Server’s IP Address, and not the actual DNS Client’s IP Address. Thus all proximity calculations are based on the IP address of the Recursive DNS Server and not the actual client. The Recursive DNS Servers are sometimes called Local DNS Servers (LDNS).

  • ECS (EDNS-Client-Subnet) – Recursive DNS Servers can use the EDNS0 field on a DNS Request to include the client’s subnet. GSLB can then perform proximity calculations against ECS instead of against the Recursive DNS Server’s IP Address, thus improving accuracy. ECS functionality for GSLB is available in NetScaler 11.1 and newer.

GSLB Monitoring – GSLB has two methods of determining if an Active IP is up or not:

  • MEP Monitoring – If the Active IP is a VIP on one of the NetScalers connected by MEP, then use MEP to ask the NetScaler for the UP/DOWN status of the VIP.
  • Use regular Load Balancing Monitors – bind Load Balancing Monitors to the GSLB Service (Active IP). Monitors bound to a GSLB Service override MEP monitoring.

Monitoring of Internet circuits – If your Active IPs are Public IPs, then you don’t want to give out a Public IP if the IP is not reachable because the Internet circuit is down. There are a couple methods of monitoring the accessibility of an Internet circuit:

  • Route MEP through the Internet – If the remote site’s Internet goes down, then MEP goes down, and the Active IPs in that remote site go down.
  • Monitor a site-specific public IP address – Attach a monitor to the GSLB Service that monitors a public IP in the remote site. You probably have to specify the public IP inside the monitor configuration.
  • Monitor local Internet? – If the appliance’s local Internet goes down, then it’s probably not a problem, because DNS Requests/Queries aren’t making it to this appliance anyways. It’s more important for the other NetScalers to determine Internet accessibility in this datacenter.

GSLB DNS TTL – The TTL (time-to-live) for GSLB DNS Responses is usually 5 seconds. Thus every 5 seconds, a DNS Client needs to resolve the GSLB DNS Name again. This allows for quick changes if an Active IP address is no longer reachable.

  • Browsers have their own DNS cache that ignores the DNS TTL. For example, Internet Explorer tends to cache DNS responses for 30 minutes. In a GSLB failover scenario (Active IP is down), users must close all IE windows to clear the browser’s DNS cache. See Microsoft 263558 How Internet Explorer uses the cache for DNS host entries.
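The effect of the 5-second TTL can be sketched with a toy client-side DNS cache (resolve() stands in for a real DNS query; the name and IP are illustrative):

```python
# Sketch of a DNS client honoring the 5-second GSLB TTL: a cached
# answer is reused until the TTL expires, then the name is queried
# again. query_count tracks how often the GSLB appliance is asked.
TTL = 5
cache = {}
query_count = 0

def resolve(name):
    global query_count
    query_count += 1
    return "203.0.113.10"   # illustrative GSLB answer

def lookup(name, now):
    entry = cache.get(name)
    if entry and now < entry[1]:
        return entry[0]                 # still fresh: no new DNS query
    ip = resolve(name)                  # expired or missing: query again
    cache[name] = (ip, now + TTL)
    return ip

lookup("www.company.com", now=0)   # first query hits GSLB
lookup("www.company.com", now=3)   # served from cache
lookup("www.company.com", now=6)   # TTL expired, GSLB queried again
print(query_count)  # 2
```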

Site Persistence – because of the low TTL, it’s quite possible that a new DNS Query can return a different IP than the initial DNS Query. Some applications require you to always get the same IP address for a period of time. This is similar to Load Balancing Persistence needed for Web Sessions.

GSLB Site Persistence – GSLB has three methods of Site Persistence:

  • Source IP – records a mapping of the DNS Query’s Source IP to the GSLB Active IP in the NetScaler’s memory. If the Source IP is the Recursive DNS Server, then many clients might end up with the same Active IP response. Accordingly, Source IP persistence is usually not recommended unless Cookie Persistence is not an option. Note: ECS in NetScaler 11.1 can be used for Source IP persistence.
  • Cookie Persistence – For Active IPs that are HTTP VIPs on one of the NetScaler appliance/pairs connected by MEP, the first HTTP Response from the NetScaler VIP will include a cookie indicating which GSLB Site the Response came from. The HTTP Client will include this Site Cookie in the next HTTP Request to the VIP. If the DNS TTL expires (5 seconds), then DNS Query must be performed again. If the client gets a different IP address in a different GSLB Site, then the Site Cookie sent to that NetScaler will be for a different GSLB Site than the VIP it’s now trying to access. NetScaler has two options for getting the HTTP Request from the wrong VIP to the correct VIP:
    • Redirect – redirect the user to a different DNS name that is site-specific (a Site Prefix is added to the original DNS name). This requires the certificate to match the original GSLB-enabled DNS name, plus the new site-specific DNS name.
    • Proxy – proxy the HTTP Request to the correct GSLB Site. This means that NetScaler in the wrong GSLB Site must be able to forward the HTTP Request to the VIP in the correct GSLB Site.
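The Redirect vs. Proxy decision can be sketched as follows (the site names and return strings are invented for illustration):

```python
# Sketch of GSLB Site Persistence handling: if the Site Cookie in an
# incoming HTTP Request names a different GSLB site than the VIP that
# received it, the NetScaler either redirects the browser to a
# site-specific DNS name, or proxies the request to the correct site.
LOCAL_SITE = "east"

def handle_request(site_cookie, mode="redirect"):
    if site_cookie is None or site_cookie == LOCAL_SITE:
        return "serve locally"
    if mode == "redirect":
        # e.g. send the browser to east.www.company.com (site prefix)
        return f"302 redirect to {site_cookie} site-specific DNS name"
    return f"proxy request to {site_cookie} site VIP"

print(handle_request("east"))            # serve locally
print(handle_request("west"))            # redirect to the correct site
print(handle_request("west", "proxy"))   # proxy to the correct site
```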

GSLB is not in the data path – GSLB is nothing more than DNS: it resolves DNS names to IP addresses. Once the Client has the IP address, it creates a TCP connection directly to that IP address, and GSLB is no longer involved.

  • GSLB Site Persistence cookies – since GSLB = DNS, and not HTTP, it’s not possible to include an HTTP Cookie in the DNS Response. The HTTP Cookie isn’t added until the HTTP Client makes its first request to an HTTP/SSL VIP on one of the NetScaler appliances/pairs that is participating in GSLB.

Other GSLB Use Cases

GSLB and Multiple Internet Circuits – a common use case for GSLB is if you have multiple Internet circuits connected to a single datacenter and each Internet circuit has a different public IP subnet. In this scenario, you have one DNS name, and multiple public IP addresses, which is exactly the scenario that GSLB is designed for.

  • Local Internet Circuit Monitoring – GSLB Services need a monitor that can determine if the local Internet is up or not. You don’t want to give out a Public IP on a particular Internet circuit if the local Internet circuit is down. You typically configure the GSLB Monitor to ping a circuit-specific IP address (router).

Internal GSLB – GSLB can also be used internally to give out internal IPs to DNS queries. However, there are a couple differences when compared to public IPs:

  • Private IPs are not in the location database. If doing static proximity load balancing, you must manually add each internal subnet to the location database.
  • Internal IPs are not affected by Internet outage. Thus GSLB monitoring for internal Active IPs is usually configured differently than GSLB monitoring for public Active IPs.

Mix internal and public GSLB on the same appliance? – Since internal and public have different GSLB monitoring configurations, you need separate GSLB configurations for internal and public. However, you can’t assign the same DNS name to two different GSLB configurations. Here are three options:

  • Use different DNS names for public and internal.
  • Don’t use the same NetScaler for both public and internal.
  • Configure internal DNS to CNAME to a different internal DNS name.

NetScaler Essential Concepts: Part 1 – Request-Response, HTTP Basics, and Networking

Last Modified: Jun 25, 2017 @ 2:23 pm

Navigation

Introduction

Many NetScalers are managed by server admins and/or security people who do not have extensive networking experience. This topic will introduce you to important networking concepts to aid you in successful configuration of NetScalers. Most of the following concepts apply to all networks, but this topic will take a NetScaler perspective.

The content is intended to be introductory only. Search Google for more detail on each topic.

Request-Response

Request-Response Overview

Request/Response – fundamentally, a Client sends a Request to a Server. The Server processes the Request, and sends back a successful Response, or sends back an Error. Request-Response describes almost all client-server networking.

Clients send Requests – For NetScaler, Clients are usually web browsers. But it can be any client-side program that requests something from a server.

Servers Respond to Requests – For NetScaler, Servers are usually web servers. These machines receive HTTP requests from clients, perform the HTTP Method (command) contained in the request, and send back the response.

What’s in a Request?

Requests are sent to Web Servers using the HTTP protocol – Web Browsers use the HTTP protocol to send Requests to Web Servers. Web Servers use the HTTP protocol to send Responses back to Web Browsers.

Protocol – A protocol defines a vocabulary for how machines communicate with each other. Since web browsers and web servers use the same protocol, they can understand each other.

HTTP is an OSI Layer 7 protocol – HTTP is defined by the OSI Model as a Layer 7, or application layer, protocol. Layer 7 protocols run on top of (encapsulated in) other lower layer protocols, as detailed later.

HTTP Request Commands – HTTP Requests contain commands for the web server. The web server is intended to carry out the requested command. In the HTTP Protocol, Request Commands are also known as Request Methods.

HTTP GET Method – The most common Command in an HTTP Request is GET. This Command asks the web server to send back a file. In other words, web servers are essentially nothing more than file servers.

  • Additional HTTP Request Commands/Methods beyond GET will be detailed in Part 2.

HTTP Path – attached to the GET Command is the path to the requested file. Web servers can host thousands of files. The client needs some method of requesting a particular file. In HTTP, the format is something like /directory/directory/file.html. On NetScaler, you can access the HTTP path in a policy expression by entering HTTP.REQ.URL.PATH.

  • More info on URLs will be provided later in Part 2
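Here’s what a raw HTTP/1.1 GET Request looks like on the wire (host and path are illustrative); for this request, NetScaler’s HTTP.REQ.URL.PATH would evaluate to /docs/index.html:

```python
# A raw HTTP/1.1 GET request as a browser would send it. The request
# line carries the Method (GET) and the Path to the requested file;
# the Host header names the web server. Values are illustrative.
host = "www.example.com"
path = "/docs/index.html"

request = (
    f"GET {path} HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"           # blank line ends the request headers
)
print(request)
```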

Addresses Overview

Unique addresses – Every machine (including clients and servers) has at least one address. Addresses are unique across the whole Internet; only one machine can own a particular address. If you have two machines with the same address, which machine receives the Request or Response?

Requests are sent to a Destination Address – when the client sends a request to a web server, it sends it to the server’s address. This is similar to email: you enter the address of the recipient. The server’s address is put in the Destination Address field of the Request Packet.

  • Requests are placed on a Network, which gets it to the destination – The client puts the Request Packet on the network. The network uses the Destination Address in the packet to get the packet to the web server. This process is detailed later.

Web Servers reply to the Source Address – when the Request Packet is put on the network, the client machine inserts its own address as the Source Address. The web server receives the Request and performs its processing. The web server then needs to send the Response back to the Client. It extracts the Source Address from the Request Packet, and puts that in the Destination Address of the Response Packet. If the original Source Address is wrong or missing, then the response will never make it back to the client.

  • Sometimes, Requests get to Servers successfully, but Responses fail to come back – If you don’t receive a Response to your Request, then either the Request didn’t make it to the Server, or the Response never made it from the Server back to the Client. The key point is that there are two communication paths: the first is from Client to Server, and the second is from Server to Client. Either one of those paths could fail.

Numeric-based addresses – All network addresses are ultimately numeric, because that’s the language that machines understand. Network packets contain Source and Destination addresses in numeric form. Routers and other networking equipment read the numeric addresses, perform a table lookup to find the next hop to reach the destination, and quickly place the packet on the next interface to reach the destination. This is much quicker if addresses are numbers instead of words.

  • IP Addresses are one type of address – Different OSI layers have different addresses. Layer 3 IP Addresses are how the network (Internet) gets the packet from the Source to the Destination and back again. Clients and Servers have unique IP Addresses. Layer 3 networking will be detailed later.
  • IP Address format – Each IP address is four numbers separated by three periods (e.g. 216.58.194.132). Each of the four numbers must be in the range from 0 to 255. Most network training guides cover IP addressing in excruciating detail so I won’t repeat it here.

Human-readable addresses – When a human enters the destination address of a Web Server, humans much prefer to enter words instead of numbers. So there needs to be a method to convert word-based addresses into numeric-based addresses. This method is called DNS (Domain Name System), which will be detailed later.

Web Servers and File Transfer

Web Servers are File Servers – essentially, Web Servers are not much more than file servers. A Web Client requests the Web Server to send it a file.

Web Clients use the HTTP Protocol to download files from a Web Server.

Web Clients are responsible for doing something meaningful with the files downloaded from Web Servers – The files downloaded from a Web Server can be: displayed to the user, processed by a program, or stored.

  • Web Browsers – Web Browsers are a type of Web Client that usually want to display the files that are downloaded from Web Servers. If the file contains HTML tags, then the Browser will render the HTML tags and display them to the user.
    • Web Browsers are sometimes called User Agents.
  • API Web Clients – Web Clients can use an HTTP-based API to download data files from a web server. These data files are typically processed by a client-side script or program, and aren’t displayed directly to the user.
  • Downloaders – some Web Clients are simply Downloaders, meaning all they do is use HTTP to download files and store them on the hard drive. Later, the user can do something with those downloaded files.

Web Server and Web Client Scripting

Web Server Script Processing – web servers can do more than just file serving: they can also run server-side scripts that dynamically modify the files before the files are downloaded to the web client.

Web Server Script Languages – different web server programs support different server-side script languages. These server-side script languages include: Java, ASP.NET, Ruby, PHP, Node.js, etc.

  • Web Server Data – Server-side scripts use data to dynamically modify the HTML pages. The data can be retrieved from a database. And the data can be provided by the Web Client.

Web Browser Scripting – all Web Browsers use JavaScript for client-side scripting. Client-side scripts (JavaScript) add animations and other dynamic features to web pages.

  • Web Browser Plug-ins – Additional client-side scripting languages can be added to web browsers by installing plug-ins, like Flash and Java. But today these plug-ins are becoming more rare because JavaScript can do almost everything that Flash and Java can do.

Other Client-side programs – Non-browser Client-side programs can use any language (including PowerShell) that supports sending HTTP Requests and processing the HTTP Responses.

Server Services and Server Port Numbers

Web Server Software – there are many web server programs like IIS, Apache, NGINX, Ember (Node.js), WebLogic, etc. Some are built into the operating system (e.g. IIS is built into Windows Server), others must be downloaded and installed.

Web Server Software runs as a Service – The Web Server Software installation process creates a Service (or UNIX/Linux Daemon) that launches automatically every time the Server reboots. Services can be stopped and restarted. Server admins should be familiar with Server Services.

Servers can run multiple Services at the same time – A single Server can run many Services at the same time: an Email Server Service, an FTP Server Service, an SSH Server Service, a Web Server Service, etc. There needs to be some way for the Client to tell the Server that the Request is to be sent to the Web Server Service and not to the SSH Service.

Services listen on a Port Number – when the Web Server Service starts, it begins listening for requests on a particular Port Number (typically port 80 for unencrypted HTTP traffic, and port 443 for encrypted SSL/TLS traffic). Other Services listen on different port numbers. It’s not possible for two Services to listen on the same port number.

Clients send packets to a Destination Port Number – When a Client wants to send an HTTP Request to a Web Server Service, it needs to add the Destination Port Number to the packet. If you open a browser and type a DNS name into the browser’s address bar, by default, the browser will send the packet to Port 80, which is usually the port number that Web Server Services are listening on.

Client Programs and Client Ports

Multiple Client Programs – multiple programs can be running at the same time on a single Client; for example: Outlook, Internet Explorer, Chrome, Slack, etc. When the Response is sent from the Server back to the Client, which client-side program should receive the Response?

Client Ephemeral Ports – whenever a client program sends a request to a Server, the operating system assigns a random port number between 1024 and 65535 to the client process. The range of ephemeral port numbers varies for different client operating systems.

Servers send Responses to the Client’s Ephemeral Port – The Server sends the Response to the Client’s Ephemeral Port, also known as the Source Port. This is how a client NIC matches the Response with the client program that initiated it.

Source Port in Network Packet – In order for a Server to know the Client’s Ephemeral Port, the Source Port number must be included in the Request packet.

Each Client Request can use a different Ephemeral Port – A client program can send multiple requests to multiple server machines, and each of these outstanding requests usually has a unique Client Ephemeral port.

Summary of the Network Packet Fields discussed so far – In order for Packets to reach the Server Service and return to the Client Program, every network packet must contain the following fields:

  • Destination Address – the Server’s IP address
  • Destination Port – port 80 for Web Server Services
  • Source Address – the Client’s IP address
  • Source Port – the ephemeral port assigned by the operating system
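You can observe all four fields with a pair of local sockets (a sketch; the operating system assigns the ephemeral Source Port when the client connects):

```python
import socket

# Sketch: connect two local sockets to observe the four fields every
# packet carries: Source/Destination Address and Source/Destination
# Port. The OS picks the client's ephemeral Source Port automatically.
server = socket.socket()
server.bind(("127.0.0.1", 0))            # port 0 = let the OS choose
server.listen(1)
dest_addr, dest_port = server.getsockname()

client = socket.socket()
client.connect((dest_addr, dest_port))
src_addr, src_port = client.getsockname()  # OS-assigned ephemeral port

print(f"Destination: {dest_addr}:{dest_port}")
print(f"Source:      {src_addr}:{src_port}")
client.close()
server.close()
```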

Sessions

Sessions Overview

Sessions and the OSI network model – A session is a longer-lived connection between two endpoints. Each layer of the OSI network model has a different conception of sessions. (Note: the OSI model is detailed in every network training book/video/class).

  • Layer 4 – TCP Connection
  • Layer 6 – SSL/TLS Session
  • Layer 7 – HTTP Session
  • Web Server Session – doesn’t map to the OSI Model. Think “shopping carts”

Higher layer sessions require lower layer sessions – Sessions at higher layers require Sessions (or Connections) at lower layers to be established first. For example, HTTP Requests can’t be sent unless a TCP Connection is established first.

A single Server Service can handle Requests from multiple Clients at the same time – When a Client connects to a Service’s Port Number, the Server Service creates a session for the Client IP and Client Port Number. Each combination of Client IP and Client Port Number is a different session.

Session duration – Sessions at higher layers can live beyond a single lower layer session. For example, a Web Server Session might exist for days, while each HTTP Request might only live for a few seconds.

Session multiplexing – A single lower layer session might be used for multiple higher level sessions. For example, NetScaler appliances multiplex multiple HTTP Requests onto a single TCP Connection.

Network Sessions

Application Data can exceed the maximum size of a Network Packet – Requests and Responses (especially responses) can be too big for a single packet. Thus the Request and/or Response must be broken up into multiple packets. These multiple packets must then be reassembled after they arrive at the destination.

Packets can arrive out of Order – when a Request or Response is broken into multiple packets, the destination needs to reassemble the packets in the correct order. Each packet contains a Sequence Number. The first packet might have Sequence Number 1, while the second packet might have Sequence Number 2, etc. These Sequence Numbers are used to reassemble the packets in the correct order.

Packet Loss – Some packets might not make it to the destination machine. TCP uses the Sequence Numbers to determine if it received all of the packets from the source machine. If one of the sequences is missing, then TCP asks the source to resend the packet. Packet resend is also known as retransmission.

TCP and UDP Overview – there are two Layer 4 protocols: TCP and UDP. TCP handles many of the Network Session services (reassembly, retransmission, etc.) mentioned above. UDP does not provide any of these services, and instead requires higher layer protocols to handle them.

TCP Port Numbers and UDP Port Numbers are different – Each Layer 4 protocol has its own set of port numbers. TCP port numbers are different from UDP port numbers. A Server Service listening on TCP 80 does not mean it is listening on UDP 80. When talking about port numbers, you must indicate if the port number is TCP or UDP, especially when asking firewall teams to open ports. Most of the common ports are TCP, but some (e.g. Voice) are UDP.
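
The separation of the two port namespaces can be sketched with Python sockets: binding a TCP socket to a port number does not claim the same UDP port number. This is a minimal loopback illustration, not production code.

```python
import socket

# Sketch: TCP port numbers and UDP port numbers are separate namespaces.
# Binding TCP port N does not claim UDP port N; both can be bound at once.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.bind(("127.0.0.1", 0))          # 0 = let the OS pick a free port
port = tcp.getsockname()[1]

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("127.0.0.1", port))       # same number, different Layer 4 protocol
print("TCP and UDP both bound to port", port)
tcp.close(); udp.close()
```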

TCP Protocol (Layer 4)

TCP Three-way handshake – Before two machines can communicate using TCP, a three-way handshake must be performed:

  1. The TCP Client initiates the TCP connection by sending a TCP SYN packet (connection request) to the Server Service Port Number.
  2. The Server creates a TCP Session in its memory, and sends a SYN+ACK packet (acknowledgement) back to the TCP client.
  3. The TCP Client receives the SYN+ACK packet and then sends an ACK back to the TCP Server, which finishes the establishment of the TCP connection. HTTP Requests and Responses can now be sent across this established TCP connection.
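
The handshake above can be sketched with Python sockets, where connect() only returns after SYN, SYN+ACK, and ACK have completed. A loopback listener stands in for the Server Service here; this is an illustration, not a real service.

```python
import socket

# Sketch of the three-way handshake against a loopback "Server Service".
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # SYN, SYN+ACK, ACK happen here
conn, addr = server.accept()         # server-side view of the new connection
print("Client IP and ephemeral port:", addr)  # identifies this TCP Connection
conn.close(); client.close(); server.close()
```

Note that the Client side of the connection is identified by its IP address plus an ephemeral port chosen by the operating system, exactly as described above.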

TCP Connections are established between Port Numbers – The TCP Connection is established between the Client’s TCP Port (ephemeral port), and the Server’s TCP Port (e.g. port 80 for web servers).

Multiple Clients to one Server Port – A single Server TCP Port can have many TCP Connections with many clients. Each combination of Client Port/Client IP with the Server Port is considered a separate TCP Connection. You can view these TCP Connections by running netstat on the server.

  • Netstat shows Layer 4 only – the netstat command shows TCP connections (Layer 4) only, not HTTP Requests (Layer 7).

HTTP requires a TCP Connection to be established first – When an HTTP Client wants to send an HTTP Request to a web server, a TCP Connection must be established first. The Client and the Server do the three-way TCP handshake on TCP Port 80. Then the HTTP Request and HTTP Response is sent over this TCP connection. HTTP is a Layer 7 protocol, while TCP is a Layer 4 protocol. Higher layer protocols run on top of lower layer protocols. It is impossible to send a Layer 7 Request (HTTP Request) without first establishing a Layer 4 session/connection.

Use Telnet to verify that a Service is listening on a TCP Port number – when you telnet to a server machine on a particular port number, you are essentially completing the three-way TCP handshake with a particular Server Service. This is an easy method to determine if a Server machine has a Service listening on a particular port number, and that you’re able to communicate with that port number.
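
The same telnet-style check can be sketched in Python: attempt the three-way handshake within a timeout. The host and port are whatever you’re testing; the function name is just illustrative.

```python
import socket

# Sketch of a telnet-style TCP port check: try to complete the
# three-way handshake within a timeout.
def port_is_open(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True    # handshake completed: a Service is listening
    except OSError:
        return False       # refused, filtered, or unreachable
```

Keep in mind that a False result only proves the handshake didn’t complete; a firewall silently dropping packets looks the same as a Service that isn’t running.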

UDP Protocol (Layer 4)

UDP is Sessionless – UDP is a much simpler protocol than TCP. For example, there’s no three-way handshake like TCP. Since there’s no handshake, there’s no UDP session.

No Sequence Numbers – UDP packets do not contain sequence numbers. If a packet is lost, UDP does not request a resend like TCP does. If packets arrive out of order, UDP cannot determine this, and cannot reassemble them in the correct order. If these features are desirable, then the application (Layer 7) needs to do these features instead of relying on UDP to do it.

Why UDP over TCP? – TCP session information is contained in every TCP packet, thus making every TCP packet (20 byte header) bigger than a UDP packet (8 byte header). Also, TCP retransmissions are not as fast or efficient as other methods of recovering from packet loss. For example, Citrix has recently reconfigured HDX/ICA so it can use UDP instead of TCP. They did this by essentially creating their own version of TCP sessions. Citrix’s version of network sessions over UDP is more efficient (smaller packets, quicker recovery) than TCP’s version.

  • Audio uses UDP – For audio traffic, there’s usually no point in resending lost packets. Getting rid of retransmissions makes UDP more efficient (less bandwidth, less latency) than TCP.

You can’t use Telnet to troubleshoot UDP – since there’s no three-way handshake in UDP, it’s impossible to use telnet to determine if a Server Service is listening on a UDP port or not. With UDP, the UDP Client machine sends a Request to the UDP Server. The UDP Server does not send any acknowledgment that it received the UDP Request. Thus all a UDP Client can do is wait for a response from the server. If the server doesn’t respond, it doesn’t necessarily mean that the server isn’t listening on the UDP port.

You can’t use netstat to see UDP sessions – since there’s no UDP session, if you run netstat on a machine, you won’t see any UDP sessions. To really see UDP traffic, use a packet capture program like Wireshark.
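
The sessionless nature of UDP can be sketched with Python sockets: sendto() just emits a datagram with no connect(), no SYN, and no acknowledgment from UDP itself. Loopback keeps the example self-contained.

```python
import socket

# UDP sketch: no handshake, no session. sendto() just puts a datagram
# on the wire; UDP itself never confirms delivery.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))   # no connect(), no SYN
data, addr = receiver.recvfrom(1024)
print(data)   # b'hello' arrived, but the sender was never told so
sender.close(); receiver.close()
```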

Application Sessions – web servers typically have their own application session mechanism. Application sessions usually extend beyond a single TCP session to encompass multiple TCP sessions. Web Server sessions are detailed in Part 2.

HTTP Basics

HTTP Protocol Overview

URLs – users enter a URL into a browser’s address bar. An example URL is https://en.wikipedia.org/wiki/URL

  • https:// or http:// – the first part of the URL specifies the Layer 7 protocol that the browser will use to connect to the web server.
  • en.wikipedia.org – the second part of the URL is the human-readable DNS name that translates to the web server’s IP address.
  • /wiki/URL – the remaining part of the URL is the Path and Query. The Path indicates the path to the file you want to download. The Query, if present, follows a ? and contains parameters sent to the web server.

Why forward slashes in URLs? – Web Server programs were originally developed for UNIX and Linux. Thus they share some of the Linux characteristics. For example, file paths in HTTP requests use forward slash (/) instead of backslash (\).

  • Some URLs are case sensitive – Since UNIX/Linux is case sensitive, file paths in HTTP requests are sometimes case sensitive.

HTTP vs HTML – Web Browsers use the HTTP protocol to download HTML files from a web server. HTTP is the communication protocol to get a file. HTML defines how a web browser displays a web page (HTML file) to a user. There are many books and videos that explain HTML, but not many explain HTTP.

HTTP Packet

HTTP Request Command (Method) – at the top of every HTTP Request packet is the HTTP command, which might look like this: GET /Citrix/StoreWeb/login.aspx HTTP/1.1

HTTP Response Code – at the top of every HTTP Response is a code like this: HTTP/1.1 200 OK. Different codes mean success or error. Code 200 means success. You’ll need to memorize many of these codes.

Header and body – HTTP Packets are split into two sections: header, and body.

  • HTTP Headers – Below the Request Command (Method), are a series of Headers. Web Browsers insert Headers into requests. Web Servers insert Headers into responses. Request Headers and Response Headers are totally different. You’ll need to memorize most of these Headers.
  • HTTP Body – Below the Headers is the Body. Not every HTTP Packet has a Body. In a HTTP Response, the HTTP Body contains the actual downloaded file (e.g. HTML file). In an HTTP Request, the HTTP Body contains data (parameters) that is uploaded with the Request.

Raw HTTP packets – To view a raw HTTP packet, use your browser’s developer tools (F12 key), or use a proxy program like Fiddler. In a Browser’s Developer Tools, switch to the Network tab to see the HTTP Requests and HTTP Responses.
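
A raw HTTP exchange can also be sketched over a loopback socket, which makes the Request line, Response code, Headers, and Body visible as plain text. The path, headers, and body below are purely illustrative.

```python
import socket, threading

# Sketch: a minimal loopback HTTP exchange so the raw packets are visible.
def tiny_server(srv):
    conn, _ = srv.accept()
    conn.recv(4096)                      # the raw HTTP Request text
    conn.sendall(b"HTTP/1.1 200 OK\r\n"            # Response Code line
                 b"Content-Type: text/html\r\n\r\n"  # a Response Header
                 b"<html>hello</html>")              # the Body
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0)); srv.listen(1)
threading.Thread(target=tiny_server, args=(srv,), daemon=True).start()

c = socket.create_connection(("127.0.0.1", srv.getsockname()[1]))
c.sendall(b"GET /index.html HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
reply = c.recv(4096).decode()
print(reply.splitlines()[0])             # HTTP/1.1 200 OK
c.close(); srv.close()
```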

Multiple HTTP requests – A single webpage requires multiple HTTP requests. When an HTML file is processed by a Web Browser, the HTML file contains links to supporting files (CSS, JavaScript, images) that must be downloaded from the web server. Each of these file downloads is a separate HTTP Request.

  • HTTP and TCP Connections – Every HTTP Request requires a TCP Connection to be established first. Older Web Servers tear down the TCP Connection after every single HTTP Request. This means that if a web page needs 20 downloads, then 20 TCP Connections, including 20 three-way handshakes, are required. Newer Web Servers keep the TCP Connection established for a period of time (HTTP Keep-Alive), allowing each of the 20 HTTP Requests to be sent across the existing TCP Connection.

HTTP Redirects – one HTTP Response Code that you must understand is the HTTP Redirect. These HTTP Response packets have response code 301 or 302. The HTTP Response Header named Location identifies the new URL that the browser is expected to navigate to.

  • Redirect Traffic flow:
    1. User’s Web Browser sends an HTTP Request to a Web Server
    2. Web Server sends back an HTTP Response with a 301/302 response code and Location header.
    3. User’s Web Browser navigates to the URL contained in the Location header. Note that the Web Server tells the Web Browser where to go. But it’s the browser that actually goes there.
  • Redirect usage – Redirects are used extensively by web applications. Most web-based applications would not function without redirects. A common usage of Redirects is in authenticated websites where an unauthenticated user is redirected to a login page, and after login, the user is redirected back to the original webpage.
  • Not all Web Clients support HTTP Redirects – Web Browsers certainly can perform a redirect. However, other Web Clients (e.g. Citrix Receiver) do not follow Redirects.
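
The redirect flow above can be sketched over loopback: the server replies with a 302 and a Location header, and it’s up to the client to make the next request. The paths here are illustrative.

```python
import socket, threading

# Sketch of a 302 Redirect: the Location header names the new URL;
# the Web Client itself must navigate there.
def redirect_server(srv):
    conn, _ = srv.accept()
    conn.recv(4096)
    conn.sendall(b"HTTP/1.1 302 Found\r\nLocation: /login.aspx\r\n\r\n")
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0)); srv.listen(1)
threading.Thread(target=redirect_server, args=(srv,), daemon=True).start()

c = socket.create_connection(("127.0.0.1", srv.getsockname()[1]))
c.sendall(b"GET /protected HTTP/1.1\r\nHost: localhost\r\n\r\n")
reply = c.recv(4096).decode()
c.close(); srv.close()

location = [h for h in reply.splitlines() if h.lower().startswith("location:")][0]
print(reply.splitlines()[0], "->", location)
```

Note that nothing in this exchange forces the client to follow the redirect, which is exactly why Web Clients that don’t follow Redirects simply stop here.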

Additional HTTP concepts will be detailed in Part 2.

Networking

Layer 2 (Ethernet) and Layer 3 (Routing) Networking

Subnet – all machines connected to a single “wire” are considered to be on the same subnet. Machines on the subnet can communicate directly with other machines on the same subnet.

Routers – If two machines are on different subnets, then those two machines can only communicate with each other by using an intermediary device that is connected to both subnets. This intermediary device is called a router. The router is connected to both subnets (wires) and can take packets from one subnet and put them on the other subnet.

Layer 2 – When machines on the same subnet want to communicate with each other, they use a Layer 2 protocol, like Ethernet.

Layer 3 – When machines on different subnets want to communicate with each other, they use a Layer 3 protocol, like IP (Internet Protocol).

Local IP address vs remote IP address

Local vs Remote – since different protocols are used for intra-subnet (Layer 2) and inter-subnet (Layer 3) communication, the machines need to know which other machines are on the local subnet, and which machines are on a remote subnet.

Subnet Mask – all machines have an IP address. All machines are configured with a subnet mask. The subnet mask defines which bits of the IP address are on the same subnet. For example, if a machine with address 10.1.0.1 wants to talk to a machine with address 10.1.0.2, and if the subnet mask is 255.255.0.0, when both addresses are compared to the subnet mask, the results are the same, and thus both machines are on the same subnet, and Ethernet is used. If the results are different, then the other machine is on a different subnet, and IP Routing is used. There is a considerable amount of training material on subnet masks so I won’t repeat that material here.
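
The comparison described above can be sketched with Python’s ipaddress module: mask 255.255.0.0 puts 10.1.0.1 and 10.1.0.2 in the same subnet, while 10.2.0.5 falls outside it.

```python
import ipaddress

# Sketch of the subnet-mask comparison: the mask defines which
# addresses belong to the same subnet.
subnet = ipaddress.ip_network("10.1.0.0/255.255.0.0")
print(ipaddress.ip_address("10.1.0.2") in subnet)   # True  -> same subnet, use Ethernet
print(ipaddress.ip_address("10.2.0.5") in subnet)   # False -> different subnet, use a router
```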

Wrong Subnet Mask – If either machine is configured with the wrong subnet mask, then one of the machines might think the other machine is on a different subnet, when actually it’s on the same subnet. Or one of the machines might think the other machine in on the same subnet, when actually it’s on a different subnet. Remember, same subnet communication uses a different communication protocol than between subnets. Thus it’s important that the subnet mask is configured correctly.

Layer 2 Ethernet communication

Every machine sees every packet – A characteristic of Layer 2 (Ethernet) is that every machine sees traffic from every other machine on the same subnet.

MAC addresses – When two machines on the same subnet talk to each other, they use a Layer 2 address. In Ethernet, this is called the MAC address. Every Ethernet NIC (network card) in every machine has a unique MAC address.

NICs Listen for their MAC address – The Ethernet packet put on the wire contains the MAC address of the destination machine. All machines on the same subnet see the packet. If the listening NIC has a MAC address that matches the packet, then the listening NIC processes the rest of the packet. If the listening NIC doesn’t have a matching MAC address, then the packet is ignored. You can override this ignoring of packets by turning on promiscuous mode, which is useful for packet capture programs (e.g. Wireshark).

Source MAC address – When an Ethernet packet reaches a machine, the machine needs to know where to send the reply. Thus both the destination MAC address and the source MAC address are included in the Ethernet packet.

Ethernet Packet Fields – In summary, a typical Ethernet packet contains the following fields:

  • Destination MAC address
  • Source MAC address
  • Destination IP address
  • Source IP address
  • Destination TCP/UDP port number
  • Source TCP/UDP port number

Other Layer 2 technologies – another common Layer 2 technology seen in datacenters is Fibre Channel for storage area networking (SAN). Fibre Channel has its own Layer 2 addresses called the World Wide Name (WWN). Fibre Channel does not use IP in Layer 3, and instead has its own Layer 3 protocol, and its own Layer 3 addresses (FCID).

ARP (Address Resolution Protocol)

Users enter IP Address, not MAC Address – when a user wants to talk to another machine, the user enters a DNS name, which is translated to an IP address. If the destination IP address is on a remote subnet, then the Layer 3 protocol IP Routing will get the packet to the destination. But if the destination IP address is on the same subnet as the source machine, then the destination IP address first needs to be converted to a MAC address. Machines use Address Resolution Protocol (ARP) to find the MAC address that’s associated with an IP address that’s on the same subnet.

  • Remember, machines use the Subnet Mask to determine if the destination is local or remote.

ARP Process – The source machine sends out an Ethernet broadcast with the ARP message “who has IP address 10.1.0.2”. Every machine on the same subnet sees the message. If one of the machines is configured with IP address 10.1.0.2, then that machine replies to the source machine, and includes its MAC address in the response. The source machine can now send a packet directly to the destination machine’s Ethernet MAC address.

ARP Cache – after the ARP protocol resolves an IP address to a MAC address, the MAC address is cached on the machine for a period of time (e.g. 30 seconds). If another IP packet needs to be sent to the same destination IP address, then there’s no need to perform ARP again, since the source machine already knows the destination machine’s MAC address. When the cache entry expires, then ARP needs to be performed again.

IP Conflict – Remember, a particular IP address can only be assigned to one machine. If two machines have the same IP address, then both machines will respond to the ARP request. Sometimes the ARP response will be one machine’s MAC address, and sometimes it will be the other machine’s MAC address. This behavior is typically logged as a “MAC move” or an “IP conflict”. Since only half the packets are reaching each machine, both machines will stop working.

Layer 3 on top of Layer 2

Routing to other subnets – When a machine wants to talk to a machine on a different subnet, the source machine needs to send the packet to a router. The router will then forward the packet to the destination machine on the other subnet.

Default gateway – Every client machine is configured with a default gateway, which is the IP address of a router on the same subnet as the client machine. The client machine assumes that the default gateway (router) can reach every other subnet.

  • On a NetScaler or UNIX/Linux device, the default route (default gateway) is shown as route 0.0.0.0/0.0.0.0.

Router’s MAC address – Since the router and the source machine are on the same Ethernet subnet, they use Ethernet MAC addresses to communicate. The source machine first ARP’s the router’s IP address to find the router’s MAC address. The source machine then puts the packet on the wire with the destination IP address and the router’s MAC address.

  • The Destination IP Address is the final destination’s (the web server’s) IP address, and not Router’s IP address. However, the MAC Address is the Router’s MAC address, and not the final destination’s MAC Address.
  • ARP across subnet boundaries – It’s not possible for a source machine to find the MAC address of a machine on a remote subnet. If you ping an IP address on a remote subnet, and if you look in the ARP cache, you might see the router’s MAC address instead of the destination machine’s MAC address. That’s because routers do not forward Ethernet broadcasts to other subnets.
  • Router must be on same subnet as client machine – since client machines use Ethernet, ARP, and MAC addresses to talk to routers, the router (default gateway) and the client machine must be on the same subnet. More specifically, the router must have an IP address on the same IP subnet as the client machine. When the client machine’s IP address and the router’s IP address are compared to the subnet mask, the results must match. You cannot configure a default gateway that is on a different subnet than the client machine.

Routing table lookup – When the router receives the packet on its NIC’s MAC address, it sees that the destination IP address is not one of the router’s IP addresses, so it looks in its memory (routing table) to determine what network interface it needs to put the packet on. The router has a list of which IP subnet is on which router interface. IP Subnets are defined by the address prefix and the subnet mask.
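
The routing table lookup can be sketched as a longest-prefix match: the router picks the most specific subnet that contains the destination IP. The routes and interface names below are hypothetical.

```python
import ipaddress

# Sketch of a routing table lookup: the most specific (longest prefix)
# route whose subnet contains the destination IP wins.
# These routes and interface names are hypothetical examples.
routes = {
    ipaddress.ip_network("10.1.0.0/16"): "eth0",
    ipaddress.ip_network("10.2.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"):  "default-gateway",   # the default route
}

def lookup(dest):
    dest = ipaddress.ip_address(dest)
    matches = [n for n in routes if dest in n]
    best = max(matches, key=lambda n: n.prefixlen)   # longest prefix wins
    return routes[best]

print(lookup("10.2.3.4"))   # eth1
print(lookup("8.8.8.8"))    # default-gateway (only 0.0.0.0/0 matches)
```

The 0.0.0.0/0 entry is the default route mentioned earlier: it matches every destination, but only wins when no more specific route does.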

Router ARP’s the destination machine on other subnet – If the destination IP address is on one of the subnets/interfaces that the router is connected to, then the router will perform an ARP on that subnet/interface to get the destination machine’s MAC address.

Router puts the original packet on the destination interface, but with some changes:

  • The destination MAC address is changed to the destination machine’s MAC address instead of the router’s MAC address.
  • The source MAC address in the packet is now the router’s MAC address, thus making it easier for the destination machine to reply.
  • The IP Addresses in the packet do not change. Only the MAC addresses change.

There can only be one default route on a machine, which impacts multi-NIC machines – Some machines (e.g. NetScaler appliances) might be configured with multiple IP addresses on multiple subnets. Only one router can be specified as the default gateway (default route). This default gateway must be on one of the subnets that the client machine is connected to. See the NetScaler networking sections below for details on how to handle the limitation of only a single default route.

Multiple Routers and Routing Protocols

Router-to-router communication – When a router receives a packet that is destined to a remote IP subnet, the router might not be Layer 2 (Ethernet) connected to the remote IP subnet. In that case, the router needs to send the packet to another router. It does this by changing the destination MAC address of the packet to a different router’s MAC address. Both routers need to be connected to the same Ethernet subnet.

Routing Protocols – Routers communicate with each other to build a topology of the shortest path or quickest path to reach a destination. Most of the CCNA/CCNP/CCIE training material details how the routers perform this path selection.

Ethernet Switches

Ethernet Subnet = Single wire – All machines on the same Ethernet subnet share a single “wire”. Or at least that’s how it used to work.

Switch backplane – Today, each machine connects a cable to a port on a switch. The switch merges the switch ports into a shared backplane. The machines communicate with each other across the backplane instead of a single “wire”.

MAC address learning – The switch learns which MAC addresses are on which switch ports.

Switches switch known MAC addresses to only known switch ports – If the switch knows which switch port connects to the destination MAC address of an Ethernet packet, then the switch only puts the Ethernet packet on the one switch port. This means that Ethernet packets are no longer seen by every machine on the wire. This improves security by preventing network capture tools from seeing every packet on the Ethernet subnet.

Switches flood unknown MAC addresses to all switch ports – If the switch doesn’t know which switch port connects to a destination MAC address, then the switch floods the packet to every switch port on the subnet. When the machine on one of those switch ports replies, the switch learns the MAC address on that switch port.

Switches flood broadcast packets – The switch also floods broadcast packets to every switch port in the Ethernet subnet.

Switches and VLANs

VLANs – A single Ethernet Switch can have different switch ports in different Ethernet Subnets. Each Ethernet Subnet is called a VLAN (Virtual Local Area Network). All switch ports in the same Ethernet Subnet are in the same VLAN.

VLAN ID – Each VLAN has an ID, which is a number between 1 and 4094 (IDs 0 and 4095 are reserved). Thus a Switch can have Switch Ports in up to 4094 different Ethernet Subnets.

Switch Port VLAN configuration – a Switch administrator assigns each switch port to a VLAN ID. By default, Switch Ports are in VLAN 1 and shut down. The Switch administrator must specify the VLAN ID and enable (unshut) the Switch Port.

Pure Layer 2 Switches don’t route – When a Switch receives a packet for a port in VLAN 10, it only switches the packet to other Switch Ports that are also in VLAN 10. Pure Layer 2 Switches do not route (forward) packets between VLANs.

Some Switches can route – Some Switches have routing functionality (Layer 3). The Layer 3 Switch has IP addresses on multiple Ethernet subnets (one IP address with MAC address for each subnet). The client machine has the Default Gateway set to the Switch’s IP address. When Ethernet packets are sent to the Switch’s MAC address, the Layer 3 Switch forwards (routes) the packets to a different IP subnet.

DHCP (Dynamic Host Configuration Protocol)

Static IP Addresses or DHCP (Dynamic) IP Addresses – Before a machine can communicate on a network, the machine needs an IP address. The IP address can be assigned statically by the administrator, or the machine can get an IP address from a DHCP Server. Most client machines use DHCP by default. DHCP is usually required for virtual desktops and non-persistent XenApp servers.

DHCP Process – When a DHCP-enabled machine boots, it sends a DHCP Request broadcast packet asking for an IP address. A DHCP server sees the DHCP IP address request, and sends back a DHCP reply with an IP address in it. DHCP servers keep track of which IP addresses are available and try to avoid IP conflicts.

DHCP Request doesn’t cross routers – The DHCP Request broadcast is Layer 2 (Ethernet) only, and won’t cross Layer 3 boundaries (routers).

DHCP Server on same subnet – If the DHCP server is on the same subnet as the DHCP client, then there’s no problem. But this is rarely the case.

DHCP Server on remote subnet – If the DHCP server is on a different subnet, then the local router needs to forward the DHCP request to the remote DHCP server. The local router must be configured to listen for DHCP requests. To enable DHCP forwarding on a subnet, ask the networking team to configure the subnet’s router (default gateway) with an IP Helper Address or DHCP Proxy/Forwarder.

DHCP Server provides the Default Gateway – When a DHCP server sends an IP address to the client, the DHCP server also sends the Default Gateway IP address. This allows the client machine to communicate both Layer 2 and Layer 3.

DHCP Scopes – A single DHCP server can hand out IP addresses to multiple subnets. Each subnet is a different DHCP Scope. The scope configuration and list of issued IP addresses are stored in a database.

DHCP Server Redundancy – If the DHCP Server is down, then DHCP Clients cannot get an IP address when they boot, and thus can’t communicate on the network. You typically need at least two DHCP Servers. However, the DHCP database is usually stored locally on each DHCP Server, so you need some mechanism to replicate the database between the DHCP Servers. Windows Server 2012 and newer have a DHCP database replication capability, as do other DHCP products like Infoblox.

DNS (Domain Name System)

DNS converts words to numbers – When users use a browser to visit a website, the user enters a human-readable, word-based address. However, machines can’t communicate using words, so these words must first be converted to a numeric address. That’s the role of DNS.

DNS Client – Every client machine has a DNS Client. The DNS Client talks to DNS Servers to convert word-based addresses (DNS names) into number-based addresses (IP addresses).

DNS Query – The DNS Client sends a DNS Query, which is a word-based address, to a DNS Server. The DNS Server sends back an IP Address.
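
The DNS Client is exposed to programs through resolver APIs. As a sketch, Python’s gethostbyname() issues the query and returns an IP address; “localhost” is used here so the example needs no network, but a real FQDN like www.google.com resolves the same way via the machine’s configured DNS Servers.

```python
import socket

# Sketch of the DNS Client API: gethostbyname() resolves a name to an
# IP address. "localhost" is resolved locally so this example is
# self-contained; real FQDNs go to the configured DNS Servers.
ip = socket.gethostbyname("localhost")
print(ip)   # 127.0.0.1
```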

DNS Servers configured on client machine – On every client machine, you specify which DNS Servers the DNS Client should use to resolve DNS names into IP addresses. You enter the IP addresses of two or more DNS Servers.

  • DHCP can deliver DNS Server addresses – These DNS Server IP addresses can also be delivered by the DHCP Server when the DHCP Client requests an IP address.

DNS scalability – The Internet has billions of IP addresses, many of which have DNS names associated with them. It would be impossible for a single DNS server to store every DNS name in a single database. To handle this scalability problem, DNS names are split into a hierarchy, with different DNS servers handling different portions of the hierarchy. The DNS hierarchy is a tree structure, with the root on top, and leaves (DNS records) on the bottom.

DNS names and DNS hierarchy – A typical DNS name has multiple words separated by periods. For example, www.google.com. Each word is handled by a different portion of the DNS hierarchy.

Root of the DNS tree – The root portion of the DNS tree is handled by many DNS servers hosted worldwide, operated by a variety of large organizations and institutions.

  • DNS Root Hints – The list of IP Addresses for the root DNS servers is hard coded into every DNS server. This list of root DNS servers is sometimes called Root Hints.

Walk the DNS tree – it’s critical that you understand this process:

  1. Implicit period (root) – DNS names are read from right to left. At the end of www.google.com is an implicit period. So the last character of every DNS name is a period, which represents the top (root) of the DNS tree.
  2. Next is .com. The root DNS Servers have a link to the .com DNS Servers. When a .com DNS name needs to be resolved, you first ask the root servers for the IP addresses of the DNS Servers that know about .com addresses. These .com DNS Servers are usually owned and maintained by the Internet Domain Registrars.
  3. Next is google.com. The .com servers have a link to the google.com DNS Servers. When a google.com DNS name needs to be resolved, you ask a .com DNS server for the IP addresses of the DNS servers that know about google.com addresses.
  4. Finally, you ask the google.com DNS Servers to resolve www.google.com into an IP address. The google.com DNS Servers can resolve www.google.com directly without linking to any other DNS Server.

Local DNS Servers – DNS Clients do not resolve DNS names themselves. Instead, they send the DNS Query to one of their configured DNS Servers, and that DNS Server resolves the DNS name into an IP address. The DNS Server IP addresses configured on the DNS Client are sometimes called Local DNS Servers and/or Resolvers.

Recursive queries – A DNS Server can be configured to perform recursive queries. When a DNS Client sends a DNS Query to a DNS Server, if the DNS Server can’t resolve the address using its local database, then the recursive DNS Server will walk the DNS tree to get the answer. If recursion is not enabled, then the DNS server simply sends back an error (or a referral) to the DNS client.

DNS Caching – Resolved DNS queries are cached for a configurable period of time. This DNS cache exists on both the Resolver/Recursive DNS Server and on the DNS Client. The caching time is defined by the TTL (Time-to-live) field of the DNS record. When a DNS Client needs to resolve the same DNS name again, it simply looks in its cache for the IP address, and thus doesn’t need to ask the DNS Resolver Server again. Similarly, if two DNS Clients are configured to use the same Local DNS Servers/Resolvers, and the second Client resolves a DNS name that the first Client already resolved, the DNS Resolver simply answers from its own cache. There’s no reason to walk the DNS tree again until the TTL expires.

DNS is not in the data path – Once a DNS name has been resolved into an IP Address, DNS is done. The traffic is now between the user’s client software (e.g. web browser), and the IP address. DNS is not in the data path. It’s critical that you understand this, because this is the source of much confusion when configuring NetScaler GSLB.

FQDN – When a DNS name is shown as multiple words separated by periods, this is called a Fully Qualified Domain Name (FQDN).

DNS Suffixes – But you can also sometimes just enter the leftmost word of a DNS name and leave off the rest. In this case, the DNS Client will append a DNS Suffix to the single word, thus creating an FQDN, and send the FQDN to the DNS Resolver to get an IP address. A DNS Client can be configured with multiple DNS Suffixes, and the DNS Client will try each of the suffixes in order until it finds one that works. When you ping the single-word address, ping will show you the FQDN that it used to get an IP address.

Authoritative DNS Servers – A small portion of the DNS hierarchy/tree is stored on one or more DNS servers. These DNS servers are considered “authoritative” for this portion of the DNS tree. When you send a DNS Query to a DNS Server that has the actual DNS records in its configuration, the DNS Server will send back the IP Address, and flag the response as “authoritative”. But when you send a DNS query to a DNS Resolver that doesn’t have google.com‘s DNS records in its local database, the DNS Resolver will get the answer from google.com‘s DNS servers, and flag the IP Address response as “non-authoritative”. The only way to get an “authoritative” response for www.google.com is to ask google.com‘s DNS servers directly.

  • DNS Zones – The portion of the DNS tree hosted on an authoritative DNS server is called the DNS Zone. A single DNS server can host multiple DNS Zones. DNS Zones typically contain only a single domain name (e.g. google.com). If DNS records for both company.com and corp.com are hosted on the same DNS server, then these are two separate zones.
  • Zone Files – DNS records need to be stored somewhere. On UNIX/Linux DNS servers, DNS records are stored in text files, which are called Zone Files. Microsoft DNS servers might store DNS records inside of Active Directory instead of in files.

DNS records – Different types of DNS records can be created on authoritative DNS servers:

  • A (host) – this is the most common type of record. It’s simply a mapping of one FQDN to one IP address. If you create multiple Host records (A records) with the same FQDN, but different IP addresses, then you are essentially configuring DNS Round Robin load balancing.
  • CNAME (alias) – this record maps (aliases) one FQDN to another FQDN. CNAMEs allow you to put the IP address into one A record, and have other CNAME records that point to that one A record. Then whenever you update the IP address in the A record, all of the CNAMEs start resolving to the new IP address. Otherwise, if you instead created a separate A record for each FQDN pointing to the same IP address, you’d have to update the IP address in every one of those A records.
  • NS (name server, for delegation) – DNS Resolvers use NS records to enumerate every DNS server that is authoritative for a DNS Zone. This record is important for GSLB configurations.
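For illustration, here is how these record types might appear in a BIND-style zone file (all names and IP addresses below are made-up examples):

```
; fragment of a hypothetical zone file for example.com
www      IN  A      203.0.113.10        ; A record: FQDN -> IP address
www      IN  A      203.0.113.11        ; second A record for www = DNS round robin
portal   IN  CNAME  www.example.com.    ; CNAME: portal is an alias for www
gslb     IN  NS     ns1.example.net.    ; NS: delegate gslb.example.com to another DNS server
gslb     IN  NS     ns2.example.net.
```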

Resolving a CNAME – While the DNS Resolver is walking the tree, a CNAME might be returned instead of an IP address. However, the DNS Resolver’s job is to return an IP address, not a CNAME, so the Resolver has to start over again with walking the DNS tree to resolve the CNAME into an IP address. If the DNS Resolver gets another CNAME, then it starts over again until it finally gets an IP Address.

  • CNAME is not a redirect – The FQDN in the user’s address bar doesn’t change. The ultimate response from the DNS Resolver is still just an IP address.
  • CNAMEs and NetScaler GSLB – CNAMEs are typically used in NetScaler GSLB configurations. CNAMEs are one method of delegating resolution of a FQDN to a NetScaler.

NS records and DNS delegation – NS records can also be used to delegate sub-trees to other DNS servers. For example, google.com can delegate gslb.google.com to other DNS servers. In that case, in google.com, you create NS records named gslb.google.com that point to the IP addresses of two or more other DNS servers (or NetScaler appliances running GSLB).

Physical Networking

Layer 1 (Physical cables)

NetScalers connect to network switches using several types of media (cables).

Gigabit cables are usually copper CAT6 twisted pair with 8-wire RJ-45 connectors on both sides

10 Gigabit or higher cables are usually fiber optic cables with LC connectors on both sides.

Transceivers (SFP, SFP+)

  • Transceivers convert optical to electrical and vice versa – To connect a fiber optic cable between two network ports, you must first insert a transceiver into each port. The transceiver converts the electrical signals from the switch or NetScaler into optical (laser) signals, and the transceiver on the other end converts the optical signals back to electrical signals.
  • Transceivers are pluggable – just insert them. Because they are pluggable, you can insert different types of transceivers into different switch/NIC ports. Some switch ports might be fiber, while others might be copper.
  • Different types of transceiver – SFP transceivers only work up to gigabit speeds. For 10 Gig, you need SFP+ transceivers.

For cheaper 10 Gig connections, Cisco offers Direct Attach Copper (DAC) cables:

  • Transceivers are built into both sides of the cable so you don’t have to buy them separately.
  • The cables are based on Copper Twinax. The copper cable, and the transceivers built into it, are cheaper than optical fiber.
  • The cables are short distance (typically around 5 meters). For runs longer than about 10 meters, you must use optical fiber instead.

Port Channel (cable bonds)

Bonding – Two or more cables can be bound together to look like one cable. This increases both bandwidth and reliability. If you bond four Gigabit cables together, you get 4 Gigabits of bandwidth instead of just 1 Gigabit. If one of those cables stops working for any reason, then traffic can use the other three cables.

Network impact of Cable Bonding – Cable bonding is transparent to the rest of the network. Ethernet and IP routing don’t care if there’s one cable, or if there are multiple cables bonded into a single link. However, if you connect multiple cables to the same VLAN without bonding, then that definitely impacts both Ethernet and IP routing. Don’t connect multiple cables to one VLAN unless you bond those cables.

Various Names for Cable Bonding – On Cisco switches, cable bonding functionality has several names. Probably the most common name is “port channel”. Other names include: “link aggregation”, “port aggregation”, “switch-assisted teaming”, and “Etherchannel”.

Bonding Configuration – to bond cables together, you must configure both sides of the connection identically. You configure the switch to bond cables. And you configure the NetScaler (or server) to bond cables. On NetScaler, the feature is called Channel. On NetScaler, a Channel is represented by a new interface called LA/1 or something like that. LA = Link Aggregate. If you want to bond cables on a NetScaler, then ask the switch administrator to configure the switch side first.

ARP to a single MAC on multiple NICs – Each cable is plugged into a NIC. Each NIC has its own MAC Address. An IP Address can only be ARP’d to a single MAC address, which means the incoming traffic only goes to one of the cables. To get around this problem, when a port channel (bond) is configured, a single MAC address is shared by all of the cables in the bond, and both sides of the cable bond know that the single MAC address is reachable on all members of the cable bond.

Load Balancing across the bond members – The Ethernet switch and the NetScaler will essentially load balance traffic across all members of the bond. There are several port channel load balancing algorithms, but the most common algorithm is based on source IP and destination IP. All packets that match the same source and destination will go down the same cable. Packets with other combinations of source and destination might go down a different cable. If you are bonding Gigabit cables, since a single source/destination connection only goes down one cable, it can only use up to 1 Gigabit of throughput. Bonds only provide increased bandwidth if there are many source/destination combinations.

LACP – Cables can be bonded together manually, or automatically. LACP is a protocol that allows the two sides (switch and NetScaler) of the bonded connection to negotiate which cables are in the bond, and which cables aren’t. LACP is not the actual bonding feature; instead, LACP is merely a negotiation protocol to make bonding configuration easier.

Multi-switch Port Channels – Port Channels (bonds) are usually only supported between one switch and one NetScaler. To bond ports from one NetScaler to multiple switches, you configure something called Multi-chassis Port Channel. Multi-chassis refers to the multiple switches. You almost always want multi-chassis since that lets you survive a switch failure.

  • Virtual Port Channel – On Cisco NX-OS switches, the multi-chassis port channel feature is called “virtual port channel”, or vPC for short. vPC requires LACP to be configured on both sides. When connecting a single Port Channel to multiple Nexus switches, ask the network team to create a “virtual port channel”.
  • Stacked Switches – Other switches support a “stacked” configuration where multiple switches look like one switch. There are usually cables in the back of the switches that connect them together. This multi-chassis port channel doesn’t usually need LACP, but there’s no harm in enabling LACP.

Configure Manual Channel On NetScaler – to create a “manual” channel, you go to System > Network > Channels, create a channel, select the channel interface name (e.g. LA/2), and add the interfaces.

Configure LACP Channel on NetScaler – To create a LACP channel, go to System > Network > Interfaces, double-click a member interface, scroll down, check the box to enable LACP, and enter a “key” (e.g. 1). All members of the same channel must have the same key. If you enter “1” as the key, then a new interface named LA/1 is created, where the “1” = the LACP key.

  • Channel Number on NetScaler can be different than the Channel Number on the Switch – The LACP “key” configured on the NetScaler does not need to match the port channel number on the switch side. NetScalers typically have Channels named LA/1, LA/2, etc., while Switches can have port channel interfaces named po281, po282, etc.
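As a rough CLI sketch of the two channel types described above (the interface numbers and the LACP key are examples; exact syntax can vary by firmware version):

```
> add channel LA/2 -ifnum 1/1 1/2                  # manual channel from interfaces 1/1 and 1/2
> set interface 1/3 -lacpMode ACTIVE -lacpKey 1    # LACP channel: key 1 creates interface LA/1
> set interface 1/4 -lacpMode ACTIVE -lacpKey 1    # same key = same channel
> show channel                                     # verify channel membership and state
```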

VLAN tagging

VLAN review – Earlier, I mentioned that switches can support multiple Ethernet subnets, and each of these Ethernet subnets is a different VLAN. Each switch port is configured to belong to a particular VLAN. Ports in the same VLAN use Ethernet to communicate with each other. Ports in separate VLANs use routers to communicate with each other.

Multiple VLANs on one port – Switches can also be configured to allow a single switch port to be connected to multiple VLANs. A NetScaler usually needs to be connected to multiple subnets (VLANs), as detailed later. You can either assign each VLAN to a separate cable, or you can combine multiple VLANs/subnets onto a single cable.

VLAN tagging – If a switch port supports multiple VLANs, when a packet is received by the switch port, the switch needs some sort of identifier to know which VLAN the packet is on. A VLAN tag is added to the Ethernet packet, where the VLAN tag matches the VLAN ID configured on the switch.

  • Tags are added and removed on both sides of the switch cable – The NetScaler adds the VLAN tag to packets sent to the switch. The switch removes the tag and switches the packet to other switch ports in the same VLAN. When packets are switched to the NetScaler, the switch adds the VLAN tag so NetScaler knows which VLAN the packet came from.

Trunk Port vs Access Port – When multiple VLANs are configured on a single switch port, this is called a Trunk Port. When a single switch port only allows one VLAN (without tagging), this is called an Access Port. Switch ports default to Access Ports, unless a switch administrator specifically configures them as Trunk Ports. Access Ports don’t need VLAN tagging, but Trunk Ports do need VLAN tagging. When you want multiple VLANs on a single switch port, ask the networking team to configure a trunk port.

  • Trunk Ports and VLAN ID tagging – when a switch port is configured as a Trunk Port, by default, every VLAN assigned to that Trunk Port requires VLAN ID tagging. The NetScaler must be configured to add and remove the same VLAN ID tags that the switch is expecting.
  • Trunk Ports and Native VLAN – One of the VLANs assigned to the Trunk Port can be untagged. This untagged VLAN is called the native VLAN. Only one VLAN can be untagged. Native VLAN is an optional configuration. Some switch administrators, for security reasons, will not configure an untagged VLAN (native VLAN) on a Trunk Port. If untagged VLANs are not allowed on the switch, then you must configure the NetScaler to tag every packet, as detailed later. If there’s a native VLAN, then some NetScaler configuration (e.g. NSIP) is simplified, as detailed later.

Trunk Ports reduce the number of cables – if you had to connect a different cable (or Port Channel) from NetScaler for each VLAN, then the number of cables (and switch ports) can quickly get out of hand. The purpose of Trunk Ports is to reduce the number of cables.

Trunk Ports and Port Channels are separate features – If you want to bond multiple cables together, then you configure a Port Channel. If you want multiple VLANs on a single cable or Port Channel, then you configure a Trunk Port. These are two completely separate features.

Trunk Ports and Routing are separate features – Configuring a Trunk port with multiple VLANs does not automatically enable routing between those VLANs. Each VLAN on the Trunk Port is a separate Layer 2 Ethernet broadcast domain, and they can’t communicate with each other without routing. Routing is configured in a separate part of the Layer 3 switch, or on a separate router device. In other words, Trunk Ports are unrelated to routing.

Multiple NICs in one machine

A single machine (e.g. NetScaler) can have multiple NICs, which means multiple cables.

Single VLAN/subnet does not need VLAN configuration on the NetScaler – if the NetScaler is only connected to one subnet/VLAN, then no special configuration is needed. Just create the NSIP, SNIP, and VIPs in the same IP subnet. You can optionally bond multiple cables into a Port Channel for redundancy and increased bandwidth.

Two or more NICs to one VLAN requires bonding – If two or more NICs are connected to the same VLAN, then the NICs must be bonded together into a Port Channel. Port Channels require identical configuration on the switch side and on the NetScaler side. If you don’t bond them together, then you run the risk of bridging loops and/or MAC moves.

Multiple VLANs/subnets requires VLAN configuration on the NetScaler – if a NetScaler is connected to multiple IP subnets, then the NetScaler must be configured to identify which subnet is on which NIC. On the NetScaler, for each IP subnet, you create a Subnet IP address (SNIP). Then you create a VLAN object, bind it to an interface (or Port Channel), and bind a subnet IP address (SNIP) to the VLAN object, so NetScaler knows which IP addresses are on which VLAN and interface.

    • VLAN objects are required on a multi-homed NetScaler, even if VLAN tagging is not needed – If a NetScaler is connected to two subnets, it doesn’t matter if VLAN tagging is required or not; the VLAN objects still must be defined on the NetScaler, so the NetScaler can link IP subnets with interfaces.
    • VLAN Tagging – You specify the VLAN ID when creating the VLAN object. If the switch port is a Trunk Port, then there’s a checkbox in the VLAN object to enable tagging. If the switch Trunk Port is configured with a native VLAN, then one of the VLANs bound to the Interface/Channel can be untagged.
    • If VLAN tagging is not needed, then the VLAN ID configured on the NetScaler doesn’t have to match the switch’s VLAN ID – If the VLAN is not tagged, then the entered VLAN ID on NetScaler is only locally significant, and doesn’t have to match the switch’s VLAN ID. However, it’s easier to troubleshoot if the VLAN IDs match.

Routing table – When you create a SNIP/VLAN on a NetScaler, a “direct” connection is added to the routing table. You can view the routing table at System > Network > Routes. “Direct” means the NetScaler has a Layer 2 connection to the IP Subnet.

One Default Route – the routing table usually has a route 0.0.0.0 that points to the Default Gateway/Router. There can only be one default route on a device. The NetScaler can send Layer 2 packets out any directly connected interface/VLAN, but Layer 3 packets only go out the one default route, which is on only one VLAN.

Static Routes to override Default Route – To use routers on a different subnet than the default route, you add static routes to the routing table. To add a static route, you specify the destination subnet you are trying to reach, and the router (Next Hop or Gateway) you want to use to reach that destination. The Next Hop address must be on one of the VLANs that the NetScaler is connected to.
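A sketch of static routes from the CLI (the router and subnet addresses are hypothetical):

```
> add route 10.0.0.0 255.0.0.0 10.1.5.1      # summarized internal subnets via an internal router
> add route 0.0.0.0 0.0.0.0 192.0.2.1        # the one default route, usually via the DMZ router
> show route                                 # view the routing table, including "direct" routes
```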

How Source IP is chosen – when the multi-VLAN NetScaler wants to send a packet to a remote subnet (not directly connected) through a router, the NetScaler first looks in its routing table for the next hop address. The NetScaler must have a SNIP address on the same subnet as the next hop address. This subnet-specific IP address is used as the Source IP for the Layer 3 packet. The destination machine sends the reply to this subnet-specific Source IP.

NetScaler Networking

Traffic flow through NetScaler

VIPs (Virtual IP) – VIPs receive traffic. When you create a Virtual Server (e.g. Load Balancing Virtual Server), you specify a Virtual IP address (VIP). This VIP listens for traffic from clients. You also specify a port number to listen on.

SNIPs (Subnet IP) – SNIPs are the Source IP when NetScaler sends traffic to a web server. When NetScalers need to send a packet, they look in the routing table for the next hop address, and select a SNIP on the same subnet as the next hop. This SNIP is inserted into the packet as the Source IP. The web servers reply to the SNIP.

Load Balancing traffic – simplified

  • VIP/Virtual Server – Clients send traffic to a VIP.
  • Services – Bound to the Load Balancing Virtual Server are one or more Load Balancing Services (or Service Group). These Load Balancing Services define the web server IP address and the web server port number. NetScaler chooses one of the Load Balancing services, and forwards the HTTP request to it.
  • Monitors – NetScaler should not send traffic to a web server unless that web server is healthy. Monitors periodically send health check probes to web servers.
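The three pieces above might be wired together from the CLI like this (the VIP, server IPs, and object names are made up):

```
> add lb vserver lb_web HTTP 10.1.1.100 80    # VIP/Virtual Server that clients connect to
> add service svc_web1 10.1.2.11 HTTP 80      # back-end web server 1
> add service svc_web2 10.1.2.12 HTTP 80      # back-end web server 2
> bind lb vserver lb_web svc_web1
> bind lb vserver lb_web svc_web2
> bind service svc_web1 -monitorName http     # built-in HTTP monitor probes server health
> bind service svc_web2 -monitorName http
```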

NetScaler Source IP

SNIP replaces Client IP – When NetScaler communicates with a back-end web server, the source IP is a SNIP. The web server does not see the original Client IP address. Essentially, the source IP address in the original HTTP packet was changed from the Client IP to the SNIP. On other load balancers, this is sometimes called Source NAT.

If SNIP is the source IP, how to log the original Client IP? – Since web servers behind a NetScaler only see the SNIP, the HTTP entries in the web server access logs (e.g. IIS log) all come from the same SNIP. If the web server needs to see the real Client IP, then NetScaler has two options: insert the Client IP into a HTTP Request Header, or configure NetScaler to not use a SNIP.

Client IP Header Insertion – when you create a Load Balancing Service, there’s a checkbox to insert the real client IP into a user-defined HTTP Header. This Header is typically named X-Forwarded-For, or Real IP, or Client IP, or something like that. The web server then needs to be configured to extract the custom HTTP header and log it. The packets on the wire still have a SNIP as the Source IP.
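From the CLI, Client IP header insertion might look like this (the service name and header name are examples):

```
> set service svc_web1 -CIP ENABLED X-Forwarded-For   # insert real client IP into this HTTP header
```

The web server (e.g. IIS) must then be configured to extract and log the X-Forwarded-For header.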

USIP – The default mode for NetScaler is Use Subnet IP (USNIP). This can be changed to Use Source IP (USIP), which leaves the original Source IP (Client IP) in the packets. When web servers respond, they send the reply to the Client IP, and not the SNIP. If the Response does not go through the NetScaler, then NetScaler is only seeing half of the conversation, which breaks many NetScaler features. If you need USIP mode, then reconfigure the default gateway on the web servers to point to a NetScaler SNIP. When the web server replies to the Client IP, it will send the reply packet to its default gateway, which is a NetScaler SNIP, thus allowing NetScaler to see the entire conversation. USIP can be enabled globally for all new Load Balancing Services, or can be enabled on specific Load Balancing Services, so you can use SNIP for some web servers and USIP for others.

NetScaler networking design questions

Dedicated management VLAN? – Do you want to put the NetScaler Management IP (NSIP) on a dedicated Management VLAN? If so, then the NetScaler needs to be connected to the Management VLAN.

  • Dedicated cable? – Is the management VLAN on its own cable? Or is it on a Trunk Port with other VLANs?
  • Access Port – If the management VLAN is on a dedicated cable, then configure that switch port as an Access Port so the NSIP VLAN is not tagged.
  • Native VLAN on Trunk Port – If the management VLAN is on a Trunk Port, it’s easiest if the NSIP VLAN is untagged, so configure the NSIP VLAN as the native VLAN.
  • Tagged management VLAN? – Or does the network team require the management VLAN to be tagged? If so, then NetScaler will need special NSVLAN configuration.

Interface 0/1 is only for management – Dedicated management VLANs are usually connected to interface 0/1 on the NetScaler. If you don’t have a dedicated management VLAN, then don’t use interface 0/1 on physical NetScalers and instead, use interfaces 1/1 and higher. That’s because interface 0/1 is not optimized for high-throughput traffic.

What VLAN do you want the VIPs to be on? – one VLAN? Multiple VLANs? Clients send traffic to VIPs. For public-facing VIPs, they are typically created on a DMZ VLAN. You must connect the NetScalers to all VLANs that host NetScaler VIPs. In other words, the NetScaler must be Layer 2 connected to any VLAN where you want to create a NetScaler VIP.

Do you want the NetScaler to be Layer 2 connected to the web server VLANs? – there usually is no requirement for NetScaler to be Layer 2 connected to the web servers, since NetScaler can use a router to reach the web servers on remote subnets.

  • Web Servers reply to SNIP – In USNIP mode (the default), Web Servers reply to the packet’s Source IP, which is a NetScaler SNIP. This is sometimes called one-arm mode, because there’s no need to change any of the networking on the web servers. The Default Gateway on the Web Servers does not need to be changed. The Web Servers do not need to be moved to any other VLAN.
  • SNIP as Web Server Default Gateway – Some load balancing architectures require the web servers to use a NetScaler SNIP as their default gateway. This is either an older architecture, or an advanced architecture. In this case, the NetScaler SNIP would need to be on the same subnet as the web servers, and thus the NetScaler needs to be connected to the web server VLAN. This is sometimes called two-arm mode.
  • One-arm vs two-arm is unrelated to the number of VLANs – One-arm and two-arm have nothing to do with the number of VLANs a NetScaler is connected to. With one-arm, the Source IP of the packets is changed to a NetScaler SNIP, thus no networking changes are needed on the web servers. With two-arm, the source IP of the packets is not changed, but the web server replies need to get to the NetScaler, so the web server Default Gateway is changed to a NetScaler SNIP.

Which networking connections need redundancy? – plug in two or more cables and bond them (Port Channel)

  • Port Channel across multiple switches? – on Cisco NX-OS, configure “virtual port channel”. LACP is required on the NetScaler.

Will you combine multiple VLANs onto a single Interface/Channel? – if so, then configure the switch port or channel as a Trunk Port.

Which VLAN will host the default route (default gateway)? – The default route is usually through a router on the DMZ VLAN, which allows the NetScaler to send replies to any Internet IP address.

  • Static routes for internal subnets – for internal subnets, create static routes that use an internal router as next hop address. Instead of adding a static route for every single internal subnet (e.g. 10.10.5.0/24, 10.10.6.0/24, etc.), can you summarize the internal networks (e.g. 10.0.0.0/8)? Ask the networking team for assistance.
  • PBR for dedicated management interface – When an internal machine connects to the NSIP, it sends a packet that eventually goes through a router that is connected to the dedicated management VLAN. When the NSIP replies, it should send that reply back to the same management router. However, your default route is probably on the DMZ network, and you probably have static routes for internal subnets that use a router on a different data VLAN. To send the replies correctly, configure the NetScaler with a Policy Based Route that causes all packets with Source IP = NSIP to use a management VLAN router as the next hop address. PBR also fixes routing issues for traffic that is sourced from the NSIP (LDAP, NTP, Syslog, etc.)
  • MBF? – Another option for the management network is MAC Based Forwarding, which keeps track of which interface a packet came in on, and replies out the same interface. This works for replies from the NSIP, but doesn’t do anything for traffic that is sourced by the NSIP (LDAP, NTP, Syslog, etc.).
  • Multiple Internet Circuits – If clients connect to the NetScaler VIPs through multiple Internet circuits, then you probably want replies to go back out the same way they came in. This won’t work if you only have a single default route. The easiest way to enable this is to enable MAC Based Forwarding (MBF), which keeps track of which interface/router a client request came in on, and replies out the same interface/router. You can combine MBF with PBR for the NSIP-sourced traffic.
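A hedged sketch of the PBR and MBF options described above (the NSIP and the management router IP are hypothetical):

```
> add ns pbr pbr_mgmt ALLOW -srcIP = 10.0.0.5 -nextHop 10.0.0.1   # NSIP-sourced traffic uses the mgmt router
> apply ns pbrs                                                   # PBR changes must be applied to take effect
> enable ns mode MBF                                              # optional: reply out the interface the request came in on
```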

Is the NetScaler connected to multiple VLANs/Subnets? – if so, then you must configure VLANs on the NetScaler. VLAN objects are required on the NetScaler whether you need VLAN tagging or not.

A NetScaler might be connected to a single VLAN – this is the easiest configuration. Just create NSIP, SNIP, and VIPs in the same IP subnet. No special NetScaler networking configuration required.

NetScaler Forwarding Tables

NetScaler has at least three tables for choosing how to forward a packet. They are listed below in priority order (MBF overrides PBR, which overrides the routing table).

  • MAC Based Forwarding (MBF) – keeps track of which interface/router a client request came in on, and replies out the same interface/router. Only works for replies. Since it overrides the routing table, MBF is usually discouraged unless absolutely necessary.
  • Policy Based Route (PBR) – chooses a next hop address based on information in the packet (e.g. source IP, source port, destination port). Normal routing only chooses next hop based on destination IP, while PBR can use additional packet fields. PBRs are difficult to maintain, and thus most networking people try to avoid them. But they are sometimes necessary (e.g. dedicated management network).
  • Routing Table – the routes in the routing table come from three sources: SNIPs (directly connected subnets), manually-configured Static Routes (including default route), and Dynamic Routing (OSPF, BGP).

NetScaler networking configuration

NetScaler networking configuration vs Server networking configuration – NetScaler networking is completely different than server networking. NetScaler is configured like a switch, not like a server.

    • Servers assign IPs to NICs – On servers, you configure an IP address directly on each NIC. Most servers only have one NIC.
    • NetScalers are configured like a Layer 3 Switch – On NetScalers, you assign VLANs to interfaces, just like a switch. Then you put NetScaler-owned IP addresses into each of those VLANs, which again, is just like a Layer 3 switch. More specifically, you create VLAN objects, bind the VLAN to the interface/channel, and bind a SNIP to the VLAN.

Disable unused Interfaces – if a NetScaler interface (NIC) does not have a cable, then disable the interface (System > Network > Interfaces, right-click an interface, and Disable). If you don’t disable the unused interfaces, then High Availability (HA) will think the interfaces are down and thus failover.
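For example, from the CLI (the interface number is an example):

```
> disable interface 1/5              # unused interface: disable it so HA ignores it
> set interface 1/5 -haMonitor OFF   # alternative: leave it enabled but exclude it from HA monitoring
```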

LACP – if your port channels have LACP enabled, go to System > Network > Interfaces, edit two or more member interfaces, check the box for LACP, and enter the same LACP Key. If you enter 1 as the key, then a channel named LA/1 is created.

For manual port channels, go to System > Network > Channels, add a channel, select LA/1 or similar, and bind the member interfaces.

NSIP is special – NSIP lives in VLAN 1. If you don’t need to tag the management VLAN, then leave NSIP in VLAN 1. VLAN 1 on the NetScaler does not need to match the switch because you’re not tagging the packets with the VLAN ID. When you bind VLANs to the other interfaces, those interfaces are removed from VLAN 1 and put in other VLANs. The remaining interface in VLAN 1 is your management interface.

    • NSVLAN – if your management VLAN is tagged, then normal VLAN tagging configuration won’t work. Instead, you must configure NSVLAN to tag the NSIP/management packets with the VLAN ID. All other VLANs are configured normally as shown next.
    • PBR – if you are connected to a dedicated management VLAN/subnet, configure a Policy Based Route (PBR) with the NSIP as the Source IP, and a management VLAN router as the next hop.
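A sketch of NSVLAN from the CLI (the VLAN ID and interface are examples; note that NSVLAN changes typically require a reboot to take effect):

```
> set ns config -nsvlan 300 -ifnum 0/1 -tagged YES   # tag NSIP/management traffic with VLAN 300
```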

VLANs – If the NetScaler is connected to multiple VLANs:

    1. Create a SNIP for each VLAN (except the dedicated management VLAN).
    2. Create a VLAN object for each VLAN, and specify the VLAN ID (same as switch). It doesn’t matter if the VLAN is tagged or not, you still must create a separate VLAN object on NetScaler for each subnet.
    3. Bind the VLAN object to the interface or channel. If the switch needs the VLAN to be tagged, then check the box to tag the packets with the specified VLAN ID.
    4. Bind the VLAN object to the SNIP for that VLAN.
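The four steps above map to CLI commands roughly like this (the VLAN ID, SNIP, and channel name are hypothetical):

```
> add ns ip 10.1.2.5 255.255.255.0 -type SNIP       # step 1: SNIP for this subnet
> add vlan 100                                      # step 2: VLAN object, same ID as the switch
> bind vlan 100 -ifnum LA/1 -tagged                 # step 3: bind to interface/channel; tag if trunked
> bind vlan 100 -IPAddress 10.1.2.5 255.255.255.0   # step 4: link the SNIP to the VLAN
```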

Static Routes – Add static routes for internal subnets through an internal router on a “data” network. The “data” network is usually a high bandwidth connection that is different than the management network.

Change Default Route to DMZ router – now that PBR and Static Routes are configured, you can probably safely delete the default route (0.0.0.0) and recreate it to point to the DMZ router, without losing your connection to the NSIP.

Layer 2 Troubleshooting – To verify VLAN connectivity, log into another device on the same VLAN (e.g. router/firewall) and ping the NetScaler SNIP or NSIP. Immediately check the ARP cache to see if the IP address was converted to a MAC address. If not, then layer 2 is not configured correctly somewhere (e.g. VLAN configuration), or there’s a hardware failure (e.g. bad switch port).

Layer 3 Troubleshooting – There are many potential causes of Layer 3 routing issues. A common problem is incorrect Source IP chosen by the NetScaler. To see the Source IP, SSH to the NetScaler, run shell, then run nstcpdump.sh host <Destination_IP>. You should see a list of packets with Source IP/Port and Destination IP/Port. Then work with the firewall and routing teams to troubleshoot packet routing.
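The troubleshooting sequence above, sketched (the destination IP is an example):

```
> shell                          # drop from the NetScaler CLI to the FreeBSD shell
# nstcpdump.sh host 10.1.2.50    # show packets to/from this destination, with Source IP/Port
```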

NetScaler High Availability (HA)

Disable unused interfaces – All network interfaces on NetScaler by default have HA monitoring enabled. If any enabled interface is down (e.g. cable not connected), then HA will failover. Disable the unused interfaces so HA won’t monitor them any more.

HA heartbeat packets are untagged – Two nodes in a HA pair send heartbeat packets out all interfaces. These heartbeat packets are untagged. If the switch does not allow untagged packets (no native VLAN on a Trunk Port), then some special configuration is required.

  1. On NetScaler, for each Trunk interface/channel, turn off tagging for one VLAN. Don’t worry about the switch configuration. Just do this on the NetScaler side.
  2. On NetScaler, go to System > Network > Interfaces (or Channels), double-click the interface/channel, and enable Tag All VLANs. The VLAN you untagged in step 1 will now be tagged again. As a bonus, HA heartbeat packets will also be tagged with the same VLAN ID you untagged in step 1.
  3. To verify that HA heartbeats are working across all interfaces, SSH to each NetScaler node, and run show ha node. Look for “interfaces on which HA heartbeat packets are not seen”. There should be nothing in the list.
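A CLI sketch of steps 2 and 3 (the channel name is an example):

```
> set interface LA/1 -tagall ON   # tag every VLAN on this channel, including HA heartbeats
> show ha node                    # verify: no "interfaces on which HA heartbeat packets are not seen"
```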

GARP – When an HA pair fails over, the new primary appliance performs a Gratuitous ARP. For two devices on the same subnet (e.g. router and NetScaler) to talk to each other, they first perform ARP to convert the IP addresses to MAC addresses. The IP address to MAC address mappings are cached (ARP cache). Each HA NetScaler node has different MAC Addresses. After a failover, the new primary needs to tell the router to start sending traffic to the new node’s MAC addresses instead of the old node’s MAC addresses. A GARP packet is intended to inform a router to update its ARP cache with the new MAC address information. Some routing devices (e.g. firewalls) will not accept GARP packets, and instead will wait for the ARP cache entry to time out. Or the router/firewall might not allow the IP address to move to a different MAC address. If HA failover stops all traffic, work with the router/firewall admin to troubleshoot GARP.

Port Channels and HA failover – A port channel has two or more member interfaces. If one of the member interfaces is down, should the appliance failover? How many member interfaces must fail before HA failover should occur? On NetScaler, double-click the channel, and you can specify a minimum throughput. If bonded throughput falls below this number due to member interface failure, then HA fails over.
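From the CLI, the minimum throughput threshold might be set like this (the channel name and value are examples):

```
> set channel LA/1 -throughput 2000   # HA fails over if channel throughput drops below 2000 Mbps
```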

Fail Safe – If at least one interface is down on both HA nodes, then both nodes are unhealthy, and neither node will serve traffic. Enable HA Fail-safe mode (on both nodes) so that one of the nodes stays primary, even if not every interface is functional.
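Fail-safe mode is a one-line CLI setting — a sketch, to be run on both nodes:

```
# Keep one node primary even when health checks fail on both HA nodes
set ha node -failSafe ON
```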

Firewalls

DMZ – public facing NetScaler VIPs should be on a DMZ VLAN that is sandwiched between two firewalls. That means the NetScaler must be connected to the DMZ.

  • Firewalls can route – When you connect a NetScaler to a DMZ, the firewall is usually the router.
  • NAT – Most DMZ VLANs use private IPs (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) instead of public IPs. These private IP addresses are not routable across the Internet. To make them reachable, you NAT a company-owned public IP to the private DMZ IP. Ask the firewall administrator to configure a NAT translation for each publicly accessible DMZ VIP.

Internal VIPs – internal VIPs (accessed by internal users) should be on an internal VLAN (not in the DMZ).

Multiple security zones – If you connect a single NetScaler to both DMZ and Internal, here’s how the traffic flows:

  1. Client connects to DMZ VIP, which goes through the firewall that separates the Internet from the DMZ.
  2. NetScaler internal SNIP connects to internal server. Since the NetScaler is connected to the Internal network, NetScaler will use an internal SNIP for this traffic. If you have a firewall between DMZ and internal, that firewall has now been bypassed.

Separate NetScaler appliances for DMZ and internal – Bypassing the DMZ-to-internal firewall is usually not what security teams want. Ask Security for their opinion on this architecture. A more secure approach is to have different NetScaler appliances for DMZ and internal. The DMZ appliance is connected only to DMZ (except dedicated management VLAN). When the DMZ NetScaler needs to communicate with an internal server, the DMZ NetScaler uses a DMZ SNIP to send the packet to the DMZ-to-internal firewall. The DMZ-to-internal firewall inspects the traffic, and forwards it if the firewall rules allow. The firewall rule allows the DMZ SNIP to talk to the web server, but the firewall does not allow client IPs (on the Internet) to talk directly to the web server.

Traffic Isolation – NetScaler has some features that can isolate traffic on a single appliance:

  • Net Profiles – allows you to specify a particular SNIP to be used by a vServer or Service. Firewalls can then allow different SNIPs to access different web servers.
  • Traffic Domains – each Traffic Domain is a different routing table. Put different NetScaler objects in different Traffic Domains. Not all NetScaler features are supported.
  • Partitions – carve up an MPX/VPX appliance into different partitions, with each partition having access to a subset of the hardware. Each partition is essentially a separate NetScaler config, which means separate routing tables. However, not all NetScaler features work in a partition.
  • NetScaler SDX – carve up physical hardware into multiple virtual machines. Each VM is a full NetScaler VPX, each with its own configuration. No feature limitations.
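As an example of the first option, a Net Profile pins a specific SNIP to a vServer or Service so the firewall can distinguish that traffic. The IPs and object names below are hypothetical:

```
# SNIP that the firewall allows to reach the internal web servers
add ns ip 10.1.1.50 255.255.255.0 -type SNIP

# Net Profile that forces traffic to source from that SNIP
add netProfile np-internal -srcIP 10.1.1.50

# Bind the Net Profile to a load balancing vServer (or a service)
set lb vserver lbvip-web -netProfile np-internal
```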

Network firewall (Layer 4) vs NetScaler Web App Firewall (Layer 7) – Most network firewalls only filter on port numbers and IP addresses. A few of them can filter on HTTP packet contents.

  • NetScaler WAF vs next-gen network firewalls – NetScaler has a security feature called Web App Firewall (WAF), which does HTTP inspection/filtering. HTTP packet inspection on next-gen network firewalls is usually signature based, but NetScaler WAF can also be configured with a whitelist to only allow HTTP packets that match the whitelist.
  • Put network firewalls in front of NetScalers – NetScaler is not a layer 4 firewall like a Cisco ASA or Palo Alto. Thus you should always put a network firewall in front of your NetScaler, even if you enabled the NetScaler WAF feature.
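A minimal WAF sketch follows. The profile, policy, and vServer names are examples; a real deployment starts with learning/relaxation tuning rather than a blanket policy, and on older firmware App Firewall policies are bound globally instead of to a vServer:

```
# Create a Web App Firewall profile with basic protections
add appfw profile waf-prof -defaults basic

# Policy that sends all traffic through the profile
add appfw policy waf-pol true waf-prof

# Bind the policy to a load balancing vServer
bind lb vserver lbvip-web -policyName waf-pol -priority 100 -type REQUEST
```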

Next Step

EUC Weekly Digest – June 17, 2017

Last Modified: Jun 17, 2017 @ 7:31 am

Here are some EUC items I found interesting last week. For more immediate updates, follow me at http://twitter.com/cstalhood.

For a list of updates at carlstalhood.com, see the Detailed Change Log.

 

XenApp/XenDesktop

VDA

App Layering (Unidesk)

Director/Monitoring

Provisioning Services

Receiver

NetScaler

XenMobile

VMware

EUC Weekly Digest – June 10, 2017

Last Modified: Jun 10, 2017 @ 7:17 am

Here are some EUC items I found interesting last week. For more immediate updates, follow me at http://twitter.com/cstalhood.

For a list of updates at carlstalhood.com, see the Detailed Change Log.

 

XenApp/XenDesktop

WEM/Profile Management

Provisioning Services

App Layering / Unidesk

Receiver

NetScaler

NetScaler MAS

XenMobile

VMware

EUC Weekly Digest – June 3, 2017

Last Modified: Jun 3, 2017 @ 6:54 am

Here are some EUC items I found interesting last week. For more immediate updates, follow me at http://twitter.com/cstalhood.

For a list of updates at carlstalhood.com, see the Detailed Change Log.

 

Citrix

XenApp/XenDesktop

Director/Monitoring

WEM/Profile Management

NetScaler MAS

XenMobile

XenServer

Citrix Cloud

Microsoft

Other

Site Updates – May 2017

Last Modified: Jun 1, 2017 @ 5:05 pm

To trigger RSS Feed, Mailing List, etc., here is the May 2017 excerpt from the Detailed Change Log.

Session Recording 7.14

Last Modified: Jun 17, 2017 @ 12:18 pm

Navigation

This article applies to Session Recording 7.14 and newer. Session Recording 7.13 and older is a different article.

💡 = Recently Updated

Planning

Citrix links:

Licensing – XenApp/XenDesktop Platinum Edition licensing is required.

Features – CTX224231 Session Recording: Features by Version.  💡

Farms – There is no relation between Session Recording farms and XenApp/XenDesktop farms. You can have Agents from multiple XenApp/XenDesktop farms recording to a common Session Recording server. Or you can split a XenApp/XenDesktop farm so that different Agents point to different Session Recording servers.

Disk space – The Session Recording server will need a hard drive to store the recordings. Disk access is primarily writes. You can also store recordings on a UNC path (this is required if load balancing).

Offloaded content (e.g. HDX Flash, Lync webcam, MMR) is not recorded.

Certificate – Session Recording server needs a certificate. The certificate must be trusted by Agents and Players. Internal Certificate Authority recommended.

  • If load balancing, on the NetScaler, install a certificate that matches the load balanced name.
  • On each Session Recording server, install a certificate that matches the Session Recording server name.

SQL:

  • Supported Versions = SQL 2008 R2 Service Pack 3 through SQL 2016.
  • The SQL database is very small.
  • The database name defaults to CitrixSessionRecording and can be changed.
  • A separate database is created for CitrixSessionRecordingLogging.
  • Temporary sysadmin (or dbcreator and securityadmin) permissions are needed to create the database, and sysadmin can be revoked after installation.
  • SQL Browser Service must be running.
  • SQL Server High Availability (AlwaysOn Availability Groups, Clustering, Mirroring) is supported. See Install Session Recording with database high availability at Citrix Docs. And see Citrix Blog Post Session Recording 7.13 – New HA and Database Options.

Installation media – Session Recording 7.14 is installed from the XenApp 7.14 / XenDesktop 7.14 ISO:

Session Recording Server Upgrade

You can upgrade from Session Recording 7.6 and newer.

  1. If this is a new installation, skip to Install.
  2. If this server is Windows 2012 or newer, then go to the downloaded XenApp/XenDesktop 7.14 ISO, and run AutoSelect.exe.
  3. If you see the Manage your delivery screen, click either XenApp or XenDesktop. The only difference is the product name shown in the installers.
  4. On the bottom right, click the Session Recording box.
  5. In the Licensing Agreement page, change the selection to I have read, understand, and accept the terms, and click Next.
  6. In the Core Components page, uncheck the box next to Session Recording Player. The Player is typically installed on physical workstations, but not on the Session Recording server. Click Next.
  7. In the Summary page, click Install.
  8. Click Close when prompted to restart.
  9. After reboot and login, if installation doesn’t continue automatically, then mount the XenApp/XenDesktop ISO, run AutoSelect.exe, and click the Session Recording box again. Installation should then continue.
  10. In the Finish page, click Finish.

Session Recording Server Installs

Install

  1. If this server is Windows 2012 or newer, go to the downloaded XenApp/XenDesktop 7.14 ISO, and run AutoSelect.exe.
  2. If you see the Manage your delivery screen, click either XenApp or XenDesktop. The only difference is the product name shown in the installers.
  3. On the bottom right, click the Session Recording box.
  4. In the Licensing Agreement page, change the selection to I have read, understand, and accept the terms, and click Next.
  5. In the Core Components page, uncheck the box next to Session Recording Player. This feature is typically installed on physical workstations, but not on the Session Recording server. Click Next.
  6. In the Features page, on the first Session Recording server, install everything.
  7. On the second Session Recording server (if load balancing), only select Session Recording Server. Click Next.

  8. In the Database and Server page, fill out the fields. Enter the SQL server name. Enter the database name. Enter the computer account for the Session Recording server. Click Test connection. Each load balanced Session Recording server must point to the same database. Click Next.
  9. In the Administrator Logging Configurator page, enter the name of the SQL database, click Test connection, and then click Next.
  10. In the Summary page, click Install.

  11. In the Finish page, click Finish.

IIS Certificate

  1. Use MMC Certificates snap-in (certlm.msc), or IIS, or similar, to request a machine certificate.
  2. In IIS Manager, right-click the Default Web Site, and click Edit Bindings.
  3. On the right, click Add.
  4. Change the Type to https.
  5. Select the certificate, and click OK.
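The binding can also be scripted. This is a hedged sketch using appcmd and netsh — the certificate thumbprint and appid below are placeholders you must replace with your own values:

```
REM Add an https binding on port 443 to the Default Web Site
%windir%\system32\inetsrv\appcmd.exe set site "Default Web Site" /+bindings.[protocol='https',bindingInformation='*:443:']

REM Associate the machine certificate with the binding (replace the thumbprint and appid)
netsh http add sslcert ipport=0.0.0.0:443 certhash=<thumbprint> appid={00112233-4455-6677-8899-aabbccddeeff}
```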

Session Recording Server Configuration

  1. From Start Menu, run Session Recording Server Properties.
  2. In the Storage tab, specify a path that has disk space to hold the recordings. UNC is supported. If load balancing, UNC is required.

    1. When using a UNC path, make sure the share allows both Session Recording servers (AD computer objects) to modify files in the path.
    2. The share must have a subfolder. The recordings will be saved to the subfolder.
    3. In the Session Recording Server Properties tool, add the UNC path with subdirectory to the Storage tab.
  3. In the Signing page, select (Browse) a certificate to sign the recordings.
  4. In the Playback tab, notice that Session Recording files are encrypted before transmission. Also note that it’s possible to view live sessions, but live sessions are not encrypted.
  5. In the Notifications tab, you can change the message displayed to users before recording begins.

  6. The CEIP tab lets you enable or disable the Customer Experience Improvement Program.
  7. See http://www.carlstalhood.com/delivery-controller-7-14-and-licensing/#ceip for additional places where CEIP is enabled.
  8. The Logging tab lets you configure Logging.
  9. When you click OK you’ll be prompted to restart the service.
  10. Session Recording relies on Message Queuing. In busy environments, it might be necessary to increase the Message Queuing storage limits. See CTX209252 Error: “Data lost while recording file…” on Citrix SmartAuditor.


David Ott Session Recording Cleanup Script: You may notice that the session recording entries/files don’t go away on their own. Here is how to clean them up. Just create a scheduled task to run the code below once per day (as system – elevated). See David’s blog post for details.

C:\Program Files\Citrix\SessionRecording\Server\Bin\icldb.exe remove /RETENTION:7 /DELETEFILES /F /S /L

Also see CTX134777 How to Remove Dormant Files From a SmartAuditor Database.
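A hedged example of creating that daily scheduled task with schtasks (the task name and start time are arbitrary):

```
REM Run the icldb cleanup daily at 2 AM as SYSTEM (create the task from an elevated prompt)
schtasks /create /tn "SessionRecordingCleanup" /tr "\"C:\Program Files\Citrix\SessionRecording\Server\Bin\icldb.exe\" remove /RETENTION:7 /DELETEFILES /F /S /L" /sc daily /st 02:00 /ru SYSTEM
```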

Load Balancing

  1. In SQL Server Management Studio, make sure each load balanced Session Recording server (AD computer account) is granted db_owner role in the Session Recording databases.
  2. On each Session Recording server, open regedit.
  3. Navigate to HKLM\Software\Citrix\SmartAuditor\Server.
  4. Create a new DWORD value named EnableLB and set it to 1. Repeat on both Session Recording servers.
  5. Configure NetScaler load balancing similar to the following:
    add server SR01 10.2.2.78
    add server SR02 10.2.2.139
    add serviceGroup svcgrp-Recording-SSL SSL -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -cltTimeout 180 -svrTimeout 360 -CKA NO -TCPB NO -CMP YES
    add lb vserver lbvip-Recording-SSL SSL 10.2.5.215 443 -persistenceType SOURCEIP -timeout 60 -lbMethod LEASTBANDWIDTH -cltTimeout 180
    bind lb vserver lbvip-Recording-SSL svcgrp-Recording-SSL
    bind serviceGroup svcgrp-Recording-SSL SR01 443
    bind serviceGroup svcgrp-Recording-SSL SR02 443
    bind serviceGroup svcgrp-Recording-SSL -monitorName https
    bind ssl vserver lbvip-Recording-SSL -certkeyName WildcardCorpLocal
  6. The only special part is the Load Balancing Method set to LEASTBANDWIDTH (or LEASTPACKETS).
  7. Create a DNS host record that resolves to the Load Balancing VIP and matches the certificate bound to the vServer.
  8. Go to C:\Windows\System32\msmq\Mapping and edit the file sample_map.xml.
  9. Follow the instructions at Configure Session Recording with load balancing at Citrix Docs. Each Session Recording server has a unique configuration for this file since the <to> element points to the local server name.
  10. When saving the file, you might have to save it to a writable folder, and then move it to C:\Windows\System32\msmq\Mapping.
  11. Then restart the Message Queuing service on each Session Recording server.
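As a sketch, the redirection entry in sample_map.xml looks something like the following. The host names are hypothetical, and the <to> element must name the local Session Recording server (so the file differs on each server); consult the Citrix Docs article for the exact syntax:

```
<redirections xmlns="msmq-queue-redirections.xml">
  <redirection>
    <from>https://sr-lb.corp.local*</from>
    <to>https://SR01.corp.local</to>
  </redirection>
</redirections>
```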

Authorization

  1. Note: authorization is configured separately on each load balanced Session Recording server.
  2. From the Start Menu, run Session Recording Authorization Console.
  3. In the PolicyAdministrator role, add your Citrix Admins group.
  4. If you use Director to configure Session Recording, add the Director users to the PolicyAdministrator role.
  5. In the Player role, add users that can view the recordings.
  6. By default, nobody can see the Administration Log. Add auditing users to the LoggingReader role.
  7. Repeat the authorization configuration on additional load balanced Session Recording servers.
  8. Session Recording has a Session Recording Administrator Logging feature, which opens a webpage to https://SR01.corp.local/SessionRecordingLoggingWebApplication/. Only members of the LoggingReader role can see the data.

Policies

  1. From the Start Menu, run Session Recording Policy Console.
  2. Enter the hostname of the Session Recording server, and click OK.
  3. Only one policy can be enabled at a time. By default, no recording occurs. To enable recording, right-click one of the other two built-in policies, and click Activate Policy.
  4. Or you can create your own policy by right-clicking Recording Policies, and clicking Add New Policy.
  5. After the policy is created, right-click it, and click Add Rule.
  6. Decide if you want notification or not, and click Next.
  7. Click OK to acknowledge this message.
  8. Choose the rule criteria. You can select more than one. Session Recording has an IP Address or IP Range rule.
  9. Then click the links on the bottom to specify the groups, applications, servers, and/or IP range for the rule. Click Next.

  10. Give the rule a name, and click Finish.
  11. Continue adding rules.
  12. When done creating rules, right-click the policy, and click Activate Policy.
  13. You can also rename the policy you created.

Session Recording Agent

Install the Agent on the VDAs. Platinum Licensing is required.

  1. On the Master VDA, go to the downloaded XenApp/XenDesktop 7.14 ISO, and run AutoSelect.exe.
  2. If you see the Manage your delivery screen, click either XenApp or XenDesktop. The only difference is the product name shown in the installers.
  3. On the bottom right, click the Session Recording box.
  4. In the Licensing Agreement page, change the selection to I have read, understand, and accept the terms, and click Next.
  5. In the Core Components page, uncheck everything except Session Recording Agent. Click Next.
  6. In the Agent page, enter the FQDN of the Session Recording server (or load balanced FQDN), click Test connection, and click Next.
  7. In the Summary page, click Install.
  8. In the Finish page, click Finish.
  9. Agent Installation can also be automated. See Automating installations at Citrix Docs.
  10. In the Start Menu is Session Recording Agent Properties.
  11. You can enable or disable session recording on this Agent.
  12. For MCS and PVS VDAs, see the GenRandomQMID.ps1 script at Install, upgrade, and uninstall Session Recording at Citrix Docs.
  13. Session Recording Agent might cause MCS Image Prep to fail. To work around this, set the Citrix Session Recording Agent service to Automatic (Delayed Start). Source = Todd Dunwoodie at Session Recording causes Image preparation finalization Failed error at Citrix Discussions.

Session Recording Player

Install the Player on any Windows 7 through Windows 10 desktop machine. 32-bit color depth is required. Because of the graphics requirements, don’t run the Player as a published application.

  1. Go to the downloaded XenApp/XenDesktop 7.14 ISO, and run AutoSelect.exe.
  2. If you see the Manage your delivery screen, click either XenApp or XenDesktop. The only difference is the product name shown in the installers.
  3. On the bottom right, click the Session Recording box.
  4. In the Licensing Agreement page, change the selection to I have read, understand, and accept the terms, and click Next.
  5. In the Core Components page, uncheck everything except Session Recording Player. Click Next.
  6. In the Summary page, click Install.
  7. In the Finish page, click Finish.
  8. From the Start Menu, run the Session Recording Player.
  9. Open the Tools menu, and click Options.
  10. On the Connections tab, click Add.
  11. Enter the FQDN of the Session Recording server (or load balanced FQDN).
  12. On the Cache tab you can adjust the client-side cache size. Click OK.
  13. Use the Search box to find recordings.
  14. Or you can go to Tools > Advanced Search.

  15. Once you find a recording, double-click it to play it.
  16. If you see a message about Citrix Client version incompatibility, see CTX206145 Error: “The Session Recording Player Cannot Play Back This File” to edit the Player’s SsRecPlayer.exe.config file to accept the newer version.  💡
  17. To skip spaces where no action occurred, open the Play menu, and click Fast Review Mode.
  18. You can add bookmarks by right-clicking in the viewer pane. Then you can skip to a bookmark by clicking the bookmark in the Events and Bookmarks pane.

Director Integration

  1. On the Director server, run command prompt elevated (as Administrator).
  2. Run C:\inetpub\wwwroot\Director\tools\DirectorConfig.exe /configsessionrecording
  3. Enter the Session Recording FQDN (or load balanced FQDN) when prompted.
  4. Enter 1 for HTTPS.
  5. Enter 443 as the port.
  6. In Director, when you view users or machines, you can change the Session Recording policy. These policy changes don’t apply until a new session is launched.
  7. If the Session Recording menu says N/A, then the Director user needs to be authorized in the Session Recording Authorization Console.

  8. If you use Director to enable or disable recording for a user or machine, rules are added to the active policy on the Session Recording server. They only take effect at next logon.

EUC Weekly Digest – May 27, 2017

Last Modified: May 27, 2017 @ 8:05 am

Here are some EUC items I found interesting last week. For more immediate updates, follow me at http://twitter.com/cstalhood.

For a list of updates at carlstalhood.com, see the Detailed Change Log.

 

Citrix

XenApp/XenDesktop

VDA

App Layering (Unidesk)

Director/Monitoring

WEM/Profile Management

StoreFront

Receiver

XenApp 6.5

NetScaler

NetScaler Gateway

XenServer

Citrix Cloud

VMware

EUC Weekly Digest – May 20, 2017

Last Modified: May 20, 2017 @ 7:07 am

Here are some EUC items I found interesting last week. For more immediate updates, follow me at http://twitter.com/cstalhood.

For a list of updates at carlstalhood.com, see the Detailed Change Log.

 

Citrix

XenApp/XenDesktop

App Layering / Unidesk

HDX

Provisioning Services

StoreFront

NetScaler

NetScaler Gateway

XenMobile

VMware

Microsoft

Other