Citrix Health Check

Last Modified: Feb 28, 2024 @ 2:29 pm

Health Check Overview

Health Checks review an environment for configurations that might cause future problems, not necessarily existing problems. Health Checks tend to focus on non-functional qualities like the following:

  • Availability
  • Security
  • Manageability
  • User Experience
  • Performance
  • Reliability

The rest of this article is an incomplete list of health check assertions for Citrix environments.

StoreFront Load Balancing

  • Citrix connectivity infrastructure design is documented: StoreFront, Gateways, ADCs, multiple datacenters, Delivery Controllers, SQL, etc.
    • Separate test Citrix environment has identical architecture as production: multiple data centers, high availability for all components, etc. – enables testing changes, including HA/DR changes, before performing those changes in production. Some upgrades are performed differently for HA/DR than for single components.
  • The FQDN that users use to access Citrix (e.g. https://citrix.company.com) resolves to a Load Balancing VIP, not a single server.
    • The FQDN automatically fails over (e.g. GSLB) to a VIP in a different data center if the primary data center is down.
  • The certificate for the SSL Load Balancing VIP is valid: trusted, not expired, matches FQDN, no errors in Chrome, etc. – see the sketch after this list.
    • Someone is responsible for ensuring the certificate is not expired and receives pending certificate expiration notifications.
  • The Load Balancing VIP sends SSL traffic to two or more StoreFront servers in the local data center – for redundancy.
    • The ADC-to-StoreFront server communication is SSL/TLS encrypted, not HTTP – this traffic contains user credentials.
  • The ADC monitor for the StoreFront servers is type STOREFRONT, or does a GET request to /Citrix/Store/discovery – other monitors might not detect stopped services.
  • X-Forwarded-For is configured in the Load Balancing Services (or Service Group) for Client IP header insertion.
  • Load balancing persistence is SOURCEIP with a timeout that is as long as the Receiver for Web timeout – COOKIEINSERT doesn’t work on all client devices.
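
A minimal PowerShell sketch for the FQDN and certificate checks above – citrix.company.com is a placeholder FQDN, and the 30-day warning threshold is an arbitrary example:

  $fqdn = 'citrix.company.com'   # placeholder – use your own Citrix FQDN

  # Confirm the FQDN resolves to the Load Balancing VIP, not a single server
  Resolve-DnsName -Name $fqdn -Type A

  # Retrieve the certificate presented on 443 and warn if it expires soon
  $callback = [System.Net.Security.RemoteCertificateValidationCallback]{ $true }
  $tcp = New-Object System.Net.Sockets.TcpClient($fqdn, 443)
  $ssl = New-Object System.Net.Security.SslStream($tcp.GetStream(), $false, $callback)
  $ssl.AuthenticateAsClient($fqdn)
  $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($ssl.RemoteCertificate)
  '{0} expires {1}' -f $cert.Subject, $cert.NotAfter
  if ($cert.NotAfter -lt (Get-Date).AddDays(30)) { Write-Warning 'Certificate expires within 30 days' }
  $ssl.Dispose(); $tcp.Close()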

StoreFront Servers

  • If the StoreFront servers are on the same hypervisor cluster, then anti-affinity is configured to keep them on separate hypervisor hosts.
  • StoreFront server VMs do not have any old snapshots – old snapshots slow down performance and consume disk space.
  • StoreFront version is patched for security vulnerabilities as of Jan 16, 2024.
    • Upgrades are performed in a separate test environment that has identical architecture as production before the updates are performed in production.
  • StoreFront Server Group members have less than 40 ms of latency between each other (with subscriptions disabled), or less than 3 ms (with subscriptions enabled).
  • StoreFront configuration is propagated to other servers in the StoreFront Server Group.
  • OS, Patch level and VM Configuration of all StoreFront Server Group members are identical.
  • No recent unknown errors in Event Viewer at Applications and Services -> Citrix Delivery Services – see the sketch after this list.
  • StoreFront Base URL is an https URL, not http. The FQDN resolves to the Load Balancing VIP, not a single server.
  • SSL certificates are installed on each StoreFront server and bound to IIS Default Web site. The SSL certificates are not expired.
  • C:\Users does not contain an excessive number of leftover user profiles – typically caused by users changing expired passwords. Delprof2.exe should be scheduled to delete these profiles.
  • If HTML5 Workspace app is enabled, then HTML5 Receiver is up to date – New versions are released at least monthly.
  • If Workspace app is stored on StoreFront servers, then the local Workspace app installers in C:\Program Files\Citrix\Receiver StoreFront\Receiver Clients are current.
  • If Favorites are enabled, then Favorites (aka Subscriptions) are replicated to a StoreFront Server Group in a different data center.
  • If Federated Authentication Service (FAS), then multiple FAS servers configured through Group Policy.
    • FAS Servers are the same version as StoreFront.
    • If the FAS servers are on the same hypervisor cluster, then anti-affinity is configured to keep them on separate hypervisor hosts.
    • FAS Get-FasAuthorizationCertificate shows registration certificate is OK and not MaintenanceDue.
    • FAS group policy .admx template is up to date in SYSVOL.
    • FAS User Rules restrict usage to just some StoreFront servers, some VDAs, and some users – not all.
    • Auto-enrollment is not enabled on the FAS certificate templates.
    • The Certificate Authority database is not excessively large.
    • If a Certificate Authority is dedicated to FAS, then only the Citrix templates are published – other templates (e.g. Domain Controller) are removed.
  • Task Manager shows sufficient CPU and Memory for each StoreFront server.
  • There’s sufficient free disk space – check C:\inetpub\logs
  • A monitoring tool alerts administrators of any StoreFront performance metric issue, availability issue (e.g. service stopped), and Event Log errors.
  • Logon Simulator runs periodically to verify that StoreFront is functional.
  • StoreFront Disaster Recovery procedure is documented and tested.
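
The event log item above can be checked from PowerShell on each StoreFront server. This is a sketch that assumes the log name is Citrix Delivery Services, as shown in Event Viewer:

  # Recent Errors and Warnings in the Citrix Delivery Services log
  Get-WinEvent -LogName 'Citrix Delivery Services' -MaxEvents 200 -ErrorAction SilentlyContinue |
      Where-Object { $_.LevelDisplayName -in 'Error', 'Warning' } |
      Select-Object TimeCreated, Id, ProviderName, Message |
      Format-Table -AutoSize -Wrap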

StoreFront Configuration

  • Only one store. Or every store but one is hidden – if multiple stores are advertised, then Workspace app will prompt the user to select a store.
  • Each Delivery Controller farm is configured with two or more Delivery Controllers – for redundancy.
    • Or Delivery Controller XML can be load balanced. If load balanced, then ADC monitor is of type CITRIX-XD-DDC – so ADC can detect Local Host Cache outages.
    • Prefer separate farms per data center instead of stretched single farms (with zones) across multiple data centers.
  • Transport Type for Delivery Controllers is https, not http – this traffic includes user credentials – see the sketch after this list.
  • Receiver for Web Session Timeout is not too short for user experience or too long for security.
  • Citrix Gateway configuration in StoreFront console:
    • The STAs in StoreFront match the STAs configured on the Citrix Gateway Virtual Server on the ADC appliances.
    • Session Reliability is enabled.
    • Callback URL is only needed for SmartAccess and Citrix FAS – Callback URL should be removed if it’s not needed.
    • Internal Beacon is only reachable internally.
    • External Beacon does not include citrix.com – ping.citrix.com is OK
  • HDX Optimal Routing can send ICA traffic through the Citrix Gateway that is closest to the VDA (i.e. farm).
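
The Delivery Controller and Transport Type items above can be reviewed with the StoreFront PowerShell SDK on a StoreFront server. This is a sketch only – the ImportModules.ps1 path and the Get-STFStoreService / Get-STFStoreFarm cmdlets are from the StoreFront 3.x+ SDK and may vary by version:

  # Load the StoreFront SDK (path may differ by StoreFront version)
  & 'C:\Program Files\Citrix\Receiver StoreFront\Scripts\ImportModules.ps1'

  # For each Store, list the configured farms: servers, transport type (should be HTTPS), and port
  Get-STFStoreService | ForEach-Object {
      Get-STFStoreFarm -StoreService $_ |
          Select-Object FarmName, @{n='Servers';e={$_.Servers -join ', '}}, TransportType, Port
  }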

Delivery Controllers

  • In CVAD 1906+, Citrix Scout Health Check does not show any errors or warnings.
  • If the Delivery Controller servers are on the same hypervisor cluster, then ensure anti-affinity is configured to keep them on separate hypervisor hosts.
  • Delivery Controller VMs do not have any old snapshots.
  • Delivery Controller version is an LTSR Cumulative Update version (e.g., 1912 CU7), or one of the two latest Current Release versions (e.g., 2305). No other versions are supported – the Citrix Product Matrix shows support dates.
    • Delivery Controller upgrades are performed in a separate test environment before being performed in production.
    • Citrix upgrades or updates are performed around twice per year.
  • Run Get-BrokerDBConnection to see the SQL connection string (see the sketch after this list). No SQL Express. For AlwaysOn Availability Group (AAG):
    • SQL String points to AAG Listener, not single node.
    • All AAG SQL nodes in one data center. For multiple data centers, prefer separate farms in each data center with local SQL.
    • SQL String contains MultiSubnetFailover.
    • Each SQL server has SQL Logins for all Delivery Controllers – SQL Logins usually don’t replicate between SQL nodes.
    • Prefer Synchronous Commit with Automatic Failover over Asynchronous replication.
    • AAG Dashboard in SQL Studio does not show any issues.
  • SQL databases for Site, Monitoring, and Log are separate, not combined.
  • SQL databases for Citrix are not excessively large. Database Backup tool is truncating the database logs.
  • SQL Servers have sufficient CPU/Memory to handle the Citrix SQL traffic. Monitoring tool alerts SQL DBAs of any performance or availability issues.
  • SQL Server version is supported by Citrix. https://support.citrix.com/article/CTX114501
  • Local Host Cache is enabled on the Delivery Controllers. Run Get-BrokerSite to confirm.
    • Delivery Controller virtual CPU allocation is 1 CPU socket with multiple cores – SQL Express LocalDB for Local Host Cache only runs on a single socket (up to four cores).
    • How are non-persistent virtual desktops handled during SQL outage?
    • In CVAD 1912 and newer, LocalDB is upgraded to SQL Server Express LocalDB 2017.
  • SQL Disaster Recovery plan is documented and tested.
  • SSL Certificates are installed on Delivery Controllers to encrypt XML traffic from StoreFront.
    • SSL certificates are bound to the IIS Default Web Site, or netsh http sslcert is used to perform the binding. The IIS binding does not include a hostname.
    • SSL certificate not expired.
  • Trust XML Requests is enabled for pass-through authentication, SmartAccess, FAS, etc. Run Get-BrokerSite to confirm.
  • Task Manager shows sufficient CPU and Memory for each Delivery Controller server.
  • A monitoring tool alerts administrators of any Delivery Controller performance metric issue, availability issue (e.g. service stopped), and Event Log errors.
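
Several of the items above can be confirmed from PowerShell on a Delivery Controller – for example, the SQL connection string, Local Host Cache, and Trust XML Requests. A minimal sketch:

  Add-PSSnapin Citrix* -ErrorAction SilentlyContinue   # Citrix snap-ins on a Delivery Controller

  # SQL connection string – should point to the AAG Listener and include MultiSubnetFailover
  Get-BrokerDBConnection

  # Local Host Cache and Trust XML Requests
  Get-BrokerSite | Select-Object LocalHostCacheEnabled, TrustRequestsSentToTheXmlServicePort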

Citrix Studio

  • Citrix Studio consoles installed on administrator machines are the same version as the Delivery Controllers.
  • Customer Experience Improvement Program is disabled in Citrix Studio > Configuration node > Product Support tab.
  • Licensing Model/Edition matches what you actually own.
  • Citrix Studio Administrators are periodically audited to ensure only authorized users are granted Studio access – see the sketch after this list.
    • Administrators are added as Active Directory Groups, not individual users.
  • Applications are published to Active Directory Groups, not individual users.
  • If App Groups, applications are published to only App Groups. Applications are not published to both App Groups and Delivery Groups.
  • Hypervisor connection uses a service account, not an admin account.
    • Hypervisor permissions for the service account are the minimum permissions required (custom role), not full hypervisor administrator.
  • Each Hosting Resource only has one datastore selected, not multiple datastores – Citrix MCS does not have a datastore “Rebalance” option. More datastores means more copies of master image snapshots, which means longer time to push out an updated Master image.
  • MCS Memory Caching Option is not enabled unless VDA is 1903 or newer – older VDAs, including 7.15, have a poorly performing MCSIO driver.
  • If MCS, VDA restarts are not performed from the hypervisor – a hypervisor-initiated restart does not reset the MCS machine like a Studio-initiated restart does.
  • StoreFront URLs are not assigned to Delivery Groups using Studio – instead use Workspace app group policy to assign StoreFront URL.
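
For the administrator and published-application audits above, a PowerShell sketch on a Delivery Controller can list who has Studio access and which applications are assigned to individual users instead of Active Directory Groups. Property names are from the Citrix Broker/DelegatedAdmin SDK and may vary slightly by version:

  Add-PSSnapin Citrix* -ErrorAction SilentlyContinue

  # Studio administrators and their role/scope assignments
  Get-AdminAdministrator | Select-Object Name, Enabled, Rights

  # Published applications and the users/groups they are assigned to
  Get-BrokerApplication | Select-Object Name, @{n='AssignedTo';e={$_.AssociatedUserNames -join ', '}}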

Citrix License Server

  • Citrix License Server is version 11.17.2.0 build 40000 or newer to resolve Apache vulnerabilities.
  • Citrix License Server is uploading telemetry every 90 days as required by Citrix. Check c:\Program Files (x86)\Citrix\Licensing\LS\resource\usage\last_compliance_upload
  • The licenses installed on Citrix License Server match the purchased licenses at https://citrix.com/account – some Citrix License Servers have too many licenses installed.
  • If multiple Citrix License Servers, installed license count across all License Servers does not exceed the purchased licenses shown at https://citrix.com/account
  • Administrators are not frequently clearing named user license assignments to simulate concurrent licensing – license assignments should only be cleared when the user permanently no longer uses Citrix.
  • Subscription Advantage dates are not expired – if expired, download new license files and install them.
  • Usage and Statistics tab is configured as intended in the Citrix Licensing Manager gear icon.
  • Citrix License Server Disaster Recovery procedure is documented and tested.

Remote Desktop Services (RDS) Licensing

  • If RDSH VDAs, two or more activated RDS Licensing servers.
  • RDS Licensing Server operating system version matches (or newer) the RDSH VDA operating system version – e.g. Windows 2019 RDS Licensing for Windows 2019 RDSH servers. Windows 2019 RDS Licensing also works with Windows 2016 RDSH servers.
  • In RD Licensing Manager, right-click server -> Review Configuration shows green checkmarks.
  • The combined licenses installed on all RDS license servers do not exceed the purchased licenses.
  • On RDSH VDAs, HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\LicenseServers shows two servers – see the sketch after this list.
    • LicensingMode = 4 (Per User mode), which is not enforced.
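
A short PowerShell sketch for the registry item above – it reads the policy key that the licensing GPO writes on an RDSH VDA (depending on OS version, the license server list may be stored as a value or a subkey):

  # RDS licensing policy settings on an RDSH VDA (written by Group Policy)
  Get-ItemProperty 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services' -ErrorAction SilentlyContinue |
      Select-Object LicenseServers, LicensingMode   # expect two license servers and LicensingMode = 4 (Per User)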

Citrix Director

  • Director version matches the Delivery Controller version.
  • If multiple Director servers:
    • Hypervisor Anti-affinity is configured.
    • Director Saved Filters are relocated to a UNC path instead of local C: drive.
  • Director server VMs do not have old snapshots – old snapshots slow down the servers and consume disk space.
  • SSL certificate is installed on Director servers.
    • Admins and Support teams always use https to access Director. IIS or load balancer redirects from http to https.
  • Director website is SSL load balanced.
    • SSL protocol, not http, between load balancer and Director servers – this traffic contains user credentials.
  • Director logon page auto-populates the domain name – for user convenience. Might have to reconfigure the domain name after every Director upgrade.
  • Citrix Policy Settings for Director:
    • Enable Process monitoring is enabled.
    • Enable monitoring of application failures is enabled.
  • If Citrix Virtual Apps and Desktops (CVAD) is Premium Edition:
    • Director Alerts are configured to email CVAD administrators.
    • Citrix ADM HDX Insight is integrated with Director. HTTPS protocol, not HTTP.
    • Probes are configured – Probe Agent version matches the Director version.
  • Help Desk knows how to use Citrix Director to support users.
  • Average logon durations are not excessive.
  • Repetitive issues (e.g. profile resets) are analyzed for root cause analysis and future prevention.

VDAs

  • Catalog design is documented – storage design, network design, multiple datacenters design, recovery design, etc.
  • VDA version matches the Delivery Controller version.
  • VDA Subnets are added to Active Directory Sites & Services.
    • Check LOGONSERVER variable after logon to confirm correct Domain Controller.
  • DHCP is highly available. VDA IP Subnet router forwards DHCP requests to more than one DHCP server. DHCP scope is replicated to more than one DHCP server.
    • DHCP Scope has sufficient address availability for VDAs.
  • DNS Reverse Lookup Zone with PTR records for the Virtual Apps and Desktops machines.
  • If KMS, slmgr.vbs /dlv shows a unique KMS CMID for each VDA machine – another option is Active Directory-based activation.
  • If persistent (dedicated) Catalogs:
    • The VDA version matches the Delivery Controller version – VDA updates should be automated (e.g. SCCM).
    • Dedicated Catalogs are created as Full Clones – Fast Clones cannot be moved to different storage or different hypervisor cluster.
    • Persistent desktops are backed up, replicated, etc. Recovery process is documented and tested.
    • Persistent desktop provisioning process is automated, preferably from a self-service portal.
  • No Personal vDisk – User Layers instead
  • No User Layers – slows down logons, and not all apps work – prefer Persistent Desktops instead.
    • User Layers are backed up, and restore process is documented and tested.
    • User Layers are stored on a clustered file server that can handle failover of always-open VHD files (e.g. Windows File Share with Continuous Availability) – Replication won’t help with file server outage and already open User Layers
  • Multiple department-specific master images instead of a single monolithic image – during user logon, monolithic images need to be dynamically customized for user requirements, which slows down logons.
    • No double-hop – slows down logons and increases complexity since double hop requires Workspace app and icon management on the first-hop VDA machine – prefer master images with every application installed locally instead of double-hop to published applications.
    • No Shortcut visibility management – slows down logons
    • No Elastic Layering – slows down logons
    • No App-V – slows down logons, and slows down machine performance
    • Master Image update process is automated – e.g. SCCM can push updates to master images
  • Catalogs are upgraded to latest Catalog version available.
  • VDA registrations are somewhat evenly distributed across the Delivery Controllers.
  • ListOfDDCs registry value on VDAs has two or more Delivery Controllers – see the sketch after this list.
  • Daily Health Check report shows registration status and maintenance mode status of every VDA machine.
  • RDSH Load Index Policy has not been modified from the default. CPU Metric is too volatile, and can cause a Denial of Service and uneven distribution of sessions. Current Load Index values should be almost the same on every RDSH VDA and not be anywhere near 10000.
  • In-guest monitoring agent shows VDA memory usage. Allocated VM Memory matches or exceeds memory Committed Bytes – Hypervisor monitoring can’t show actual VM memory usage.
  • RDSH VDAs are periodically restarted – net statistics workstation or net statistics server shows uptime.
    • In CVAD 1909+, MaxDelayMins is configured in Get-BrokerRebootScheduleV2.
  • For EDT protocol, MtuDiscovery is enabled on the VDAs. MtuDiscovery requires VDAs version 1912 and newer.
  • If VDAs are hosted in the cloud, PowerScale controls VDA power management.
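
A sketch for the ListOfDDCs and registration items above – the registry check runs on a VDA, and the Get-BrokerMachine summary runs on a Delivery Controller:

  # On a VDA: Delivery Controllers configured for registration
  (Get-ItemProperty 'HKLM:\SOFTWARE\Citrix\VirtualDesktopAgent' -ErrorAction SilentlyContinue).ListOfDDCs

  # On a Delivery Controller: registration and maintenance mode summary for every machine
  Add-PSSnapin Citrix* -ErrorAction SilentlyContinue
  Get-BrokerMachine -MaxRecordCount 100000 | Group-Object RegistrationState | Select-Object Name, Count
  Get-BrokerMachine -MaxRecordCount 100000 -InMaintenanceMode $true |
      Select-Object MachineName, CatalogName, InMaintenanceMode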

VDAs – Hypervisor Hardware Clusters

  • Desktop VDAs are in their own hypervisor cluster that does not contain any Server virtual machines – avoids Windows Server licensing.
    • Hypervisor clusters with Windows Servers have proper Windows Server licensing.
  • Hypervisor admins don’t perform any hypervisor updates without first reviewing Citrix’s Supported Hypervisors article.
  • VDA vCenter is separate from non-VDA vCenter – allows non-VDA vCenter to be upgraded without affecting Citrix.
  • Hypervisor performance is monitored and alerted: CPU contention (aka CPU Ready Percentage), disk latency, CPU Usage, etc.
  • Capacity planning tool warns admins when more hypervisor hardware is needed.
  • vSphere clusters have N+1 or N+2 extra capacity for redundancy.
  • HA and DRS are enabled on vSphere cluster according to design – not all designs use these features
  • CPU and Memory consumption are evenly distributed across the hypervisor cluster
  • If VMFS6 datastores, vSphere 6.7 Update 3 is installed – see release notes
  • NTP is configured and running on hypervisor hosts.
  • Hypervisor hosts have High performance BIOS settings.
  • In larger environments, dedicated VLAN(s) for VDAs – not shared with non-Citrix workloads
    • MCS and PVS require DHCP
  • Network Uplinks are redundant and have sufficient capacity
    • ESXi Management/Vmotion/Storage traffic are separate VLANs from the VDA VLANs
    • Storage multipathing is functioning
  • NVIDIA vGPU software is current on hypervisor hosts and virtual machines – vGPU Manager 11.0+ supports guest driver versions one major version back (e.g., 10.0) – February 2024 security update.
    • The newest hypervisors can vMotion GPU-configured virtual machines – vgpu.hotmigrate configured in vCenter Advanced Settings. DRS set to Manual or Partially Automated.
    • NVIDIA in-guest vGPU Driver is installed before the VDA is installed – otherwise HDX 3D Pro will not work.
    • ESXi Host Graphics settings are set to Shared Direct, with VM assignment spread across GPUs.
    • NVIDIA license servers are redundant (failover support), or in the cloud.

VDAs – Virtual Machine Hardware (vSphere)

  • Network Interface type is VMXNET3, not E1000.
  • devices.hotplug=false is configured in Virtual Machine Configuration Settings.
  • If disk space is a concern, virtual machine memory is reserved to reduce .vswp file size.
  • If Citrix App Layering:
    • Paravirtual controller is not added.
    • Boot firmware is BIOS, not EFI.
  • Windows 10 version is supported by Citrix VDA version, and supported by App Layering version.
    • Windows 11 is supported with VDA 2109 and newer. It is not supported by VDA 1912.
  • VMware Tools version is current.

VDAs – Master Image Build

  • Master Image build process is documented.
  • Master Image virtual machine was built from scratch – not converted from a physical machine.
  • Security scan of the VDA Master Images shows compliance with enterprise security requirements.
  • VDA version resolves vulnerability – 2305, 2203 CU3, or 1912 CU7
  • Master Image updates:
    • Master Image maintenance is automated – e.g., SCCM can push updates to Master Images. A script can push Master Images to Catalogs.
    • Software Deployment team notifies the Master Image maintainers when applications or Windows require an update.
    • Master Image is sealed before shutdown – e.g., antivirus is generalized, SCCM Client is generalized – sealing should be scripted – Base Image Script Framework (BIS-F) can automate this
    • Master Image updates are tested before deployed to production. QA testing. Canary testing.
    • Master Image snapshots are deleted after a period of time.
  • Profile Management is patched to resolve Local privilege escalation vulnerability – 2106 Hotfix 1, 1912 CU3 Hotfix 1, or 7.15 CU7 Hotfix 1. 1912 CU4 includes the fix.
  • Antivirus is installed. Antivirus is optimized for non-persistent machines (aka VDI).
  • Other IT agents (e.g., software auditing, SCCM Agent) are optimized for non-persistent machines.
  • Local Groups (see the sketch after this list):
    • Administrators group does not contain any non-administrators.
    • Direct Access Users group only contains authorized RDP users.
  • Citrix Optimizer or similar has removed Windows 10 Store Apps.
  • Windows Default profile was not modified – instead use group policy to control Windows appearance.
  • Windows Updates are current (i.e., last install date is within the last 60 days).
  • C: drive permissions are changed so Users can’t create folders on root of C: drive.
  • Power management is set to High Performance with no sleep timers.
  • If Citrix Provisioning:
    • Pagefile is shrunk so it fits on PVS cache disk – there’s no need to move the pagefile since PVS will move it for you. Just make sure it’s small.
    • Event Logs are moved to PVS cache disk.
  • Customer Experience Improvement Program is disabled in VDA registry.
  • FSLogix is a recent version – FSLogix version 2.9.7979.62170 resolves a security vulnerability in Cloud Cache.
  • Office 365 Shared Computer Activation is enabled.
    • FSLogix is implemented for Outlook search roaming.
  • Microsoft Teams is installed using machine-wide installer.
    • Microsoft Teams machine-wide installation is periodically manually updated – there’s no auto-update.
    • Teams cache folders excluded from roaming profiles.
  • OneDrive Files On-Demand is only installed on Windows Server 2019 and newer, or Windows 10 1709 and newer.
    • OneDrive is installed using machine-wide installer – check C:\Program Files (x86)\OneDrive
    • FSLogix saves OneDrive cache.
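
For the Local Groups item above, a quick sketch using the LocalAccounts module in Windows PowerShell 5.1 lists the members of the relevant groups on the master image; the Direct Access Users group is created by the VDA installer:

  # Administrators group should not contain any non-administrators
  Get-LocalGroupMember -Group 'Administrators' | Select-Object Name, ObjectClass, PrincipalSource

  # Direct Access Users should only contain authorized RDP users
  Get-LocalGroupMember -Group 'Direct Access Users' -ErrorAction SilentlyContinue | Select-Object Name, ObjectClass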

Citrix App Layering

  • Prefer automated (e.g. SCCM) Master Image updates over manual App Layering layer updates – if SCCM is mature, then there’s no need for App Layering.
  • Prefer SCCM-managed dedicated desktops over User Layers – SCCM is a known technology. User Layers are proprietary to Citrix and might not support every application.
  • Enterprise Layer Manager (ELM) version is current – ELM updates are required to support newer Citrix Virtual Apps and Desktops (CVAD) and newer Windows 10. There’s no LTSR version of ELM.
  • Citrix Provisioning Agent version matches the ELM version.
  • Directory Junction Bind account is a service account, not a regular user whose password expires.
    • LDAP is Secure (Use SSL).
  • Administrator role membership is periodically audited to ensure only authorized users are granted access.
  • ELM is backed up. Or layers are periodically exported from ELM.
  • Group Policy controls membership of local groups in VDA machines – e.g. add Domain Admins to local Administrators group.
  • Antivirus is configured properly for Layering.
  • Hypervisor Connector uses a service account with limited permissions.
  • Connector cache is enabled to speed up layering operations.
  • Offload Compositing is enabled in the Connectors.
  • File servers hosting Elastic Layers and User Layers are monitored for performance issues and capacity planning.
  • User Layers are backed up, replicated, etc.

Citrix Provisioning

Provisioning Servers:

  • Provisioning Servers version matches the Delivery Controller version.
  • Multiple Provisioning Servers for High Availability.
    • Hypervisor Anti-affinity is configured.
  • Sufficient RAM for vDisk caching in memory – around 2-3 GB of memory per active vDisk.
  • Only one NIC per Provisioning Server – simplifies the configuration.
  • Server Bootstrap has multiple Provisioning Servers listed.
  • Threads times Ports are sufficient for the number of target devices.
  • vDisk Boot Menu is disabled in the registry – enables maintenance mode Target Devices to automatically boot from maintenance mode vDisks.
  • Antivirus has exclusions for Citrix Provisioning.
  • Provisioning Server performance metrics are monitored and alerted.
    • NIC throughput is not saturated.

Provisioning Farm Properties:

  • Offline database is enabled.
  • Auditing is enabled.
  • Administrators list only contains authorized administrators, preferably from an Active Directory Group.
  • Customer Experience Improvement Program is disabled.
  • For AlwaysOn Availability Group, MultiSubnetFailover is configured in the database connection string.

vDisks:

  • If local storage, vDisk files are identical on all Provisioning Servers.
  • vDisk files are VHDX, not VHD – faster version merging.
  • vDisks are sized dynamic, not fixed – Saves disk space. Standard Mode vDisks don’t grow so no performance impact.
  • vDisk files are defragmented.
  • vDisk files are backed up.
  • vDisk updates are automated.

Target Devices:

  • Target Device Boot Method is highly available – Target Devices are on the same subnet as the Provisioning Servers. Or DHCP Option 66 with TFTP Load Balancing. Or Boot ISO/Boot Partition has multiple Provisioning Server addresses.
    • DHCP is highly available. Subnet’s router forwards DHCP requests to multiple DHCP servers. Replicated DHCP scope.
    • Use PXEChecker to verify multiple TFTP responses.
  • vDisk Write cache is configured for Target Device RAM with overflow to disk – health check script should periodically verify this.
  • WriteCache folders on Provisioning Servers are empty – no server-side caching.
  • If KMS, slmgr.vbs /dlv shows a unique KMS CMID for each Target Device machine – another option is Active Directory-based activation.
  • Target Devices are evenly distributed across multiple Provisioning Servers (see the sketch after this list) – ensures that High Availability is working correctly – stop the Stream Service to confirm HA.
  • System Reserved Partition is removed from inside vDisk.
  • VMware Tools in Target Devices (vDisks) is up to date.
  • Target Device Software version matches the Citrix Provisioning version.
  • Target Device status shows low number of retries.
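
A hedged sketch for the distribution item above, using the Citrix Provisioning PowerShell SDK to count Target Devices connected to each Provisioning Server. The snap-in DLL path, SOAP port, and property names are typical but may differ by Provisioning version:

  # Load the Citrix Provisioning PowerShell snap-in on a Provisioning Server (path may differ by version)
  Import-Module 'C:\Program Files\Citrix\Provisioning Services Console\Citrix.PVS.SnapIn.dll'
  Set-PvsConnection -Server localhost -Port 54321   # SOAP service

  # Count active Target Devices per Provisioning Server
  Get-PvsDeviceInfo | Where-Object { $_.Active } | Group-Object ServerName | Select-Object Name, Count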

Group Policies and Active Directory

  • VDAs are placed in VDA-only OUs, no users – group policies apply to VDAs without affecting physical endpoints.
    • Separate OUs per Delivery Group – different group policies apply to different Delivery Groups.
  • Master Images are located in VDA OUs – computer-level GPO settings apply to the Master Images to avoid GPO timing issues on linked clones.
  • Block Inheritance OUs and Enforced GPOs are minimized.
  • .admx templates in SYSVOL > PolicyDefinitions are current – Windows 10 templates, Office templates, Citrix templates, etc.
  • Group Policy Loopback Processing Mode is enabled.
  • Duplicate, conflicting GPO settings are minimized – e.g. Group Policy Loopback Processing Mode is sometimes enabled in several GPOs.
    • Run Group Policy Results to show the actual GPO settings that applied to a specific session (see the sketch after this list) – compare with design.
  • Lockdown GPO applies to non-administrators that log into VDA machines. Lockdown GPO doesn’t apply to administrators.
  • Remote Desktop Session Host (RDSH) session timeouts (idle, disconnect) are configured in a Microsoft GPO.
  • AppLocker or similar prevents users from running unauthorized executables (e.g. ransomware).
  • Initial application configuration is automated using group policy – e.g. auto configure application database connections, remove first time usage prompts.
  • Group Policy changes are tested in separate Test GPOs and separate Test VDAs before applying to production.
  • Monitoring tool shows group policy processing duration during logon.
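
For the Group Policy Results item above, the built-in gpresult command can generate an HTML report for a test session; the computer name and user below are hypothetical:

  # Report for the current user on the current VDA
  gpresult /h "$env:TEMP\gpo-report.html" /f

  # Report for a specific VDA and test user (hypothetical names)
  gpresult /s VDA01 /user CONTOSO\testuser /h C:\Temp\VDA01-testuser.html /f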

Citrix Policies

  • Citrix Policies are configured in a Group Policy Object, not in Citrix Studio – a GPO can apply to multiple Citrix Virtual Apps and Desktops (CVAD) farms in multiple datacenters. Citrix Studio is single farm only.
    • Citrix Policies are not configured in both Citrix Studio and Group Policy – avoids confusion over which setting wins
    • If configured in Citrix Studio, and if multiple farms/sites, then Citrix Policy settings are identical in all farms/sites.
  • Citrix Group Policy Management plug-in on GPMC machines is same version included with CVAD ISO.
  • Unfiltered policy is on the bottom of the list (lowest priority) – most specific filters on top, least specific filters on bottom.
  • Client drive mapping, client clipboard, client printing, drag and drop, and client USB are disabled when connecting from external (e.g. SmartAccess) – only enabled by exception.
  • Client printing is set to Use Universal Print Driver only – avoids installing print drivers on VDA machines.
  • Audio is set to Medium quality – High Quality uses more bandwidth than Medium Quality.
  • Time zone redirection is configured in both Citrix Policy and RDSH Microsoft Group Policy.
  • For HDX Insight, ICA Round-Trip Time policy is enabled.
  • Visual quality and video codec settings are not modified from the defaults.
    • Legacy Graphics Mode is disabled.
  • Adaptive Transport (EDT) is enabled – it’s default disabled in 7.15. MTU might need to be decreased.
    • MtuDiscovery is enabled on the VDAs. MtuDiscovery requires VDAs version 1912 and newer.
  • Session Reliability is not disabled.
  • RDSH Session Timers are configured in Microsoft GPO, not Citrix Policy – Citrix Policy setting description shows if setting applies to Server OS or not.

Citrix Workspace Environment Management (WEM)

  • Prefer Group Policies over WEM – WEM requires extra infrastructure, extra learning, extra administration, and extra support. Some WEM user settings are per-machine (per configuration set) only. WEM can’t replace group policies since there’s currently no .admx support.
    • Citrix Profile Management and Microsoft Folder Redirection are configured using Microsoft Group Policy, not WEM – Group Policies are well known. WEM is proprietary to Citrix and requires WEM skills to troubleshoot.
  • WEM is within two versions of the latest – there’s no LTSR version of WEM.
    • WEM Consoles and WEM Agents match WEM Server version.
  • Multiple load balanced WEM Servers for High Availability.
    • If multiple WEM servers are on the same hypervisor cluster, then Hypervisor anti-affinity is configured for the multiple WEM servers.
    • WEM Agents point to WEM Server load balanced FQDN, not individual server.
    • WEM Console points to single WEM Server, not load balanced FQDN.
  • WEM Brokers are close to the VDAs – WEM configuration can be exported/imported into WEM implementations in multiple data centers.
  • WEM Database is hosted on an AlwaysOn Availability Group or other Highly Available SQL solution.
    • SQL database is backed up. SQL database recovery is documented and tested.
  • In WEM 1909+, the Infrastructure Service setting Enable performance tuning for Windows Communication Framework is enabled and set to the number of concurrent WEM Agents that will connect to this one WEM server. Maximum value is 3000.
  • Antivirus exclusions are configured for Citrix WEM.
  • WEM .admx group policy template in SYSVOL > PolicyDefinitions is updated whenever WEM Servers are updated.
  • Settings are in WEM, or Group Policy, but not both – helps troubleshooting. Reduces confusion.
  • Bypass ie4uinit Check is enabled (Advanced Settings > Service Options) – for faster logons.
  • Drive mappings and printer mappings are moved to WEM and processed asynchronously (Advanced Settings > Agent Options).
  • Check Application Existence is enabled (Advanced Settings > Agent Options) – doesn’t create shortcut unless application exists
  • CPU Optimization is enabled – Memory management trades memory for disk; which is cheaper? Process exclusions might be needed.
    • In WEM 1909 and newer, CPU Spike Protection = Auto instead of Customize.
  • Fast logoff is enabled.
  • Unused action types are disabled from processing (Advanced Settings > Main Configuration) – speeds up logons.
  • Run Once enabled for Actions and scenarios that support it – speeds up logons.
  • WEM Agent Offline mode is enabled.
  • Computer startup script refreshes WEM Agent cache on each VDA reboot.
    • Script has correct Agent installation path and correct service name since they changed in 1909 and newer.
  • WEM Logs are reviewed for problems – enable debug logging. Look for Active Directory timeouts.
  • WEM Server performance is monitored for metric thresholds and future capacity issues.
  • WEM Server recovery is documented and tested.

Citrix Profile Management and Folder Redirection

  • No mandatory profiles on Windows 10 – benchmarks show slower performance.
  • Profile Management is configured in Group Policy, not Citrix Policy or Citrix WEM – Group Policy is the most reliable and most well-known option.
  • Profile file share:
    • File server is close to the VDAs – users log into VDAs that are closest to the file server (aka home site).
    • File share is highly available.
    • Caching is disabled on the file share.
    • No DFS multi-master replication. Single target only – neither Citrix nor Microsoft support merge replication.
    • Profiles are backed up and/or replicated. Recovery process is documented and tested.
    • Different profile folders for different operating system versions and/or different Delivery Groups.
    • NTFS permissions of individual user folders in the file share only grant access to the one user – no Users, no Domain Users, and no Authenticated Users.
    • Use TreeSize or similar to see profile sizes (see the sketch after this list) – adjust profile exclusions if too big.
    • Antivirus is not slowing down profile file transfer performance – time how long it takes to copy a profile folder to the local machine.
    • File servers are monitored for performance issues, including disk latency and free disk space.
  • Profile Management .admx file in SYSVOL > PolicyDefinitions matches the VDA version (or date).
  • Profile Management logs are stored on UNC share instead of local C: drive, especially if the VDAs are non-persistent.
    • Only Domain Computers have Modify permission to the Logs share – Users don’t need any permission.
  • Profile Management logs contain at least a few days of logons – if only a few minutes, then too much information is being logged and Log Settings GPO setting should be modified.
  • Profile streaming is enabled – speeds up logons.
  • Active Write Back is disabled – places extra load on file servers for not much benefit.
  • Customer Experience Improvement Program is disabled.
  • Locally cached profiles are deleted at logoff from RDSH machines that don’t reboot often.
  • No Start Menu roaming issues – might need ResetCache registry value.
  • Microsoft FSLogix is implemented for Outlook Search roaming – better than UPM’s Outlook search roaming.
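
For the profile size item above, a PowerShell sketch can substitute for TreeSize; the UNC path is a placeholder for your Citrix Profile Management store:

  # Per-user profile folder sizes on the profile share (placeholder path), largest first
  Get-ChildItem '\\fileserver\CtxProfiles$' -Directory | ForEach-Object {
      $bytes = (Get-ChildItem $_.FullName -Recurse -File -ErrorAction SilentlyContinue |
                Measure-Object -Property Length -Sum).Sum
      [pscustomobject]@{ User = $_.Name; SizeMB = [math]::Round($bytes / 1MB, 1) }
  } | Sort-Object SizeMB -Descending | Select-Object -First 25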

Folder Redirection:

  • Folder Redirection is configured in Microsoft GPO settings, not in Citrix Profile Management settings – Microsoft GPO configuration is most reliable, most known, and can migrate existing files.
  • No AppData redirection – slows down applications.
  • “Grant the user exclusive rights” option is unchecked – allows administrators to access redirected profile folders.
  • Folder Redirection file share:
    • File share is highly available.
    • No DFS multi-master replication. Single target only – neither Citrix nor Microsoft support merge replication.
    • Redirected Folders are backed up and/or replicated. Recovery process is documented and tested.
    • NTFS permissions of individual user folders in the file share only grant access to the one user – no Users, no Domain Users, and no Authenticated Users.
    • Antivirus is not slowing down folder redirection performance.
    • File servers are monitored for performance issues, including disk latency and free disk space.

Home Directories:

  • File server is close to the VDAs – users log into VDAs that are closest to the file server (aka home site).
  • File share is highly available.
  • No DFS multi-master replication. Single target only – neither Citrix nor Microsoft support merge replication.
  • Home Directories are backed up and/or replicated. Recovery process is documented and tested.
  • NTFS permissions of individual user folders in the file share only grant access to the one user – no Users, no Domain Users, and no Authenticated Users.
  • Antivirus is not slowing down file transfer performance – time how long it takes to copy a Home Directory folder to the local machine.
  • File servers are monitored for performance issues, including disk latency and free disk space.

Endpoint Devices

  • Prefer Windows 10 endpoints over thin clients – thin clients don’t support all Citrix functionality (e.g. local printing, browser content redirection). ThinKiosk can lock down Windows 10 endpoints.
  • Newest VDAs and newest Workspace apps have better WAN performance than LTSR 7.15.
  • Browser Content Redirection offloads video (e.g. YouTube) from VDAs to endpoint – reduces CPU consumption in the data center.
  • Workspace app is periodically (e.g., twice per year) updated by endpoint management team. 
  • Workspace app (aka Receiver) ADMX templates in SYSVOL > PolicyDefinitions are current.
  • Group Policy adds StoreFront URL to Local Intranet zone.
  • Group Policy pushes StoreFront URL to Workspace app – so users don’t have to enter the URL.
  • Pass-through authentication is enabled for internal PCs – SSON Configuration Checker can verify proper configuration.
  • HKCU\Software\Citrix\Dazzle\Sites\store\type shows DS, not PNA – store added as Delivery Services (StoreFront), not PNAgent (legacy) – see the sketch after this list.
  • Internal Beacon at HKEY_CURRENT_USER\SOFTWARE\Citrix\Receiver\SR\Store\#\Beacons\Internal\Addr0 is internally reachable only – not reachable externally.
  • External Beacon at HKEY_CURRENT_USER\SOFTWARE\Citrix\Receiver\SR\Store\#\Beacons\External does not include citrix.com or ping.citrix.com.
  • EDT protocol (aka Adaptive Transport) is enabled. Director shows HDX protocol as UDP – Remote Display Analyzer can analyze problems with the graphics/codec.
  • HDX Insight: Newest VDAs and newest Workspace app have less AppFlow CPU impact on ADC than LTSR 7.15 VDAs.
  • Google Chrome detects Workspace app properly, especially through Gateway – requires the Gateway ADC to be able to resolve the StoreFront Base URL to the StoreFront IP.
    • Chrome 77+ has receiver://* added to URL whitelist so the user isn’t prompted to open Workspace app
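
The store type and beacon items above can be read from the registry on an endpoint with a short sketch; the key layout follows the paths listed above and may vary by Workspace app version:

  # Store type per site: DS = Delivery Services (StoreFront), PNA = legacy PNAgent
  Get-ChildItem 'HKCU:\SOFTWARE\Citrix\Dazzle\Sites' -ErrorAction SilentlyContinue |
      Get-ItemProperty | Select-Object PSChildName, type

  # Internal and External beacon addresses configured by Workspace app
  Get-ChildItem 'HKCU:\SOFTWARE\Citrix\Receiver\SR\Store' -Recurse -ErrorAction SilentlyContinue |
      Where-Object { $_.Name -like '*Beacons*' } | Get-ItemProperty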

Citrix NetScaler ADC

  • NetScaler ADC Admins have subscribed to Citrix Security Bulletins at https://support.citrix.com/user/alerts
  • NetScaler ADC firmware build is patched for vulnerabilities as of Jan 16, 2024.
  • NetScaler ADC firmware updates are tested on separate test ADC appliances before performed in production. Test ADC appliances have test VIPs – application owners can test their VIPs on test ADC before firmware is upgraded in production.
  • NetScaler ADC VPX on vSphere:
    • NetScaler VPX NICs are VMXNET3, not E1000.
    • NetScaler is version that supports vSphere version.
    • DRS Cluster Anti-affinity is configured for the VPX appliances in the same HA pair.
    • CPU/Memory are reserved at hypervisor. If not reserved at hypervisor, then Yield CPU is not enabled so that VPX can reserve CPU itself.
  • NetScaler ADC license does not expire any time soon – check date inside license files at /nsconfig/license
    • ADM Pooled Licensing has license alerts enabled for email notifications.
  • Physical NetScaler ADC:
    • LOM port is connected and configured.
    • LOM nsroot password is changed from the default.
    • No VLAN is connected to multiple active interfaces unless those interfaces are in a port channel.
  • ADC nsroot password is not nsroot. nsroot password is managed by Privileged Identity Management tool. Admins don’t use nsroot to login.
  • Policies are Advanced Expressions instead of Classic Expressions. (source = CTX296948)
  • Management authentication is configured for external authentication server, typically LDAP.
    • LDAP is load balanced instead of multiple LDAP Policies to individual LDAP servers – avoids premature account lockout.
    • LDAP is encrypted: LDAPS on port 636.
    • LDAP Bind account is a service account – not a regular user whose password expires.
    • LDAP Search Filter only allows ADC Admins Active Directory Group to authenticate.
  • If TACACS, firmware is 12.0 build 57 or newer to prevent TACACS Accounting from blocking AAA.
  • nsroot account has external authentication disabled.
  • No local NetScaler ADC accounts except nsroot.
  • NTP and Time Zone are configured.
  • Syslog is configured to send logs to external SIEM, especially if ADC is performing authentication.
  • SNMP Traps are sent to Citrix ADM appliance.
    • Thresholds are configured for CPU and Memory alarms.
  • Customer Experience Improvement Program (CUXIP) is disabled.
  • Recommended TCP Profile Settings are configured.
  • Drop Invalid HTTP requests is enabled in HTTP global settings.
  • Secure Access Only is enabled on all NSIPs and all management-enabled SNIPs – check both nodes of High Availability pair.
    • Management certificate has no certificate errors.
  • Networking:
    • NetScaler ADC VLANs only have one interface (or one channel) – Best Practices at Citrix Docs.
    • If Dedicated Management Network, Policy Based Routes (PBR) are configured for NSIP reply traffic and NSIP-initiated traffic.
    • Unused network interfaces are disabled.
    • ADC instance is connected to only one security zone – if connected to multiple security zones, then a firewall is bypassed.
    • Default route should be Internet facing, or a data VLAN – not NSIP VLAN.
    • Only one default route – extra default routes can come from HA pairing or hardware migration.
  • Root DNS server address “h.root-servers.net” is set to 198.97.190.53 – might be old address due to older firmware
  • Unused NetScaler ADC configurations are removed – unused server objects, unused policies, etc.
  • Citrix ADM monitors and backs up the ADC appliances.
  • ADC Dashboard shows that CPU, Memory, and Throughput have not exceeded appliance capacity or appliance licensing – see the NITRO sketch after this list.
  • /var/core and /var/crash do not have recent crash dumps.
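
The dashboard item above can also be pulled through the NITRO REST API, which is convenient for periodic health reports. This is a sketch only – the NSIP is a placeholder, it assumes a trusted management certificate, and counter names can differ between firmware versions:

  $nsip = '192.0.2.10'            # placeholder NSIP
  $cred = Get-Credential          # ADC management account (not nsroot)
  $hdrs = @{ 'X-NITRO-USER' = $cred.UserName; 'X-NITRO-PASS' = $cred.GetNetworkCredential().Password }

  # Firmware version
  (Invoke-RestMethod "https://$nsip/nitro/v1/config/nsversion" -Headers $hdrs).nsversion

  # CPU, memory, and throughput counters
  (Invoke-RestMethod "https://$nsip/nitro/v1/stat/ns" -Headers $hdrs).ns |
      Select-Object cpuusagepcnt, memusagepcnt, rxmbitsrate, txmbitsrate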

NetScaler ADC High Availability Pair

  • Firmware build is identical on both nodes.
  • Installed Licenses are identical on both nodes.
  • NTP and time zones are configured on both appliances – Configuration node shows System Time.
  • Unused interfaces are disabled.
  • HA is synchronizing without error.
  • Both HA nodes are set to ENABLED – not STAYPRIMARY and/or STAYSECONDARY.
  • Fail-safe mode is enabled.
  • “show ha node” shows heartbeats across all interfaces – no “interfaces on which heartbeats are not seen”.
  • High Availability failover has been tested, including RADIUS authentication, which might come from a different source IP.
  • Sync VLAN configured to enable ISSU on ADC 13.0+

NetScaler ADC SDX

  • LOM port is connected and configured.
    • LOM nsroot password is not nsroot.
  • No hardware problems shown on SDX SVM dashboard page.
  • SDX firmware is current – should be same or newer than the VPX firmware.
  • SDX SVM nsroot password is not nsroot. nsroot password is complex. Admins don’t use nsroot to login.
  • Management authentication is configured for external authentication server, typically LDAP.
    • LDAP is load balanced instead of multiple LDAP Policies to individual LDAP servers – avoids premature account lockout.
    • LDAP is encrypted: LDAPS on port 636.
    • LDAP Bind account is a service account – not a regular user whose password expires
      • LDAP Bind account should be a regular domain account, not a Domain Admin.
      • LDAP Bind account should be dedicated to LDAP Bind and not used for anything else.
    • LDAP Search Filter only allows ADC SDX Admins Active Directory Group to authenticate.
  • No local accounts except nsroot.
  • No certificate errors when accessing SVM management using https.
    • HTTPS is forced in System Settings – HTTP is not allowed.
  • Multiple DNS servers are configured in Networking Configuration – initial setup only asks for one DNS server.
  • Channels are created at SDX SVM instead of inside VPX instances.
  • NTP is configured and enabled.
  • Syslog is configured.
  • SNMP traps are sent to Citrix ADM.
  • The number of SDX instance licenses installed matches what’s owned at https://citrix.com/account
  • SDX SVM Backups are configured with External Transfer – or download periodically – or ADM.
  • VPX Instances:
    • Platinum Edition license is assigned to instances.
    • SSL Chips are assigned to VPX instances.
    • All SDX hardware is allocated to VPX instances – If not, why not?
    • Production instances typically have Dedicated CPU cores. Test/Dev instances typically have Shared CPU.
    • VLANs are specified inside VPX instances instead of at instance properties on SDX Management Service – avoids reboot if you need to change the VLAN configuration.
    • No VMACs in instance interface settings.

NetScaler ADC Load Balancing and SSL

  • Load Balancing configurations are documented.
  • Monitors do more than just telnet – e.g. LDAP monitor performs LDAP query.
    • LDAP monitor bind account uses service account, not domain admin.
    • LDAP monitor is filtered to cn=builtin – to reduce result size.
    • RADIUS monitor looks for response code 2 or 3.
  • If multiple Virtual Servers for multiple ports on the same VIP, configure Persistency Group – e.g. Horizon Load Balancing.
  • Rewrite policies remove web server header information (Server, X-Powered-By, etc.)
  • SSL Labs SSL Server Test shows A or A+ grade for all Internet-facing SSL vServers.
  • Redirect Virtual Servers are UP (Responder method) instead of DOWN (Backup URL method).
  • Custom (non-default) ciphers are bound to every SSL Virtual Server – see Citrix Networking SSL / TLS Best Practices.
  • SSL v3 and TLS v1.0 are disabled on every SSL Virtual Server.
  • Deny SSL Renegotiation is set to NONSECURE: configured globally, or in SSL Profiles (including the default profile).
  • Root certificate is not linked to intermediate certificate.
  • Certificates are not expired – see the sketch after this list.
  • SSL Services do not have “-TLS11 disabled” or “-TLS12 disabled” – might be disabled from older firmware.
  • ADM alerts ADC administrators when certificates are soon to expire.
  • ADM Analytics is enabled for the HTTP Virtual Servers.
    • ADM Web Insight is viewed.
  • Bot Management (13.0 build 41+) and/or Web App Firewall are configured if ADC Premium Edition.
    • ADM Security Insight is enabled and viewed.
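
Certificate expiration can be reported across the appliance with a similar NITRO sketch (same placeholder NSIP and header authentication as the earlier ADC sketch); the sslcertkey object exposes days to expiration:

  $nsip = '192.0.2.10'            # placeholder NSIP
  $cred = Get-Credential
  $hdrs = @{ 'X-NITRO-USER' = $cred.UserName; 'X-NITRO-PASS' = $cred.GetNetworkCredential().Password }

  # Certificates sorted by days to expiration
  (Invoke-RestMethod "https://$nsip/nitro/v1/config/sslcertkey" -Headers $hdrs).sslcertkey |
      Select-Object certkey, status, daystoexpiration | Sort-Object daystoexpiration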

Citrix NetScaler ADM

  • NetScaler ADM exists and manages all ADC appliances.
    • Prompt credentials for instance login is enabled in ADM System Settings – if ADM does Single Sign-on to instances, then all instance changes are logged as nsroot instead of ADM user.
  • NetScaler ADM firmware version is current.
    • ADM Agents and DR nodes have same firmware version as ADM – check /var/mps/log/install_state
  • Two DNS servers are configured in ADM Network Configuration – initial setup only asks for one DNS server.
  • Two NetScaler ADM appliances in High Availability mode with Floating IP – provides redundancy.
  • Every High Availability node and DR node has same disk size.
  • NetScaler ADM nsroot password is not nsroot. nsroot password is complex. Admins don’t use nsroot to login.
  • NetScaler ADM Agent nsrecover password has been changed from the default.
  • Management authentication is configured for external authentication server, typically LDAP.
    • LDAP is load balanced instead of multiple LDAP Policies to individual LDAP servers – avoids premature account lockout.
    • LDAP is encrypted: LDAPS on port 636.
    • LDAP Bind account is a service account – not a regular user whose password expires.
    • LDAP Search Filter only allows ADM Admins Active Directory Group to authenticate.
  • No local accounts except nsroot.
  • No certificate errors when accessing ADM management using https.
    • HTTPS is forced in System Settings – HTTP is not allowed
  • Time zone is configured.
  • NTP is configured and enabled.
  • NetScaler ADM Database is not full. Sufficient disk space.
  • Sufficient ADM CPU/Memory – verify at System > Statistics or System > Deployment.
  • All features enabled – verify at System > Administration > Disable or enable features
  • SSL Dashboard alert notifications are enabled to warn of upcoming certificate expiration.
  • Tasks page notifications are enabled
  • Event Rules are configured to email ADC administrators of Critical or Major ADC alarms.
  • NetScaler ADC Instance Backup settings on NetScaler ADM:
    • Number of NetScaler ADC instance backups retained is sufficient for restoring from history.
    • NetScaler ADC Backups are transferred to external SFTP, SCP, or FTP server.
    • NetScaler ADC Restore process is documented and tested.
  • VIP Licensing:
    • Installed license count on NetScaler ADM matches the licenses owned at https://citrix.com/account.
    • Licenses are assigned to Virtual Servers that need Analytics (e.g. HDX Insight) or Applications tab.
    • AppFlow/Insight is enabled on NetScaler Citrix Gateway and HTTP Virtual Servers.
    • TCP 5563 opened from SNIP to ADM for Metrics Collector.
    • License expiration notifications are enabled.
  • Private IP Blocks are configured for geo mapping of ADC instances and Analytics sessions.
  • Analytics Thresholds are configured – e.g., ICA Latency threshold.
  • Session Reliability on HA Failover is enabled on ADC instances in ICA Parameters – if not enabled, then sessions drop on failover.
  • ADM HDX Insight is linked to Director Premium Edition using https protocol, not http protocol.

NetScaler Citrix Gateway ICA Proxy

NetScaler Citrix Gateway Virtual Server:

  • SSL Labs SSL Server Test shows A or A+ when it scans the Gateway external FQDN.
  • If ICA Only is unchecked on the Gateway Virtual Server, then System > Licenses shows sufficient Maximum Citrix Gateway Users Allowed.
  • NetScaler Citrix Gateway Virtual Server Maximum Users is 0, which means unlimited.
  • TCP Profile is configured with Recommended TCP Profile Settings.
  • DTLS is enabled on the Virtual Server for EDT protocol.
    • UDP ports are open on firewall from Internet and to VDAs.
    • Director Session Details shows HDX protocol as UDP.
  • ICA Connections shows port 2598 (Session Reliability enabled), not 1494.
  • NetScaler Citrix Gateway communication to StoreFront is https protocol, not http.
  • NetScaler Citrix Gateway communication to StoreFront is load balanced to multiple StoreFront servers – not a single StoreFront server.
  • STAs on NetScaler Citrix Gateway matches StoreFront configuration.
  • Policies are Advanced Expressions instead of Classic Expressions. (source = CTX296948)
  • If EPA is used for SmartAccess, then Endpoint Analysis Libraries are updated.

NetScaler Citrix Gateway Authentication:

  • Encrypted LDAP:
    • LDAP is load balanced instead of multiple LDAP Policies to individual LDAP servers – avoids premature account lockout.
    • LDAP is encrypted: LDAPS on port 636.
    • LDAP Bind account is a service account – not a regular user whose password expires.
    • LDAP Search Filter only allows authorized remote users in an Active Directory group to authenticate.
  • Two-factor authentication – RADIUS:
    • For Workspace app, password fields are swapped.
    • Both factors are required to login. Can’t bypass second factor.
    • RADIUS tested from both High Availability nodes (perform failover).
  • SAML Authentication:
    • Prefer RADIUS over SAML so that ADC will have access to the user’s password to facilitate Single Sign-on to the VDA machines.
    • If the SAML response does not provide the user’s password, then Federated Authentication Service (FAS) is deployed.
    • For Workspace app support of SAML, SAML is configured in nFactor (AAA), not Gateway – requires ADC 12.1 and newest Workspace app.
    • SAML iDP Signing certificate is not expired. ADC administrators know how to update the Signing certificate.
    • relaystateRule configured in SAML Action to prevent session hijack – see https://support.citrix.com/article/CTX316577
  • Native OTP:
    • OTP Active Directory attribute is encrypted.
  • nFactor login fields are encrypted.

NetScaler ADC GSLB

  • If a DNS name resolves to multiple IP addresses, then the DNS name should be GSLB-enabled for automatic failover.
  • DNS Records are delegated to two or more ADC ADNS services, usually in separate data centers.
    • NS records and SOA records are added to ADC for delegated domain names and/or delegated sub zones.
  • All NetScaler ADC nodes that have ADNS listeners for the same DNS name have identical GSLB configuration.
  • Public GSLB Services have monitors that verify remote Internet connectivity – don’t give out IP if users can’t reach it.
  • Separate NetScaler ADC appliances for public DNS and internal DNS – If both are on one appliance, then how are the DNS configurations separated?
  • RPC nodes for Metric Exchange Protocol (MEP) should have Secure enabled.
  • Firewall should only allow the MEP endpoints to communicate over 3009 – don’t open to whole Internet.
  • If Static Proximity:
    • Static Proximity database is current.
    • GSLB Services show correct geo location.
    • Custom Entries are added for internal subnets.
  • If DNS Views, DNS Views are configured on all GSLB Services – if GSLB Service doesn’t have a DNS View, then that GSLB Service might not function correctly.
  • If Active/Active GSLB load balancing, then site persistence is functioning correctly.
  • DNS security options are configured to prevent ADNS Denial of Service.

NetScaler Scripting

Last Modified: Nov 7, 2020 @ 6:34 am

Changelog

  • 2019 Mar 11 – Script to Extract Configuration – rewrote the section in instructional format
  • 2018 Dec 2 – Configuration Extractor – added a nFactor visualizer
  • 2018 Nov 17 – Configuration Extractor – Out-GridView (GUI) for vServer selection
  • 2018 Sep 19 – Configuration Extractor – several fixes
  • 2018 July 4 – Configuration Extractor
    • Added “*” to select all vServers
    • Updated for 12.1 (SSL Log Profile, IP Set, Analytics Profile)
    • Extract local LB VIPs from Session Action URLs (e.g. StoreFront URL to local LB VIP)
    • Extract DNS vServers from “set vpn parameter” and Session Actions
  • 2018 Jan 4 – Configuration Extractor, Sirius’ Mark Scott added code to browse to open and save files. Added kcdaccounts to extraction.
  • 2018 Jan 3 – new Powershell-based NetScaler Configuration Extractor script

NetScaler ADC Configuration Extractor

NetScaler ADC Configuration Extractor extracts every NetScaler ADC CLI command needed to rebuild one or more Virtual Servers. Here’s how to use the script:

  1. The extraction script loads a NetScaler ADC Configuration file and parses it. To get a NetScaler ADC Configuration file:
    1. On your NetScaler ADC, go to System > Diagnostics > Running Configuration and then click the link at the bottom to save the text to a file.

  2. To download the extraction script, point your browser to https://github.com/cstalhood/Get-ADCVServerConfig/blob/master/Get-ADCVServerConfig.ps1, right-click the Raw button, and Save link as.
  3. Run the extraction script in PowerShell. One option is to right-click the script file and click Run with PowerShell. (note: the script doesn’t seem to work on Windows 7)
  4. Browse for the Running Configuration file that you saved from an appliance.
  5. The script will prompt you to select one or more Virtual Servers.
  6. The script then enumerates all objects linked to the chosen Virtual Servers (e.g. Responder Policies) and provides their configuration too.
  7. The script also outputs global settings that might affect the operation of the chosen Virtual Servers.
  8. The CLI output is listed in the proper order. For example, monitors are created before they are bound to Service Groups.
  9. If the config includes an "authentication vserver", then an nFactor Visualizer will be shown.
  10. The extracted Virtual Server CLI configuration can be used for documentation.
  11. Alternatively, you can apply the outputted configuration to a different NetScaler ADC appliance:
    1. To import this output to a different NetScaler ADC, first change the IP addresses of the outputted Virtual Servers so there won’t be any IP Conflict after you import.
    2. SSH (Putty) to the other NetScaler ADC.
    3. Then simply copy the outputted lines and paste them into the SSH prompt.
    4. Alternatively, for longer output files, you can upload the output file to the other NetScaler ADC (e.g. to the /tmp directory), and then run batch -fileName on the new NetScaler ADC, specifying the uploaded file name (e.g. /tmp/nsconfig.conf).
      • Note: the batch command requires that the input file name be in lower case only and without any spaces in the file name.

I originally attempted a dynamic extraction using complicated regular expressions, but there wasn’t enough control over the extraction and output process. The new PowerShell script explicitly enumerates specific objects, thus providing complete control over the output. For example, before binding a cipher group to a Virtual Server, the current ciphers must first be removed.

The script uses several techniques to avoid false positive matches, which are primarily caused by substring matches.

Let me know what bugs you encounter.

Configure NetScaler ADC from PowerShell

You can use any scripting language that supports REST calls. This section is based on PowerShell 3 and its Invoke-RestMethod cmdlet.

Brandon Olin published a PowerShell module for NetScaler at Github.  💡

CTP Esther Barthel maintains a PowerShell module for NetScaler at https://github.com/cognitionIT/PS-NITRO. See Citrix Synergy TV – SYN325 – Automating NetScaler: talking NITRO with PowerShell for an overview.

NetScalerPowerShell.zip contains PowerShell functions that use REST calls to configure a NetScaler appliance. It only takes a few seconds to wipe a NetScaler and configure it with almost everything detailed on this site. A glaring omission is file operations, including licenses, certificate files, and customized monitor scripts; the PowerShell script assumes these files are already present on the appliance.


Most of the functions should work on 10.5 and 11.0 with a few obvious exceptions like RDP Proxy. Here are some other differences between 10.5 and 11.0:

  • PUT operations in NetScaler 11 do not need an entity name in the URL; however 10.5 does require entity names in every PUT URL.
  • An https URL for REST calls works without issue in NetScaler 11, but NetScaler 10.5 had inconsistent errors; http works without issue in NetScaler 10.5.

Nitro REST API Documentation

NetScaler Nitro REST API documentation can be found on any NetScaler by clicking the Downloads tab. The documentation is updated whenever you upgrade your firmware.

Look for the Nitro API Documentation.

Extract the files, and then launch index.html.

Start by reading the Getting Started Guide, and then expand the Configuration node to see detailed documentation for every REST call.

The Nitro API is also documented at REST Web Services at Citrix Docs.

Global Server Load Balancing (GSLB) – NetScaler 10.5

Last Modified: Nov 6, 2020 @ 7:11 am

Navigation

This article was written for NetScaler 10.5.

GSLB Planning

GSLB is nothing more than DNS. GSLB is not in the data path. GSLB receives a DNS query and GSLB sends back an IP address, which is exactly how a DNS server works. However, GSLB can do some things that DNS servers can’t do:

  • Don’t give out an IP address unless it is UP (monitoring)
    • If active IP address is down, give out the passive IP address (active/passive)
  • Give out the IP address that is closest to the user (proximity load balancing)
  • Give out different IPs for internal vs external (DNS View)

GSLB is only useful if you have a single DNS name that could resolve to two or more IP addresses. If there’s only one IP address then use normal DNS instead.

Citrix Blog Post Global Server Load Balancing: Part 1 explains how DNS queries work and how GSLB fits in.

Citrix has a good DNS and GSLB Primer.

When configuring GSLB, don’t forget to ask “where is the data?”. For XenApp/XenDesktop, DFS multi-master replication of user profiles is not supported so configure “home” sites for users. More information at Citrix Blog Post XenDesktop, GSLB & DR – Everything you think you know is probably wrong!

GSLB can be enabled both externally and internally. For external GSLB, configure it on the DMZ NetScaler appliances and expose it to the Internet. For internal GSLB, configure it on internal NetScaler appliances. Note: Each NetScaler appliance only has one DNS table so if you try to use one NetScaler for both public and internal then be aware that external users can query for internal GSLB-enabled DNS names.

For internal and external GSLB of the same DNS name on the same appliance, you can use DNS Policies and DNS Views to return different IP addresses depending on where users are connecting from. Citrix CTX130163 How to Configure a GSLB Setup for Internal and External Users Using the Same Host Name.

However, GSLB monitoring applies to the entire GSLB Service so it would take down both internal and external GSLB. If you need different GSLB monitoring for internal and external of the same DNS name, try CNAME:

  • External citrix.company.com:
    • Configure NetScaler GSLB for citrix.company.com.
    • On public DNS, delegate citrix.company.com to the NetScaler DMZ ADNS services.
  • Internal citrix.company.com:
    • Configure NetScaler GSLB for citrixinternal.company.com or something like that.
    • On internal DNS, create CNAME for citrix.company.com to citrixinternal.company.com
    • On internal DNS, delegate citrixinternal.company.com to NetScaler internal ADNS services.

Some IP Addresses are needed on each NetScaler pair:

  • ADNS IP: An IP that will listen for ADNS queries. For external, create a public IP for the ADNS IP and open UDP 53 so Internet-based DNS servers can access it. This can be an existing SNIP on the appliance.
  • GSLB Site IP / MEP IP: A GSLB Site IP that will be used for NetScaler-to-NetScaler communication, which is called MEP or Metric Exchange Protocol. The IP for ADNS can also be used for MEP / GSLB Site.
    • RPC Source IP: RPC traffic is sourced from a SNIP, even if this is different than the GSLB Site IP. It’s less confusing if you use a SNIP as the GSLB Site IP.
    • Public IP: For external GSLB, create public IPs that are NAT’d to the GSLB Site IPs. The same public IP used for ADNS can also be used for MEP. MEP should be routed across the Internet so NetScaler can determine if the remote datacenter has Internet connectivity or not.
    • MEP Port: Open port TCP 3009 between the two NetScaler GSLB Site IPs. Make sure only the NetScalers can access this port on the other NetScaler. Do not allow any other device on the Internet to access this port. This port is encrypted.
    • GSLB Sync Ports: To use GSLB Configuration Sync, open ports TCP 22 and TCP 3008 from the NSIP (management IP) to the remote public IP that is NAT’d to the GSLB Site IP. The GSLB Sync command runs a script in BSD shell and thus NSIP is always the Source IP.
  • DNS Queries: The purpose of GSLB is to resolve a DNS name to one of several potential IP addresses. These IP addresses are usually public IPs that are NAT’d to existing Load Balancing, SSL Offload, Content Switching, or NetScaler Gateway VIPs in each datacenter.
  • IP Summary: In summary, for external GSLB, you will need a minimum of two public IPs in each datacenter:
    • One public IP that is NAT’d to the IP that is used for ADNS and MEP (GSLB Site IP). You only need one IP for ADNS / MEP no matter how many GSLB names are configured. MEP (GSLB Site IP) can be a different IP, if desired.
    • One public IP that is NAT’d to a Load Balancing, SSL Offload, Content Switching, or NetScaler Gateway VIP.
    • If you GSLB-enable multiple DNS names, each DNS name usually resolves to different IPs. This usually means that you will need additional public IPs NAT’d to additional VIPs.

ADNS

  1. Identify a SNIP that you will use for MEP and ADNS.
  2. Configure a public IP for the SNIP and configure firewall rules.
  3. If you wish to use GSLB configuration sync then management access (SSH) must be enabled on this SNIP.
  4. On the left, expand Traffic Management > Load Balancing, and click Services.
  5. On the right, click Add.
  6. Name the service ADNS or similar.
  7. In the IP Address field, enter an appliance SNIP.
  8. In the Protocol field, select ADNS. Then click OK.
  9. Scroll down and click Done.
  10. On the left of the console, expand System, expand Network, and then click IPs.
  11. On the right, you’ll see the SNIP is now marked as the ADNS svc IP. If you don’t see this yet, click the Refresh icon.
  12. Repeat on the other appliance in the other datacenter.
  13. Your NetScaler appliances are now DNS servers.
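
If you prefer the CLI, the ADNS service and optional management access can be configured with commands roughly like the following (192.168.10.5 is a hypothetical SNIP):

    add service svc-ADNS 192.168.10.5 ADNS 53
    set ns ip 192.168.10.5 -mgmtAccess ENABLED -ssh ENABLED

Run the same commands, with that appliance’s own SNIP, on the appliance in the other datacenter.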

Metric Exchange Protocol

  1. Open the firewall rules for Metric Exchange Protocol. You can use the same SNIP and same public IP used for ADNS.
  2. On the left, expand Traffic Management, right-click GSLB, and enable the feature.
  3. Expand GSLB, and click Sites.
  4. On the right, click Add.
  5. Add the local site first. Enter a descriptive name and in the Site Type drop-down, select LOCAL.
  6. In the Site IP Address field, enter an appliance SNIP. This SNIP must be in the default Traffic Domain. The NetScaler listens for GSLB MEP traffic on this IP.
  7. For Internet-routed GSLB MEP, in the Public IP Address field, enter the public IP that is NAT’d to the GSLB Site IP (SNIP). For internal GSLB, there is no need to enter anything in the Public IP field. Click Create.
  8. Go back to System > Network > IPs, and verify that the IP is now marked as a GSLB site IP. If you don’t see it yet, click the Refresh button.
  9. If you want to use the GSLB Sync Config feature, then you’ll need to edit the GSLB site IP, and enable Management Access.
  10. Scroll down and enable Management Access. SSH is all you need.
  11. Go to the other appliance and also create the local GSLB site using its GSLB site IP and its public IP that is NAT’d to the GSLB site IP.
  12. In System > Network > IPs on the remote appliance, there should now be a GSLB site IP. This could be a SNIP. If GSLB Sync is desired, enable management access on that IP and ensure SSH is enabled.
  13. Now on each appliance add another GSLB Site, which will be the remote GSLB site.
  14. Enter a descriptive name and select REMOTE as the Site Type.
  15. Enter the other appliance’s actual GSLB Site IP as configured on the appliance. This IP does not need to be reachable.
  16. In the Public IP field, enter the public IP that is NAT’d to the GSLB Site IP on the other appliance. For MEP, TCP 3009 must be open from the local GSLB Site IP to the remote public Site IP. For GSLB sync, TCP 22, and TCP 3008 must be open from the local NSIP to the remote public Site IP. Click Create.
  17. Repeat on the other appliance.
  18. MEP will not function yet since the NetScaler appliances are currently configured to communicate unencrypted on TCP 3011. To fix that, on the left, expand System, expand Network, and click RPC.
  19. On the right, edit the new RPC address (the other site’s GSLB Site IP), and click Edit.
  20. On the bottom, check the box next to Secure, and click OK.
  21. Do the same thing on the other appliance.
  22. If you go back to GSLB > Sites, you should see it as active.
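
The same GSLB Site and MEP security settings can be expressed in the CLI. A sketch for the appliance in datacenter A, assuming hypothetical site names and addresses (local GSLB Site IP 192.168.10.5, remote GSLB Site IP 192.168.20.5, public IPs 203.0.113.5 and 198.51.100.5):

    add gslb site SiteA 192.168.10.5 -publicIP 203.0.113.5
    add gslb site SiteB 192.168.20.5 -publicIP 198.51.100.5
    set ns rpcNode 192.168.20.5 -secure YES

The appliance treats the site whose IP it owns as LOCAL and the other as REMOTE. Mirror the commands on the appliance in datacenter B and secure the RPC node that points back to SiteA.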

GSLB Services

GSLB Services represent the IP addresses that are returned in DNS Responses. DNS Query = DNS name. DNS Response = IP address.

GSLB should be configured identically on both NetScalers. Since you have no control over which NetScaler will receive the DNS query, you must ensure that both NetScalers are giving out the same DNS responses.

Create the same GSLB Services on both NetScalers:

  1. Start on the appliance in the primary data center. This appliance should already have a traffic Virtual Server (NetScaler Gateway, Load Balancing, or Content Switching) for the DNS name that you are trying to GSLB enable.
  2. On the left, expand Traffic Management > GSLB, and click Services.
  3. On the right, click Add.
  4. The service name should be similar to the DNS name that you are trying to GSLB enable. Include the site name in the service name.
  5. Select the LOCAL Site.
  6. On the bottom part, select Virtual Servers, and then select a Virtual Server that is already defined on this appliance. It should automatically fill in the other fields. If you see a message asking if you wish to create a service object, click Yes.
  7. Scroll up and make sure the Service Type is SSL. It’s annoying that NetScaler doesn’t set this drop-down correctly.
  8. The Public IP field contains the actual IP Address that the GSLB ADNS service will hand out. Make sure this Public IP is user accessible. It doesn’t even need to be a NetScaler owned IP.
  9. Scroll down and click OK.
  10. If the GSLB Service IP is a VIP on the local appliance, then GSLB will simply use the state of the local traffic Virtual Server (Load Balancing, Content Switching, or Gateway). If the GSLB Service IP is a VIP on a remote appliance, then GSLB will use MEP to ask the other appliance for the state of the remote traffic Virtual Server. In both cases, there’s no need to bind a monitor to the GSLB Service.
  11. However, you can also bind monitors directly to the GSLB Service. Here are some reasons for doing so:
    • If the GSLB Service IP is a NetScaler-owned traffic VIP, but the monitors bound to the traffic Virtual Server are not the same ones you want to use for GSLB. When you bind monitors to the GSLB Services, the monitors bound to the traffic Virtual Server are ignored.
    • If the GSLB Service IP is in a non-default Traffic Domain, then you will need to attach a monitor since GSLB cannot determine the state of Virtual Servers in non-default Traffic Domains.
    • If the GSLB Service IP is not hosted on a NetScaler, then only GSLB Service monitors can determine if the Service IP is up or not.
  12. If you intend to do GSLB active/active and if you need site persistence then you can configure your GSLB Services to use Connection Proxy or HTTP Redirect. See Citrix Blog Post Troubleshooting GSLB Persistence with Fiddler for more details.
  13. Click Done.
  14. On the other datacenter NetScaler, create a GSLB Service.
  15. Select the REMOTE site that is hosting the service.
  16. Since the service is on a different appliance and not this one, you won’t be able to select it using the Virtual Servers option. Instead, select New Server.
  17. For the Server IP, enter the actual VIP configured on the other appliance. This local NetScaler will use GSLB MEP to communicate with the remote NetScaler to find a traffic Virtual Server with this VIP. The remote NetScaler responds with whether the remote traffic Virtual Server is up or not. The remote Server IP configured here does not need to be directly reachable by this local appliance. If the Server IP is not owned by either NetScaler, then you will need to bind monitors to your GSLB Service.
  18. In the Public IP field, enter the IP address that will be handed out to clients. This is the IP address that users will use to connect to the service. For Public DNS, you enter a Public IP that is usually NAT’d to the traffic VIP. For internal DNS, the Public IP and the Server IP are usually the same.
  19. Scroll up and change the Service Type to match the Virtual Server defined on the other appliance.
  20. Click OK.
  21. Just like the other appliance, you can also configure Site Persistence and GSLB Service Monitors. Click Done when done.
  22. Create more GSLB Services, one for each traffic VIP. GSLB is useless if there’s only one IP address to return. You should have multiple IP addresses (VIPs) through which a web service (e.g. NetScaler Gateway) can be accessed. Each of these VIPs is typically in different datacenters, or on different Internet circuits. The mapping between DNS name and IP addresses is configured in the GSLB vServer, as detailed in the next section.
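
As a rough CLI equivalent, the GSLB Services on the primary-datacenter appliance might look like this, assuming hypothetical Gateway VIPs of 10.1.5.10 (SiteA) and 10.2.5.10 (SiteB) behind public IPs 203.0.113.10 and 198.51.100.10:

    add gslb service gslb_svc_gw_SiteA 10.1.5.10 SSL 443 -siteName SiteA -publicIP 203.0.113.10 -publicPort 443
    add gslb service gslb_svc_gw_SiteB 10.2.5.10 SSL 443 -siteName SiteB -publicIP 198.51.100.10 -publicPort 443

Create the same two GSLB Services on the other appliance so that both NetScalers give out identical answers.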

GSLB Virtual Server

The GSLB Virtual Server is the entity that the DNS name is bound to. The GSLB vServer then gives out the IP address of one of the GSLB Services bound to it.

Configure the GSLB vServer identically on both appliances:

  1. On the left, expand Traffic Management > GSLB and click Virtual Servers.
  2. On the right, click Add.
  3. Give the GSLB vServer a descriptive name. For active/active, you can name it the same as your DNS name. For active/passive, you will create two GSLB Virtual Servers, one for each datacenter, so include Active or Passive in the Virtual Server name.
  4. Make sure Service Type is set correctly.
  5. If you intend to bind multiple GSLB Services to this GSLB vServer, then you can optionally check the box for Send all “active” service IPs. By default, GSLB only gives out one IP per DNS query. This checkbox always returns all IPs, but the IPs are ordered based on the GSLB Load Balancing Method and/or GSLB Persistence.
  6. Click OK.
  7. On the right, in the Advanced column, click Service.
  8. On the left, click where it says No GSLB Virtual Server to GSLBService Binding.
  9. Click the arrow next to Click to select.
  10. Check the box next to an existing GSLB Service and click OK. If your GSLB is active/passive then only bind one service.
  11. If your GSLB is active/active then bind multiple GSLB Services. Also, you’d probably need to configure GSLB persistence (Source IP or cookies).
  12. Click Bind.
  13. On the right, in the Advanced column, click Domains.
  14. On the left, click where it says No GSLB Virtual Server Domain Binding.
  15. Enter the FQDN that GSLB will resolve.
  16. If this GSLB is active/passive, there are two options:
    • Use the Backup IP field to specify the IP address that will be handed out if the primary NetScaler is inaccessible or if the VIP on the primary appliance is marked down for any reason.
    • Or, create a second GSLB Virtual Server that has the passive GSLB service bound to it. Don’t bind a Domain to the second GSLB Virtual Server. Then edit the Active GSLB Virtual Server and use the Backup Virtual Server section to select the second GSLB Virtual Server.
  17. Click Bind.
  18. If this is active/active GSLB, you can edit the Method section to enable Static Proximity. This assumes the Geo Location database has already been installed on the appliance.
  19. Also for active/active, if you don’t want to use Cookie-based persistence, then you can use the Persistence section to configure Source IP persistence.
  20. Click Done.
  21. If you are configuring active/passive using the backup GSLB Virtual Server method, create a second GSLB Virtual Server that has the passive GSLB service bound to it. Don’t bind a Domain to the second GSLB Virtual Server. Then edit the Active GSLB Virtual Server and use the Backup Virtual Server section to select the second GSLB Virtual Server.

  22. On the left, if you expand Traffic Management > DNS, expand Records, and click Address Records, you’ll see a new DNS record for the GSLB domain you just configured. Notice it is marked as GSLB DOMAIN.

  23. Create identical GSLB Virtual Servers on the other NetScaler appliance. Both NetScalers must be configured identically.
  24. You can also synchronize the GSLB configuration with the remote appliance by going to Traffic Management > GSLB.
  25. On the right, click Synchronize configuration on remote sites.
  26. Use the check boxes on the top, if desired. It’s usually a good idea to Preview the changes before applying them. Then click OK to begin synchronization.

Some notes regarding GSLB Sync:

  • It’s probably more reliable to do it from the CLI by running sync gslb config with one of the options (e.g. -preview).
  • GSLB Sync runs as a script on the BSD shell and thus always uses the NSIP as the source IP.
  • GSLB Sync connects to the remote GSLB Site IP on TCP 3008 (if RPC is Secure) and TCP 22.
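
A minimal active/passive sketch using the backup GSLB Virtual Server method, with the hypothetical names from the previous section:

    add gslb vserver gslb_vs_citrix_active SSL
    add gslb vserver gslb_vs_citrix_passive SSL
    bind gslb vserver gslb_vs_citrix_active -serviceName gslb_svc_gw_SiteA
    bind gslb vserver gslb_vs_citrix_passive -serviceName gslb_svc_gw_SiteB
    set gslb vserver gslb_vs_citrix_active -backupVServer gslb_vs_citrix_passive
    bind gslb vserver gslb_vs_citrix_active -domainName citrix.company.com -TTL 5
    sync gslb config -preview

Only the active GSLB vServer has the domain bound to it; the passive vServer exists solely as the backup.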

Test GSLB

  1. To test GSLB, simply point nslookup to the ADNS services and submit a DNS query for one of the DNS names bound to a GSLB vServer. Run the query multiple times to make sure you’re getting the response you expect.
  2. Both NetScaler ADNS services should be giving the same response.
  3. To simulate a failure, disable the traffic Virtual Server.
  4. Then the responses should change. Verify on both ADNS services.

  5. Re-enable the traffic Virtual Server, and the responses should return to normal.
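
For example, query each ADNS service directly from a client machine (203.0.113.5 and 198.51.100.5 are hypothetical ADNS public IPs):

    nslookup citrix.company.com 203.0.113.5
    nslookup citrix.company.com 198.51.100.5

Both should return the same address, and the answer should change when the traffic Virtual Server is disabled.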


DNS Delegation

If you are enabling GSLB for the domain gateway.corp.com, you’ll need to create a delegation at the server that is hosting the corp.com DNS zone. For public GSLB, you need to edit the public DNS zone for corp.com.

DNS Delegation instructions will vary depending on which product hosts the public DNS zone. This section details Microsoft DNS, but it should be similar in BIND or web-based DNS products.

There are two ways to delegate GSLB-enabled DNS names to NetScaler ADNS:

  • Delegate the individual record. For example, delegate gateway.corp.com to the two NetScaler ADNS services (gslb1.corp.com and gslb2.corp.com).
  • Delegate an entire subzone. For example, delegate the subzone gslb.corp.com to the two NetScaler ADNS services. Then create a CNAME record in the parent DNS zone for gateway.corp.com that is aliased to gateway.gslb.corp.com. When DNS queries make it to NetScaler, they will be for gateway.gslb.corp.com and thus gateway.gslb.corp.com needs to be bound to the GSLB Virtual Server instead of gateway.corp.com. For additional delegations, simply create more CNAME records.

This section covers the first method – delegating an individual DNS record:

  1. Run DNS Manager.
  2. First, create Host Records pointing to the ADNS services running on the NetScalers in each data center. These host records for ADNS are used for all GSLB delegations no matter how many GSLB delegations you need to create.
  3. The first Host record is gslb1 (or similar) and should point to the ADNS service (Public IP) on one of the NetScaler appliances.
  4. The second Host record is gslb2 and should point to the ADNS Service (public IP) on the other NetScaler appliance.
  5. If you currently have a host record for the service that you are delegating to GSLB (gateway.corp.com), delete it.
  6. Right-click the parent DNS zone and click New Delegation.
  7. In the Welcome to the New Delegation Wizard page, click Next.
  8. In the Delegated Domain Name page, enter the left part of the DNS record that you are delegating (e.g. gateway). Click Next.
  9. In the Name Servers page, click Add.
  10. This is where you specify gslb1.corp.com and gslb2.corp.com. Enter gslb1.corp.com and click Resolve. Then click OK. If you see a message about the server not being authoritative for the zone, ignore the message.
  11. Then click Add to add the other GSLB ADNS server.
  12. Once both ADNS servers are added to the list, click Next.
  13. In the Completing the New Delegation Wizard page, click Finish.
  14. If you run nslookup against your Microsoft DNS server, it will respond with Non-authoritative answer. That’s because it got the response from NetScaler and not from itself.

That’s all there is to it. Your NetScalers are now DNS servers. For active/passive, the NetScalers will hand out the public IP address of the primary data center. When the primary data center is not accessible, GSLB will hand out the GSLB Service IP bound to the Backup GSLB vServer.

Geo Location Database

If you want to use DNS Policies, Static Proximity GSLB Load Balancing, or Responders based on the user’s location, import a geo location database. Two common free databases are IP2Location and MaxMind GeoLite:

For IP2Location, see the blog post Add IP2Location Database as NetScaler’s Location File for instructions on how to import.

For GeoLite Legacy:

  1. Download the GeoLite Country database CSV from http://dev.maxmind.com/geoip/legacy/geolite/.
  2. Note: GeoLite City is actually two files that must be merged as detailed at Citrix Blog Post GeoLite City as NetScaler location database. GeoLite Country doesn’t need any preparation.
  3. Upload the extracted database (.csv file) to the NetScaler appliance at /var/netscaler/locdb.

To import the Geo database:

  1. In the NetScaler GUI, on the left, expand AppExpert, expand Location, and click Static Database (IPv4).
  2. On the right, click Add.
  3. Browse to the location database file.
  4. In the Location Format field, select geoip-country and click Create.
  5. When you open a GSLB Service, the public IP will be translated to a location.
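
The import can also be done from the CLI. A sketch, assuming a hypothetical file named GeoIPCountryWhois.csv was uploaded to /var/netscaler/locdb:

    add locationFile /var/netscaler/locdb/GeoIPCountryWhois.csv -format geoip-country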

You can use the Geo locations in a DNS Policy, static proximity GSLB Load Balancing, or Responders:

Horizon View Load Balancing – NetScaler 10.5

Last Modified: Nov 6, 2020 @ 7:11 am

Navigation

Use this procedure to load balance Horizon View Connection Servers, Horizon View Security Servers, and/or VMware Access Points.

Overview

A typical Horizon View installation will have at least six servers:

  • Two Internal View Connection Servers – these need to be load balanced on an internal VIP
  • Two DMZ View Security Servers – these need to be load balanced on a DMZ VIP
  • The DMZ View Security Servers are paired with two additional internal View Connection Servers. There is no need to load balance the internal Paired Connection Servers. However, we do need to monitor them.

If you are using Access Points instead of Security Servers then you’ll have the following machines. Server pairing is not necessary.

  • Two Internal View Connection Servers – these need to be load balanced on an internal VIP
  • Two DMZ VMware Access Point appliances – these need to be load balanced on a DMZ VIP

This topic is focused on traditional View Security Servers but could be easily adapted for Access Point appliances. The difference is that with Access Points there are no paired servers and thus there’s no need to monitor the paired servers. The VIP ports are identical for both solutions.

Monitors

Users connect to Horizon View Connection Server, Horizon View Security Server, and Access Point appliances on four ports: TCP 443, TCP 8443, TCP 4172, and UDP 4172. Users will initially connect to port 443 and then be redirected to one of the other ports on the same server initially used for the 443 connection. If one of the ports is down, the entire server should be removed from load balancing. To facilitate this, create a monitor for each of the ports (except UDP 4172).

  1. On the left, expand Traffic Management, expand Load Balancing, and click Monitors.
  2. On the right, click Add.
  3. Name it View-PCOIP or similar.
  4. Change the Type drop-down to TCP.
  5. In the Destination Port field, enter 4172.
  6. Scroll down and click Create.
  7. On the right, click Add.
  8. Name it View-Blast or similar.
  9. Change the Type drop-down to TCP.
  10. In the Destination Port field, enter 8443.
  11. Scroll down and click Create.
  12. On the right, click Add.
  13. Name it View-SSL or similar.
  14. Change the Type drop-down to HTTP-ECV.
  15. In the Destination Port field, enter 443.
  16. Scroll down and check the box next to Secure.
  17. On the Special Parameters tab, in the Send String section, enter GET /broker/xml/
  18. In the Receive String section, enter clientlaunch-default.
  19. Scroll down and click Create.
  20. View Security Servers are paired with View Connection Servers. If the paired View Connection Server is down, then we should probably stop sending users to the corresponding View Security Server. Let’s create a monitor that has a specific IP address in it. Right-click the existing View-SSL or View-SSLAdv monitor and click Add.

  21. Note: this step does not apply to Access Points. Normally a monitor does not have any Destination IP defined, which means it uses the IP address of the service that it is bound to. However, we intend to bind this monitor to the View Security Server but we need it to monitor the paired View Connection Server, which is a different IP address. Type in the IP address of the paired View Connection Server. Then rename the monitor so it includes the View Connection Server name.
  22. Note: this step does not apply to Access Points. Since we are embedding an IP address into the monitor, you have to create a separate monitor for each paired View Connection Server IP.
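
If you prefer the CLI, the monitors described above can be created with commands roughly like these (10.1.1.21 is a hypothetical paired View Connection Server IP; one such monitor is needed per paired Connection Server):

    add lb monitor View-PCoIP TCP -destPort 4172
    add lb monitor View-Blast TCP -destPort 8443
    add lb monitor View-SSL HTTP-ECV -send "GET /broker/xml/" -recv "clientlaunch-default" -destPort 443 -secure YES
    add lb monitor View-SSL-VCS01 HTTP-ECV -send "GET /broker/xml/" -recv "clientlaunch-default" -destIP 10.1.1.21 -destPort 443 -secure YES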

Servers

Create Server Objects for the DMZ Security Servers and the internal non-paired Connection Servers. Do not create Server Objects for the Paired Connection Servers.

  1. On the left, expand Traffic Management, expand Load Balancing, and click Servers.
  2. On the right, click Add.
  3. Enter a descriptive server name; usually it matches the actual server name.
  4. Enter the IP address of the View Connection Server or View Security Server.
  5. Enter comments to describe the server. Click Create.
  6. Continue adding View Connection Servers or View Security Servers.

Services

If deploying View Security Servers, create Service Objects for the DMZ Security Servers and the internal non-paired Connection Servers. Do not create Service Objects for the Paired Connection Servers.

If deploying Access Points, create Service Objects for the DMZ Access Point appliances and the internal Connection Servers.

Each Connection Server and Security Server needs separate Service Objects. Each Security Server listens on multiple port numbers, and thus there will be multiple Service Objects for each Security Server.

For Internal Connection Servers (not the paired servers), load balancing monitoring is very simple:

  • Create services for SSL 443
  • To verify server availability, monitor port TCP 443 on the same server.
  • If tunneling is disabled then internal users connect directly to View Agents and UDP/TCP 4172 and TCP 8443 are not used on Internal Connection Servers. There’s no need to create services and monitors for these ports.

Security Servers and Access Points are more complex:

  • The PCoIP Secure Gateway and HTML Blast Secure Gateway are typically enabled on Security Servers and Access Points but they are not typically enabled on internal Connection Servers.
  • All traffic initially connects on TCP 443. For Security Servers and Access Points, the clients then connect to UDP 4172 or TCP 8443 on the same Security Server. If UDP 4172 or TCP 8443 are down, then you probably want to make sure TCP 443 is also brought down.
  • Each Security Server is paired with an internal Connection Server. If the internal Connection Server is down then the Security Server should be taken down. This does not apply to Access Points.
  • To accommodate these failure scenarios, bind multiple monitors to the View Security Server or Access Point load balancing Services. If any of the monitors fails then NetScaler will no longer forward traffic to 443 on that particular server.

If you have two View Security Servers or Access Points named VSS01 and VSS02, the configuration is summarized as follows (scroll down for detailed configuration):

  • Service = VSS01, Protocol = SSL_BRIDGE, Port = 443
    • Monitors = PCoIP (TCP 4172), SSL (443), and Blast (8443)
    • Monitor = SSL (443) on paired View Connection Server VCS01. This monitor is not needed on Access Points.
  • Service = VSS02, Protocol = SSL_BRIDGE, Port = 443
    • Monitors = PCoIP (TCP 4172), SSL (443), and Blast (8443)
    • Monitor = SSL (443) on paired View Connection Server VCS02. This monitor is not needed on Access Points.
  • Service = VSS01, Protocol = TCP, Port = 4172
    • Monitor = PCoIP (TCP 4172)
  • Service = VSS02, Protocol = TCP, Port = 4172
    • Monitor = PCoIP (TCP 4172)
  • Service = VSS01, Protocol = UDP, Port = 4172
    • Monitor = PCoIP (TCP 4172)
  • Service = VSS02, Protocol = UDP, Port = 4172
    • Monitor = PCoIP (TCP 4172)
  • Service = VSS01, Protocol = SSL_BRIDGE, Port = 8443
    • Monitor = Blast (8443)
  • Service = VSS02, Protocol = SSL_BRIDGE, Port = 8443
    • Monitor = Blast (8443)

If you are not using HTML Blast then you can skip 8443. If you are not using PCoIP Secure Gateway, then you can skip the 4172 ports.

  1. On the left, expand Traffic Management, expand Load Balancing, and click Services.
  2. On the right, click Add.
  3. Give the Service a descriptive name (e.g. svc-VSS01-SSL).
  4. Change the selection to Existing Server and select the View Security Server or internal (non-paired) View Connection Server you created earlier.
  5. Change the Protocol to SSL_BRIDGE and click OK.
  6. On the left, in the Monitors section, click where it says 1 Service to Load Balancing Monitor Binding.
  7. Click Add Binding.
  8. Click the arrow next to Click to select.
  9. Select the View-SSL monitor and click OK.
  10. Then click Bind.
  11. If this is a View Security Server, add monitors for PCoIP and HTML Blast. If any of those services fails, then 443 needs to be marked DOWN.

  12. If this is a View Security Server, also add a monitor that has the IP address of the paired View Connection Server. If the paired View Connection Server is down, then stop sending connections to this View Security Server.
  13. The Last Response should indicate Success. If you bound multiple monitors to the Service, then the member will only be UP if all monitors succeed. There’s a refresh button on the top-right. Click Close when done.
  14. Then click Done.
  15. Right-click the first service and click Add.
  16. Change the name to match the second View Server.
  17. Use the Server drop-down to select to the second View Server.
  18. The remaining configuration is identical to the first server. Click OK.
  19. You will need to configure the monitors again. They will be identical to the first server except for the monitoring of the paired View Connection Server. Click Done when done.

  20. Add another Service for PCoIP on TCP 4172.
    1. Name = svc-VSS01-PCoIPTCP or similar.
    2. Server = Existing Server, select the first View Server.
    3. Protocol = TCP
    4. Port = 4172.
    5. Monitors = View-PCoIP. You can add the other monitors if desired.
  21. Repeat for the 2nd View Security Server.
  22. Add another Service for PCoIP on UDP 4172.
    1. Name = svc-VSS01-PCoIPUDP or similar.
    2. Existing Server = first View Server
    3. Protocol = UDP
    4. Port = 4172.
    5. Monitors = View-PCoIP. You can add the other monitors if desired.
  23. Repeat for the 2nd View Server.
  24. Add another Service for HTML Blast on SSL_BRIDGE 8443.
    1. Name = svc-VSS01-HTMLBlast or similar.
    2. Existing Server = the first View Server
    3. Protocol = SSL_BRIDGE
    4. Port = 8443.
    5. Monitors = View-Blast. You can add the other monitors if desired.
  25. Repeat for the 2nd View Server.
  26. You should now have eight Services – four for each View Security Server.
  27. Repeat these instructions to add the internal (non-paired) View Connection Servers except that you only need to add services for SSL_BRIDGE 443 and only need monitoring for 443.
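
The Services and monitor bindings for one Security Server can be expressed in the CLI roughly as follows (VSS01 and 192.168.50.11 are hypothetical; repeat for VSS02 with its own paired-server monitor):

    add server VSS01 192.168.50.11
    add service svc-VSS01-SSL VSS01 SSL_BRIDGE 443
    add service svc-VSS01-PCoIPTCP VSS01 TCP 4172
    add service svc-VSS01-PCoIPUDP VSS01 UDP 4172
    add service svc-VSS01-HTMLBlast VSS01 SSL_BRIDGE 8443
    bind service svc-VSS01-SSL -monitorName View-SSL
    bind service svc-VSS01-SSL -monitorName View-PCoIP
    bind service svc-VSS01-SSL -monitorName View-Blast
    bind service svc-VSS01-SSL -monitorName View-SSL-VCS01
    bind service svc-VSS01-PCoIPTCP -monitorName View-PCoIP
    bind service svc-VSS01-PCoIPUDP -monitorName View-PCoIP
    bind service svc-VSS01-HTMLBlast -monitorName View-Blast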

Load Balancing Virtual Servers

Create separate load balancers for internal and DMZ.

  • Internal load balances the two non-paired Internal View Connection Servers.
  • DMZ load balances the two View Security Servers or Access Point appliances.

The paired View Connection Servers do not need to be load balanced.

For the internal View Connection Servers you only need a load balancer for SSL_BRIDGE 443. If tunneling is disabled then you don’t need load balancers for the other ports (UDP/TCP 4172 and SSL_BRIDGE 8443).

However, tunneling is enabled on the View Security Servers and Access Point appliances so you will need separate load balancers for each port number. Here is a summary of the Virtual Servers:

  • Virtual Server on SSL_BRIDGE 443 – bind both View SSL Services.
  • Virtual Server on UDP 4172 – bind both View PCoIPUDP Services.
  • Virtual Server on TCP 4172 – bind both View PCoIPTCP Services.
  • Virtual Server on SSL_BRIDGE 8443 – bind both View Blast Services.

Do the following to create the Virtual Servers:

  1. On the left, under Traffic Management > Load Balancing, click Virtual Servers.

  2. On the right click Add.
  3. Name it View-SSL-LB or similar.
  4. Change the Protocol to SSL_BRIDGE.
  5. Specify a new internal VIP. This one VIP will be used for all of the Virtual Servers.
  6. Enter 443 as the Port.
  7. Click OK.
  8. On the left, in the Services and Service Groups section, click where it says No Load Balancing Virtual Server Service Binding.
  9. Click the arrow next to Click to select.
  10. Select the two View-SSL Services and click OK.
  11. Click Bind.
  12. Click OK.
  13. Then click Done. Persistency will be configured later.
  14. If this is a View Security Server or Access Point or if tunneling is enabled then create another Load Balancing Virtual Server for PCoIP UDP 4172:
    1. Same VIP as the 443 Load Balancer.
    2. Protocol = UDP, Port = 4172
    3. Services = the PCoIP UDP Services.
  15. If this is a View Security Server or Access Point or if tunneling is enabled then create another Load Balancing Virtual Server for PCoIP TCP 4172:
    1. Same VIP as the 443 Load Balancer.
    2. Protocol = TCP, Port = 4172
    3. Services = the PCoIP TCP Services.
  16. If this is a View Security Server or Access Point or if tunneling is enabled then create another Load Balancing Virtual Server for HTML Blast SSL_BRIDGE 8443:
    1. Same VIP as the 443 Load Balancer.
    2. Protocol = SSL_BRIDGE, Port = 8443
    3. Services = the HTML Blast SSL_BRIDGE Services.
  17. This gives you four Virtual Servers on the same VIP but different protocols and port numbers.
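
A CLI sketch of the four Virtual Servers, assuming a hypothetical VIP of 192.168.50.40 and the Service names from the previous section. Only the 443 bindings are shown; repeat the bind pattern for the other three vServers with their matching Services:

    add lb vserver View-SSL-LB SSL_BRIDGE 192.168.50.40 443
    add lb vserver View-PCoIPTCP-LB TCP 192.168.50.40 4172
    add lb vserver View-PCoIPUDP-LB UDP 192.168.50.40 4172
    add lb vserver View-Blast-LB SSL_BRIDGE 192.168.50.40 8443
    bind lb vserver View-SSL-LB svc-VSS01-SSL
    bind lb vserver View-SSL-LB svc-VSS02-SSL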

Persistency Group

For Security Servers and Access Point appliances, users will first connect to SSL_BRIDGE 443 and be load balanced. Subsequent connections to the other port numbers must go to the same load balanced server. Create a Persistency Group to facilitate this.

If tunneling is disabled on the internal View Connection Servers then you probably only have one load balancer for those servers and thus you could configure persistence directly on that one load balancer instead of creating a Persistency Group. However, since the View Security Servers have multiple load balancers then you need to bind them together in a Persistency Group.

  1. On the left, under Traffic Management, expand Load Balancing and click Persistency Groups.
  2. On the right, click Add.
  3. Give the Persistency Group a name (e.g. View).
  4. Change the Persistence to SOURCEIP.
  5. Enter a timeout that is equal to or greater than the timeout in View Administrator, which defaults to 10 hours (600 minutes).
  6. In the Virtual Server Name section, click Add.
  7. Move all four View Security Server / Access Point Load Balancing Virtual Servers to the right. Click Create.
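
From the CLI, the Persistency Group can be built by binding the Virtual Servers to a group name and then setting persistence on the group. A sketch, assuming the hypothetical vServer names used earlier:

    bind lb group View-Persist View-SSL-LB
    bind lb group View-Persist View-PCoIPTCP-LB
    bind lb group View-Persist View-PCoIPUDP-LB
    bind lb group View-Persist View-Blast-LB
    set lb group View-Persist -persistenceType SOURCEIP -timeout 600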

Horizon View Configuration

  1. On the View Security Servers (or View Connection Servers), request a certificate that matches the FQDN that resolves to the Load Balancing VIP.
  2. Make sure the private key is exportable.
  3. Set the Friendly Name to vdm and restart the View Security Server services.
  4. In View Administrator, go to View Configuration > Servers.
  5. On the right, switch to the Security Servers tab.
  6. Highlight a server and click Edit.
  7. Change the URLs to the FQDN that resolves to the load balancing VIP.
  8. Change the PCoIP URL to the VIP. For View Security Servers, this is typically a public IP that is NAT’d to the DMZ Load Balancing VIP.

Web Interface Load Balancing – NetScaler 10.5

Last Modified: Nov 6, 2020 @ 6:56 am

Navigation

This procedure is only needed if you are running Web Interface instead of StoreFront.

Monitor

  1. On the left, expand Traffic Management, expand Load Balancing, and click Monitors.
  2. On the right, click Add.
  3. Name it Web Interface or similar.
  4. Change the Type drop-down to CITRIX-WEB-INTERFACE.
  5. If you will use SSL to communicate with the Web Interface servers, then scroll down and check the box next to Secure.
  6. Switch to the Special Parameters tab.
  7. In the Site Path field, enter the path of a XenApp Web site (e.g. /Citrix/XenApp/).
    • Make sure you include the slash (/) on the end of the path or else the monitor won’t work.
    • The site path is also case sensitive.
  8. Click Create.
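
A CLI sketch of the monitor (add -secure YES only if the monitor should connect to the Web Interface servers over SSL):

    add lb monitor mon-WebInterface CITRIX-WEB-INTERFACE -sitePath "/Citrix/XenApp/"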

Servers

  1. On the left, expand Traffic Management, expand Load Balancing, and click Servers.
  2. On the right, click Add.
  3. Enter a descriptive server name; usually it matches the actual server name.
  4. Enter the IP address of the server.
  5. Enter comments to describe the server. Click Create.
  6. Continue adding Web Interface servers.

Service Group

  1. On the left, expand Traffic Management, expand Load Balancing, and click Service Groups.

  2. On the right, click Add.
  3. Give the Service Group a descriptive name (e.g. svcgrp-WI-SSL).
  4. Change the Protocol to HTTP or SSL. If the protocol is SSL, ensure the Web Interface Monitor has Secure enabled.
  5. Scroll down and click OK.
  6. On the right, under Advanced, click Members.
  7. Click where it says No Service Group Member.
  8. If you did not create server objects then enter the IP address of a Web Interface Server. If you previously created a server object then change the selection to Server Based and select the server object.
  9. Enter 80 or 443 as the port. Then click Create.

  10. To add more members, click where it says 1 Service Group Member and then click Add. Click Close when done.

  11. On the right, under Advanced, click Monitors.
  12. On the left, in the Monitors section, click where it says No Service Group to Monitor Binding.
  13. Click the arrow next to Click to select.
  14. Select the Web Interface monitor and click OK.
  15. Then click Bind.
  16. To verify if the monitor is working or not, on the left, in the Service Group Members section, click the Service Group Members line.
  17. Highlight a member and click Monitor Details.
  18. The Last Response should indicate that a Set-Cookie header was found. Click Close twice when done.
  19. Then click Done.
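
The Service Group can also be built from the CLI. A sketch, assuming hypothetical server objects named WI01 and WI02 and the monitor from the previous section:

    add serviceGroup svcgrp-WI-SSL SSL
    bind serviceGroup svcgrp-WI-SSL WI01 443
    bind serviceGroup svcgrp-WI-SSL WI02 443
    bind serviceGroup svcgrp-WI-SSL -monitorName mon-WebInterface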

Load Balancing Virtual Server

  1. Create or install a certificate that will be used by the SSL Virtual Server. This certificate must match the DNS name for the load balanced Web Interface servers.
  2. On the left, under Traffic Management > Load Balancing, click Virtual Servers.

  3. On the right click Add.
  4. Name it Web Interface-SSL-LB or similar.
  5. Change the Protocol to SSL.
  6. Specify a new internal VIP.
  7. Enter 443 as the Port.
  8. Click OK.
  9. On the left, in the Services and Service Groups section, click where it says No Load Balancing Virtual Server ServiceGroup Binding.
  10. Click the arrow next to Click to select.
  11. Select your Web Interface Service Group and click OK.
  12. Click Bind.
  13. Click OK.
  14. Click where it says No Server Certificate.
  15. Click the arrow next to Click to select.
  16. Select the certificate for this Web Interface Load Balancing Virtual Server and click OK.
  17. Click Bind.
  18. Click OK.
  19. On the right, in the Advanced column, click Persistence.
  20. Select SOURCEIP persistence. Note: COOKIEINSERT also works with Web Interface. However, it doesn’t work with StoreFront.
  21. Set the timeout to match the timeout of Web Interface.
  22. The IPv4 Netmask should default to 32 bits.
  23. Click OK.
  24. On the right, in the Advanced column, click SSL Parameters.
  25. If the NetScaler communicates with the Web Interface servers using HTTP (aka SSL Offload), at the top right, check the box next to SSL Redirect. Otherwise the Web Interface page will never display.
  26. Uncheck the box next to SSLv3 and click OK. This removes a security vulnerability.
  27. NetScaler VPX 10.5 build 57 and newer lets you enable TLSv11 and TLSv12. See Citrix Blog – Scoring an A+ at SSLlabs.com with Citrix NetScaler – 2016 update. Click OK.
  28. On the right, in the Advanced column, click SSL Ciphers.
  29. On the left, in the SSL Ciphers section, remove all RC4 ciphers. See Anton van Pelt Make your NetScaler SSL VIPs more secure (Updated) for recommended ciphers.

    You can also run the following from the command line as described by Heikki Harsunen in Citrix Discussions:
    unbind ssl vserver <oursslvservername> -cipherName DEFAULT
    bind ssl vserver <oursslvservername> -cipherName TLS1-ECDHE-RSA-AES256-SHA
    bind ssl vserver <oursslvservername> -cipherName TLS1-ECDHE-RSA-AES128-SHA
    bind ssl vserver <oursslvservername> -cipherName TLS1-ECDHE-RSA-DES-CBC3-SHA
    bind ssl vserver <oursslvservername> -cipherName TLS1-AES-256-CBC-SHA
    bind ssl vserver <oursslvservername> -cipherName TLS1-AES-128-CBC-SHA
    bind ssl vserver <oursslvservername> -cipherName TLS1-DHE-RSA-AES-256-CBC-SHA
    bind ssl vserver <oursslvservername> -cipherName TLS1-DHE-RSA-AES-128-CBC-SHA
    
  30. Click OK.
  31. Then click Done.
  32. Consider enabling Strict Transport Security by creating a rewrite policy and binding it to this SSL Virtual Server. See Anton van Pelt Make your NetScaler SSL VIPs more secure (Updated).
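
A CLI sketch of the Virtual Server, assuming a hypothetical VIP of 192.168.60.40 and an already-installed certificate key pair named WI-Cert:

    add lb vserver vsvr-WI-SSL SSL 192.168.60.40 443 -persistenceType SOURCEIP -timeout 20
    bind lb vserver vsvr-WI-SSL svcgrp-WI-SSL
    bind ssl vserver vsvr-WI-SSL -certkeyName WI-Cert
    set ssl vserver vsvr-WI-SSL -ssl3 DISABLED -tls11 ENABLED -tls12 ENABLED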

SSL Redirect – Down vServer Method

If you created an SSL Virtual Server that only listens on SSL 443, users must enter https:// when navigating to the website. To make it easier for the users, create another load balancing Virtual Server on the same VIP that listens on HTTP 80 and then redirects the user’s browser to reconnect on SSL 443.

  1. On the left, under Traffic Management > Load Balancing, click Virtual Servers.

  2. On the right, find the SSL Virtual Server you’ve already created, right-click it and click Add. Doing it this way copies some of the data from the already created Virtual Server.
  3. Change the name to indicate that this new Virtual Server is an SSL Redirect.
  4. Change the Protocol to HTTP on Port 80.
  5. The IP Address should already be filled in. It must match the original SSL Virtual Server. Click OK.
  6. Don’t select any services. This vServer must intentionally be marked down so the redirect will take effect. Click OK.
  7. On the right, in the Advanced column, click Protection.
  8. In the Redirect URL field, enter the full URL including https://. For example: https://citrix.company.com/Citrix/XenApp. Click OK.
  9. Click Done.
  10. When you view the SSL redirect Virtual Server in the list, it will have a state of DOWN. That’s OK. The Port 80 Virtual Server must be DOWN for the redirect to work.
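
A CLI sketch of the redirect vServer, using the same hypothetical VIP as the SSL Virtual Server:

    add lb vserver vsvr-WI-HTTP-Redirect HTTP 192.168.60.40 80 -redirectURL "https://citrix.company.com/Citrix/XenApp"

Because no services are bound, the vServer stays DOWN, which is what triggers the redirect.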

NetScaler Insight Center

Last Modified: Nov 6, 2020 @ 7:12 am

This article is for Insight Center 11.0 and older. Consider Insight Center 11.1, which works with older NetScaler appliances.

Navigation

💡 = Recently Updated

Planning

Note: HDX Insight only works with Session Reliability on NetScaler 10.5 build 54 or newer. Older builds, including NetScaler 10.1, do not support Session Reliability with HDX Insight. Read the release notes for your NetScaler firmware build to see the latest known issues with AppFlow, Session Reliability, and High Availability.

Requirements for HDX Insight:

  • Your NetScaler appliance must be running Enterprise Edition or Platinum Edition.
  • NetScaler must be 10.1 or newer. Insight Center 11 does work with NetScaler 10.5.
  • HDX Insight works with the following Receivers:
    • Receiver for Windows must be 3.4 or newer.
    • Receiver for Mac must be 11.8 or newer.
    • Receiver for Linux must be 13 or newer.
    • Notice no mobile Receivers. See the Citrix Receiver Feature Matrix for the latest details.
  • ICA traffic must flow through a NetScaler appliance:

 

For ICA round trip time calculations, in a Citrix Policy, enable the following settings:

  • ICA > End User Monitoring > ICA Round Trip Calculation
  • ICA > End User Monitoring > ICA Round Trip Calculation Interval
  • ICA > End User Monitoring > ICA Round Trip Calculation for Idle Connections

Citrix CTX204274 How ICA RTT is calculated on NetScaler Insight: ICA RTT constitutes the actual application delay. ICA_RTT = 1 + 2 + 3 + 4 + 5 + 6:  💡

  1. Client OS introduced delay
  2. Client to NS introduced network delay (Wan Latency)
  3. NS introduced delay in processing client to NS traffic (Client Side Device Latency)
  4. NS introduced delay in processing NS to Server (XA/XD) traffic (Server Side Device Latency)
  5. NS to Server network delay (DC Latency)
  6. Server (XA/XD) OS introduced delay (Host Delay)

 

For Web Insight, HTML Injection for NetScaler 10.0 is only available in Platinum Edition. In NetScaler 10.1, HTML Injection is available in all editions.

The version/build of Insight Center must be the same or newer than the version/build of the NetScaler appliances.

Insight Center 11 lets you scale the deployment by building multiple nodes. After building the first Insight Center Server, you can go to Configuration > NetScaler Insight Center > Insight Deployment Method to enter some planning data (e.g. # of concurrent ICA connections) and it will tell you the number of Insight Center nodes you should build. The number of nodes is based on the VM specs shown at the top of the page.

In this example, it recommends two Database Nodes and two Connectors. Agents are only used for HTTP traffic. There’s more information at NetScaler Insight Center Deployment Management at docs.citrix.com.

Import Appliance

You can use either the vSphere Client or the vSphere Web Client to import the appliance. In vSphere Client, open the File menu and click Deploy OVF Template. vSphere Web Client instructions are shown below.

You might see an operating system error when importing without the vSphere Web Client. Click Yes and proceed. It seems to work.

  1. Download Insight Center for ESX and then extract the .zip file.
  2. In vSphere Web Client, navigate to the vCenter object. Open the Actions menu and click Deploy OVF Template.
  3. In the Select source page, if you see a message regarding the Client Integration Plug-in, download the installer, run it, and then return to this wizard.
  4. In the Select source page, select Local file and browse to the NetScaler Insight .ovf file. Click Next.
  5. In the Review details page, click Next.
  6. In the Select name and folder page, enter a name for the virtual machine and select an inventory folder. Then click Next.
  7. In the Select a resource page, select a cluster or resource pool and click Next.
  8. In the Select storage page, change it to Thin Provision.
  9. Select a datastore and click Next.
  10. In the Setup networks page, choose a valid port group and click Next.
  11. In the Ready to Complete page, click Finish.
  12. View the progress of the import in the Recent Tasks pane at the top-right of the window.
  13. After the appliance is imported, power it on.

IP Configuration and Multi-Node

  1. Open the console of the virtual machine and configure an IP address.
  2. Insight Center 11 lets you configure a DNS server.
  3. Enter 6 when done.
  4. When prompted for Insight Deployment Type, enter 1 for NetScaler Insight Server. The first appliance must always be NetScaler Insight Server.
  5. Enter Yes to reboot.
  6. Subsequent nodes can be Database Node, Connector node, etc. If you choose one of the other node types it asks you for the IP address of the NetScaler Insight Server node.
  7. Once you’ve built all of the nodes, in the NetScaler Insight Server webpage, go to NetScaler Insight Center > Insight Deployment Management.
  8. Scroll down and click Get.
  9. It should show you the nodes. Then click Deploy.

  10. After it reboots you’ll see the performance of each node.
  11. Since the database is on a separate node, you might want to enable database caching. Go to System > Change Database Cache Settings.
  12. Check the box next to Enable Database Cache.

Initial Web Configuration

  1. Point your browser to the Insight IP address and login as nsroot/nsroot.
  2. Click Get Started

  3. Enter the IP address and credentials of a NetScaler appliance and click Add.

    Note: if your NetScaler appliances require https for management communication then this won’t work. Click Cancel. On the Configuration tab, click System. On the right, in the left column, click Change System Settings.
    Change the drop-down to https and click OK.
    On the left, click Inventory. On the right, click Add.
    Enter the NSIP and nsroot credentials again. This time it should work.
  4. At the top of the page, if desired, check the box next to Enable Geo data collection for Web and HDX Insight.
  5. With Load Balancing selected in the View list, right-click your StoreFront load balancer and click Enable AppFlow.

  6. Type in true and click OK.
  7. Note: if your StoreFront Load Balancing vServer uses Service Groups, you might need to enable AppFlow logging on the Service Group. In the NetScaler GUI, edit the Service Group. In the Basic Settings section, check the box next to AppFlow Logging.
  8. Back in Insight Center, use the View drop-down to select VPN.
  9. Right-click a NetScaler Gateway Virtual Server and click Enable AppFlow.
  10. In the Select Expression drop-down, select true.
  11. For Export Option select ICA and HTTP and click OK. The HTTP option is for Gateway Insight.
  12. The TCP option is for the second appliance in double-hop ICA. If you need double-hop then you’ll also need to run set appflow param -connectionChaining ENABLED on both appliances. See Enabling Data Collection for NetScaler Gateway Appliances Deployed in Double-Hop Mode at docs.citrix.com for more information.
  13. New in NetScaler 11 is the ability to use SOCKS proxy (Cache Redirection) for ICA traffic without requiring users to use NetScaler Gateway and without making any routing changes. You configure this on the NetScaler appliance. See Enabling Data Collection for Monitoring NetScaler ADCs Deployed in LAN User Mode at docs.citrix.com for more information.
  14. If you want to add more appliances, click the Configuration tab. The Inventory node will be selected by default.
  15. On the right, click Add.

Citrix Blog Post NetScaler Insight Center – Tips, Troubleshooting and Upgrade

Nsroot Password

  1. On the Configuration tab, expand System, expand User Administration and click Users.
  2. On the right, highlight the nsroot account and click Edit.
  3. Enter a new password.
  4. You can also specify a session timeout. Click OK.

Management Certificate

The certificate to upload must already be in PEM format. If you have a .pfx, you must convert it to PEM (separate certificate and key files). You can use NetScaler to convert the .pfx and then download the converted certificate from the appliance.

  1. On the left, switch to the System node.
  2. In the right pane, in the left column, click Install SSL Certificate.
  3. Browse to the PEM format certificate and key files. If the key file is encrypted, enter the password. Click OK.
  4. Click Yes to reboot the system.

System Configuration

  1. Click the Configuration tab on the top of the page.
  2. On the left, click the System node.
  3. On the right, modify settings (e.g. Time Zone) as desired.

  4. To set the hostname, click Change Host name.

  5. To change the Session Timeout, click Change System Settings.

  6. The ICA Session Timeout can be configured by clicking the link. Two minutes with no traffic must occur before the session is considered idle; then the idle timer starts. See Managing ICA Sessions at docs.citrix.com for more information.

  7. On the left, expand System and click NTP Servers.
  8. On the right, click Add.

  9. After adding NTP servers, click NTP Synchronization.
  10. Check the box next to Enable NTP Sync and click OK.
  11. On the left, expand Auditing and click Syslog Servers.

  12. On the right, click Add.
  13. Enter the syslog server IP address and select Log Levels. Click Create.
  14. In the Action menu you can click Syslog Parameters to change the timezone and date format.

Email Notifications

  1. On the left, expand System, expand Notifications and click Email.
  2. On the right, on the Email Servers tab, click Add.
  3. Enter the SMTP server address and click Create.
  4. On the right, switch to the Email Distribution List tab and click Add.
  5. Enter an address for a destination distribution list and click Create.

Authentication

  1. On the left, expand System, expand Authentication and click LDAP.
  2. On the right, click Add.
  3. This is configured identically to NetScaler. Enter a Load Balancing VIP for LDAP. Change the Security Type to SSL and Port to 636. Scroll down.
  4. Enter the bind account.
  5. Check the box for Enable Change Password.
  6. Click Retrieve Attributes and scroll down.
  7. For Server Logon Attribute select sAMAccountName.
  8. For Group Attribute select memberOf.
  9. For Sub Attribute Name select cn.
  10. To prevent unauthorized users from logging in, configure a Search Filter. Scroll down.
  11. If desired configure Nested Group Extraction.
  12. Click Create.
  13. On the left, expand User Administration and click Groups.
  14. On the right, click Add.
  15. Enter the case sensitive name of your NetScaler Admins group.
  16. Select the admin Permission.
  17. If desired, configure a Session Timeout. Click Create.

  18. On the left, under System, click User Administration.
  19. On the right click User Lockout Configuration.
  20. If desired, check the box next to Enable User Lockout and configure the maximum logon attempts. Click OK.
  21. On the left, under System, click Authentication.
  22. On the right, click Authentication Configuration.
  23. Change the Server Type to LDAP.
  24. Select the LDAP server you created and click OK.

Thresholds

  1. Go to NetScaler Insight Center > Thresholds.
  2. On the right, click Add.
  3. Enter a name.
  4. In the Entity field select a category of alerts. What you choose here determines what’s available in the Rule section.
  5. Check the box to Notify through Email.
  6. In the Rule section, select a rule and enter threshold values. Click Create.

Geo Map

  1. Download the Maxmind database from http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz.
  2. Extract the .gz file.
  3. On the Configuration tab, expand NetScaler Insight Center and click Geo Database Files.
  4. On the right, click the Action drop-down and click Upload.
  5. Browse to the extracted GeoLiteCity.dat file and click Upload.
  6. Click the Inventory node.
  7. Click the IP address for a device in the inventory.
  8. Check the box to Enable Geo data collection for Web and HDX Insight.
  9. You can define Geo locations for internal subnets. Go to NetScaler Insight Center > Private IP Block.
  10. On the right, click Add.
  11. Enter a name.
  12. Enter the starting and ending IP address.
  13. Select a Geo Location. Note that these are not necessarily alphabetical.
  14. Click Create.

Director Integration

Integrating Insight Center with Director requires XenApp/XenDesktop to be licensed for Platinum Edition. The integration adds Network tabs to the Trends and Machine Details views.

If using HTTPS to connect to Insight Center then the Insight Center certificate must be valid and trusted by both the Director Server and the Director user’s browser.

To link Citrix Director with NetScaler HDX Insight, on the Director server run C:\inetpub\wwwroot\Director\tools\DirectorConfig.exe /confignetscaler. Do this on both Director servers.

Use Insight Center

HDX Insight

HDX Insight Dashboard displays ICA session details including the following:

  • WAN Latency
  • DC Latency
  • RTT (round trip time)
  • Retransmits
  • Application Launch Duration
  • Client Type/Version
  • Bandwidth
  • Licenses in use

HDX Insight can also display Geo Maps. Configure Insight Center with Private IP Blocks.

More info at HDX Insight Reports and Use Cases: HDX Insight at docs.citrix.com.

Gateway Insight

Insight Center 11.0 build 65 adds a new Gateway Insight dashboard.

This feature displays the following details:

  • Gateway connection failures due to failed EPA scans, failed authentication, failed SSON, or failed application launches.
  • Bandwidth and Bytes Consumed for ICA and other applications accessed through Gateway.
  • # of users
  • Session Modes (clientless, VPN, ICA)
  • Client Operating Systems
  • Client Browsers

More details at Gateway Insight at docs.citrix.com.

Security Insight

The new Security Insight dashboard in 11.0 build 65 and newer uses data from Application Firewall to display Threat Index (criticality of attack), Safety Index (how securely NetScaler is configured), and Actionable Information. More info at Security Insight at docs.citrix.com.

Troubleshooting

Citrix CTX215130 HDX Insight Diagnostics and Troubleshooting Guide: syslog messages, error counters, troubleshooting checklist, and logs

Citrix Blog Post: NetScaler Insight Center – Tips, Troubleshooting and Upgrade

See docs.citrix.com Troubleshooting Tips. Here are sample issues covered in docs.citrix.com:

  • Can’t see records on Insight Center dashboard
  • ICA RTT metrics are incorrect
  • Can’t add NetScaler appliance to inventory
  • Geo maps not displaying

Upgrade Insight Center

  1. Download the latest Upgrade Pack for Insight Center.
  2. Login to Insight Center.
  3. If you are running Insight Center 10.5 or older, on the Configuration tab, go to NetScaler Insight Center > Software Images and upload the file. If running Insight Center 11.0 or newer, you can skip this step.
  4. On the Configuration tab, on the left, click the System node.
  5. On the right, click Upgrade NetScaler Insight Center.
  6. Browse to the build-analytics-11.0.tgz Software Image Upgrade Pack and click OK.
  7. Click Yes to reboot the appliance.

  8. After it reboots, login. The new firmware version will be displayed in the top right corner.

RADIUS Authentication – NetScaler Gateway 10.5

Last Modified: Nov 6, 2020 @ 7:08 am

Navigation

RADIUS Overview

For two-factor authentication using Azure Multi-factor Authentication, see Jason Samuel How to deploy Microsoft Azure MFA & AD Connect with Citrix NetScaler Gateway

Citrix CTX125364 How to Configure Dual Authentication on NetScaler Gateway Enterprise Edition for Use with iPhone and iPad

Some two-factor products (e.g. SMS Passcode) require you to hide the 2nd password field. Receiver 4.4 and newer supports hiding the 2nd field if you configure a Meta tag in index.html. See CTX205907 Dual-Password Field Shows in First Authentication When Connecting to NetScaler Gateway from Windows Receiver for instructions.

Two-factor authentication to NetScaler Gateway requires the RADIUS protocol to be enabled on the two-factor authentication product.

On your RADIUS servers, you’ll need to add the NetScaler appliances as RADIUS Clients. When NetScaler uses a local (same appliance) load balanced Virtual Server for RADIUS authentication, the traffic is sourced from the NetScaler SNIP (Subnet IP). When NetScaler uses a direct connection to a RADIUS Server without going through a load balancing Virtual Server, or uses a remote (different appliance) Load Balancing Virtual Server, the traffic is sourced from the NetScaler NSIP (NetScaler IP). Use the correct IP(s) when adding the appliances as RADIUS Clients. And adjust firewall rules accordingly.

For High Availability pairs, if you locally load balance RADIUS, then you only need to add the SNIP as a RADIUS Client since the SNIP floats between the two appliances. However, if you are not locally load balancing RADIUS, then you’ll need to add the NSIP of both appliances as RADIUS Clients. Use the same RADIUS Secret for both appliances.

Two-factor Policies Summary

When configuring the NetScaler Gateway Virtual Server, you can specify both a Primary authentication policy and a Secondary authentication policy. Users are required to successfully authenticate against both before being authorized for NetScaler Gateway.

For browser-based StoreFront, you need two authentication policies:

  • Primary = LDAPS authentication policy pointing to Active Directory Domain Controllers.
  • Secondary = RADIUS authentication policy pointing to RSA servers with RADIUS enabled.

For Receiver Self-service (native Receiver on mobile, Windows, and Mac), the authentication policies are swapped:

  • Primary = RADIUS authentication policy pointing to RSA servers with RADIUS enabled.
  • Secondary = LDAPS authentication policy pointing to Active Directory Domain Controllers.

If you need to support two-factor authentication from both web browsers and Receiver Self-Service, then you’ll need at least four authentication policies as shown below.

Primary:

  • Priority 90 = RADIUS policy. Expression = REQ.HTTP.HEADER User-Agent CONTAINS CitrixReceiver
  • Priority 100 = LDAP policy. Expression = REQ.HTTP.HEADER User-Agent NOTCONTAINS CitrixReceiver

Secondary:

  • Priority 90 = LDAP policy. Expression = REQ.HTTP.HEADER User-Agent CONTAINS CitrixReceiver
  • Priority 100 = RADIUS policy. Expression = REQ.HTTP.HEADER User-Agent NOTCONTAINS CitrixReceiver

Create Two-factor Policies

Do the following to create the Two-factor policies:

  1. Create an LDAP policy/server. (A sample ldapAction command is sketched after this list.)
  2. For RADIUS, on the left, expand NetScaler Gateway, expand Policies, expand Authentication, and click Radius.
  3. On the right, switch to the Servers tab. Click Add.
  4. Give the RADIUS server a name.
  5. Specify the IP address of the RADIUS load balancing Virtual Server.
  6. Enter the secret key specified when you added the NetScalers as RADIUS clients on the RADIUS server. Click Create.

    add authentication radiusAction RSA -serverIP 10.2.2.210 -serverPort 1812 -radKey Passw0rd
  7. On the right, switch to the Policies tab, and click Add.
  8. Name it RSA-SelfService or similar.
  9. Select the RADIUS server created earlier.
  10. Enter an expression. You will need two policies with different expressions. The expression for Receiver Self-Service is REQ.HTTP.HEADER User-Agent CONTAINS CitrixReceiver.
  11. Click Create.

    add authentication radiusPolicy RSA-ReceiverForWeb "REQ.HTTP.HEADER User-Agent NOTCONTAINS CitrixReceiver" RSA
    
    add authentication radiusPolicy RSA-ReceiverSelfService "REQ.HTTP.HEADER User-Agent CONTAINS CitrixReceiver" RSA
    
    add authentication ldapPolicy Corp-Gateway-ReceiverForWeb "REQ.HTTP.HEADER User-Agent NOTCONTAINS CitrixReceiver" Corp-Gateway
    
    add authentication ldapPolicy Corp-Gateway-ReceiverSelfService "REQ.HTTP.HEADER User-Agent CONTAINS CitrixReceiver" Corp-Gateway
  12. Create a second RADIUS policy so that you have the two policies shown below. Both RADIUS policies are configured with the same RADIUS server. The only difference between them is the expression (CONTAINS vs. NOTCONTAINS).
    Name | Expression | Server
    RSA-SelfService | REQ.HTTP.HEADER User-Agent CONTAINS CitrixReceiver | RSA
    RSA-Web | REQ.HTTP.HEADER User-Agent NOTCONTAINS CitrixReceiver | RSA

  13. Go to NetScaler Gateway\Policies\Authentication\LDAP. On the Policies tab, create two policies with the expressions shown below. Both LDAP policies are configured with the same LDAP server. The only difference between them is the expression (CONTAINS vs NOTCONTAINS).
    Name | Expression | Server
    LDAP-Corp-SelfService | REQ.HTTP.HEADER User-Agent CONTAINS CitrixReceiver | LDAP-Corp
    LDAP-Corp-Web | REQ.HTTP.HEADER User-Agent NOTCONTAINS CitrixReceiver | LDAP-Corp
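Step 1 above assumes an LDAP server (ldapAction) already exists. If you still need to create one, a minimal sketch of the NetScaler CLI is below, using the Corp-Gateway name referenced by the ldapPolicy commands in step 11; the IP address, base DN, bind account, and password are examples for your environment.

    add authentication ldapAction Corp-Gateway -serverIP 10.2.2.220 -serverPort 636 -secType SSL -ldapBase "DC=corp,DC=local" -ldapBindDn "svc_ns_ldap@corp.local" -ldapBindDnPassword Passw0rd -ldapLoginName sAMAccountName -groupAttrName memberOf -subAttributeName CN -passwdChange ENABLED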

Bind Two-factor Policies to Gateway

  1. When you create the NetScaler Gateway Virtual Server, bind the policies as shown in the following table. Priority doesn’t matter because they are mutually exclusive.
    Policy Name | Type | Bind Point
    LDAP-Corp-Web | LDAP | Primary
    RSA-SelfService | RADIUS | Primary
    LDAP-Corp-SelfService | LDAP | Secondary
    RSA-Web | RADIUS | Secondary

    bind vpn vserver gateway.corp.com -policy Corp-Gateway-ReceiverForWeb -priority 100
    
    bind vpn vserver gateway.corp.com -policy RSA-ReceiverSelfService -priority 110
    
    bind vpn vserver gateway.corp.com -policy RSA-ReceiverForWeb -priority 100 -secondary
    
    bind vpn vserver gateway.corp.com -policy Corp-Gateway-ReceiverSelfService -priority 110 -secondary
  2. The session policy/profile for Receiver Self-Service needs to be adjusted to indicate which authentication field contains the Active Directory password. On the Client Experience tab of the Session Profile, change Credential Index to SECONDARY. Leave the session policy for Web Browsers set to PRIMARY.

    set vpn sessionAction "Receiver Self-Service" -ssoCredential SECONDARY
  3. On the StoreFront server, when creating the NetScaler Gateway object, change the Logon type to Domain and security token.

RADIUS Load Balancing – NetScaler 10.5

Last Modified: Nov 6, 2020 @ 6:54 am

Navigation

RADIUS Load Balancing Overview

Two-factor authentication to NetScaler Gateway requires the RADIUS protocol to be enabled on the two-factor authentication product.

On your RADIUS servers you’ll need to add the NetScaler appliances as RADIUS Clients. When NetScaler uses a local (same appliance) load balanced Virtual Server for RADIUS authentication, the traffic is sourced from the NetScaler SNIP (Subnet IP). When NetScaler uses a direct connection to a RADIUS Server without going through a load balancing Virtual Server, or uses a remote (different appliance) Load Balancing Virtual Server, the traffic is sourced from the NetScaler NSIP (NetScaler IP). Use the correct IP(s) when adding the NetScaler appliances as RADIUS Clients. And adjust firewall rules accordingly.

For High Availability pairs, if you locally load balance RADIUS, then you only need to add the SNIP as a RADIUS Client since the SNIP floats between the two appliances. However, if you are not locally load balancing RADIUS, then you’ll need to add the NSIP of both appliances as RADIUS Clients. Use the same RADIUS Secret for both appliances.

When load balancing RADIUS, you’ll want a monitor that verifies that the RADIUS server is functional. The RADIUS monitor will login to the RADIUS server and look for a response. You will need static credentials that the RADIUS monitor can use to login to the RADIUS server.

If you don’t want your monitor to login to RADIUS, then the only other monitoring option is Ping. Adjust the firewall accordingly.

If you have RADIUS Servers in multiple datacenters, you can create multiple load balancing Virtual Servers and cascade them so that the local RADIUS Servers are used first and if they’re not available then the Virtual Server fails over to RADIUS Servers in remote datacenters.

RADIUS Monitor

The RADIUS Monitor attempts to successfully log into the RADIUS server. For RSA, create an account on RSA with the following parameters as mentioned by Jonathan Pitre:

  • Set up a user with a fixed passcode in your RSA console.
  • Log in to the RSA console with that user at least once, because you’ll be asked to change the passcode on first login.
  • There is no need to assign a token to your monitor user as long as you are using a fixed passcode. You don’t want to waste a token on a user just for monitoring.

Henny Louwers – Configure RSA RADIUS monitoring on NetScaler:

  1. In the NetScaler Configuration Utility, on the left under Traffic Management > Load Balancing, click Monitors.
  2. On the right, click Add.
  3. Name the monitor RSA or similar. Change the Type drop-down to RADIUS.
  4. On the Standard Parameters tab, you might have to increase the Response Time-out to 4 seconds.
  5. On the Special Parameters tab, enter valid RADIUS credentials. Make sure these credentials do not change or expire. For RSA, in the Password field, enter the fixed passcode.
  6. Also enter the RADIUS key configured on the RADIUS server for the NetScaler as RADIUS client.
  7. For Response Codes, add both 2 and 3. 2 means success, while 3 indicates some kind of failure. Either result means that the RADIUS server is responding and thus is probably functional, but 2 is the ideal response.
  8. Click Create when done.
    add lb monitor RSA RADIUS -respCode 2-3 -userName ctxsvc -password Passw0rd -radKey Passw0rd -resptimeout 4

Servers

  1. On the left, expand Traffic Management, expand Load Balancing, and click Servers.
  2. On the right, click Add.
  3. Enter a descriptive server name; usually it matches the actual server name.
  4. Enter the IP address of the server.
  5. Enter comments to describe the server. Click Create.

    add server RSA01 192.168.123.13
    add server RSA02 192.168.123.14
  6. Continue adding RADIUS servers.

Service Groups

  1. On the left, expand Traffic Management, expand Load Balancing, and click Service Groups.
  2. On the right click Add.
  3. You will create one Service Group per datacenter. Enter a name reflecting the name of the datacenter.
  4. Change the Protocol to RADIUS.
  5. Click OK.
  6. On the right, in the Advanced column, click Members.
  7. On the left, in the Service Group Members section, click where it says No Service Group Member.
  8. If you did not create server objects then enter the IP address of a RADIUS Server in this datacenter. If you previously created a server object, then change the selection to Server Based, and select the server object.
  9. In the Port field, enter 1812 (RADIUS).
  10. Click Create.

  11. To add more members, in the Service Group Members section, click where it says 1 Service Group Member.
  12. Click Add to add another member. Click Close when done.
  13. On the right, in the Advanced column, click Monitors.
  14. On the left, in the Monitors section, click where it says No Service Group to Monitor Binding.
  15. Click the arrow next to Click to select.
  16. Select your new RADIUS monitor, and click OK.
  17. Click Bind.
  18. To verify the member is up, click in the Service Group Members section.
  19. Highlight a member and click Monitor Details.
  20. It should say Radius response code 2 (or 3) received. Click OK.
  21. Click Done to finish creating the Service Group.

    add serviceGroup svcgrp-RSA RADIUS
    bind serviceGroup svcgrp-RSA RSA01 1812
    bind serviceGroup svcgrp-RSA -monitorName RSA
  22. The Service Group is displayed as UP.
  23. Add additional Service Groups for the RADIUS servers in each data center, as sketched below.
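For example, a Service Group for a second datacenter might look like the following, assuming RSA02 (created in the Servers section above) is the RADIUS server in that datacenter; the svcgrp-RSA-DR name is an example.

    add serviceGroup svcgrp-RSA-DR RADIUS
    bind serviceGroup svcgrp-RSA-DR RSA02 1812
    bind serviceGroup svcgrp-RSA-DR -monitorName RSA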

Virtual Server

  1. On the left, expand Traffic Management, expand Load Balancing, and click Virtual Servers.

  2. On the right, click Add.
  3. Name it lbvip-RADIUS-HQ or similar. You will create one Virtual Server per datacenter so include the datacenter name.
  4. Change the Protocol drop-down to RADIUS.
  5. Enter a Virtual IP. This VIP cannot conflict with any other IP + Port already being used. You can use an existing VIP if the VIP is not already listening on UDP 1812.
  6. Enter 1812 as the Port. Click OK.
  7. In the Services and Service Groups section, click where it says No Load Balancing Virtual Server ServiceGroup Binding.
  8. Click the arrow next to Click to select.
  9. Select a previously created Service Group and click OK.
  10. Click Bind.
  11. Click OK.
  12. Configuring RADIUS Load Balancing with Persistence at Citrix Docs recommends Rule Based Load Balancing. On the right, in the Advanced Settings column, add the Method section.
  13. Change the Load Balancing Method to TOKEN.
  14. In the Expression field, enter CLIENT.UDP.RADIUS.USERNAME and click OK.
  15. Click Done to finish creating the Virtual Server.
  16. If you are configuring this RADIUS Load Balancer for more than just NetScaler Gateway, you can add another Load Balancer on port 1813 for RADIUS Accounting. Then you need a Persistency Group to tie the two load balancers together. See Configuring RADIUS Load Balancing with Persistence at Citrix Docs.
    add lb vserver lbvip-RSA RADIUS 10.2.2.210 1812 -persistenceType RULE -lbMethod TOKEN -rule CLIENT.UDP.RADIUS.USERNAME
    bind lb vserver lbvip-RSA svcgrp-RSA
  17. The new Virtual Server should show as Up.
  18. Create additional Virtual Servers for each datacenter. These additional Virtual Servers do not need a VIP so change the IP Address Type to Non Addressable. Only the first Virtual Server will be directly accessible.

    add lb vserver lbvip-RSA-Backup RADIUS 0.0.0.0 0 -persistenceType NONE -cltTimeout 120
  19. Notice that the additional datacenter Virtual Servers show up with an IP Address of 0.0.0.0 and port of 0.
  20. After you are done creating a Virtual Server for each datacenter, right-click the primary datacenter’s Virtual Server, and click Edit.
  21. On the right, in the Advanced column, click Protection.
  22. On the left, in the Protection section, change the Backup Virtual Server to one of the other datacenter Virtual Servers. If all of the services in this datacenter are DOWN, the backup Virtual Server will be used instead. You can cascade multiple Virtual Servers using this method. Click OK and Done.

    set lb vserver lbvip-RSA -backupVServer lbvip-RSA-Backup
  23. You may now use this Virtual IP in your RADIUS authentication policies for NetScaler Gateway or NetScaler management login.

Citrix ADC and CVAD Firewall Rules

Last Modified: Jul 8, 2021 @ 6:45 am

Navigation

See CTX101810 Communication Ports Used by Citrix Technologies

💡 = Recently Updated

Change Log

Citrix ADC Firewall Rules

From | To | Protocol / Port | Purpose
Administrator machines | NSIPs (and/or SNIPs) | TCP 22, TCP 80, TCP 443, TCP 3010, TCP 3008 | SSH and HTTP/SSL access to the NetScaler configuration GUI. TCP 3010 and TCP 3008 are used by the Java GUI (3008 when traffic is encrypted); Java is not needed in 10.5 build 57 and newer.
Administrator machines | NetScaler SDX SVM, XenServer | TCP 22, TCP 80, TCP 443 | To administer NetScaler SDX
Administrator machines | NetScaler Lights Out Module | TCP 443, TCP 623, TCP 5900 | CTX200367
NSIP, SNIP | DNS servers | Ping, UDP 53, TCP 53 | Ping is used for monitoring. Can be turned off by load balancing DNS on the same appliance.
NSIPs, SNIP | NetScaler MAS | TCP 27000, TCP 7279 | Pooled Licensing
NSIPs, SNIP | NTP servers | UDP 123 | NTP
NSIPs, SNIP | Syslog server | UDP 514 | Syslog
NSIPs | callhome.citrix.com, cis.citrix.com, taas.citrix.com | TCP 443 | Call Home
NSIPs (default), or SNIP if load balanced on the same appliance | LDAP Servers (Domain Controllers) | TCP 389 (Start TLS), TCP 636 (Secure LDAP) | Secure LDAP requires certificates on the Domain Controllers. Secure LDAP enables password changes when they expire.
NSIPs | LDAP Servers | TCP 389, TCP 636 | Monitor Domain Controllers
NSIPs (default), or SNIP if load balanced on the same appliance | RADIUS servers | UDP 1812 | RADIUS is used for two-factor authentication.
SNIP | RADIUS servers | UDP 1812, Ping | Monitor RADIUS servers
NetScaler SDX Service virtual machine | NSIPs | Ping, TCP 22, TCP 80, TCP 443 | Only if NetScaler VPX runs as a virtual machine on top of NetScaler SDX
Local GSLB Site IP, SNIP | GSLB Site IP (public IP) in other datacenter | TCP 3009, TCP 3011 | GSLB Metric Exchange Protocol between appliance pairs
NSIPs | GSLB Site IP (public IP) in other datacenter | TCP 22, TCP 3008, TCP 3010 | GSLB Configuration Sync
Local GSLB Site IP, SNIP | All Internet | Ping, UDP 53, TCP (high ports) | RTT to DNS Servers for Dynamic Proximity determination
SNIP | StoreFront Load Balancing VIP | TCP 443 | NetScaler Gateway communicates with StoreFront
SNIP | StoreFront servers | TCP 80, TCP 443, TCP 808 | StoreFront Load Balancing
NSIPs | StoreFront servers | TCP 80, TCP 443 | Monitor StoreFront servers
StoreFront servers | NetScaler Gateway VIP (DMZ IP) | TCP 443 | Authentication callback from StoreFront server to NetScaler Gateway
SNIP | Each individual Delivery Controller in every datacenter | TCP 80, TCP 443 | Secure Ticket Authorities. This cannot be load balanced. TCP 443 only if certificates are installed on the Delivery Controllers.
SNIP | All internal virtual desktops and session hosts (subnet rule?) | TCP 1494, TCP 2598, UDP 1494, UDP 2598, UDP 16500-16509 | HDX ICA, Enlightened Data Transport, Session Reliability, UDP Audio
All Internet, All internal users | NetScaler Gateway VIP (public IP) | TCP 80, TCP 443, UDP 443 | Connections from browsers and native Receivers; DTLS for UDP Audio
All Internet, All internal DNS servers, SNIP | ADNS Listener (Public IP) | UDP 53, TCP 53 | ADNS (for GSLB)
Web logging server | NSIPs | TCP 3010 | Web logging polls the NetScalers
NSIPs | NetScaler MAS or other SNMP Trap Destination | UDP 161, UDP 162 | SNMP Traps
NSIPs, SNIP | NetScaler MAS or other AppFlow Collector | UDP 4739, TCP 5557, TCP 5558, TCP 5563 | AppFlow (IPFIX, Logstream, and Metrics)
NSIP | mfa.cloud.com, trust.citrixworkspacesapi.net | TCP 443 | Native OTP Push (DNS required)
  • Authentication traffic uses NSIPs by default. This can be changed by creating a local Load Balancing Virtual Server on the same appliance and sending authentication traffic through the Load Balancing VIP.
  • Several of the Load Balancing monitors run as Perl scripts, which are sourced from the NSIPs, not SNIP. But actual load balancing traffic uses SNIP as the source IP.
  • DNS Name Servers use ping for monitoring. This can be disabled by creating a local Load Balancing Virtual Server on the same appliance and sending DNS traffic through the load balancer.
  • On an ADC with a dedicated management network and the default route on a different data network, configure Policy Based Routes (PBRs) to send NSIP-sourced traffic through a router on the NSIP subnet (see the sample commands below).
  • Logstream defaults to SNIP as source but can be changed to NSIP. See CTX286215.
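A minimal PBR sketch for that scenario, assuming the NSIP is 192.168.10.10 and the management-network router is 192.168.10.1 (both addresses are examples):

    add ns pbr PBR-Mgmt ALLOW -srcIP = 192.168.10.10 -nextHop 192.168.10.1
    apply ns pbrs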

Citrix ADM Firewall Rules

Citrix Application Delivery Management (ADM) monitors and manages the ADC appliances.

From | To | Protocol / Port | Purpose
ADM Floating IP, ADM Agent | NSIPs | Ping, TCP 22, TCP 80, TCP 443 | Discovery and configuration of ADC devices
NSIPs | ADM Floating IP, ADM Agent | TCP 80, TCP 443 | Nitro
ADM (Primary, Secondary) | NSIPs | UDP 161 | SNMP
ADM Agents | ADM Floating IP | TCP 443, TCP 7443, TCP 8443 | Agent Communication
NSIPs | ADM Floating IP, ADM Agent | UDP 4739 | AppFlow
SNIP | ADM Floating IP, ADM Agent | TCP 5563 | Metrics Collector
NSIPs, SNIP | ADM Floating IP, ADM Agent | TCP 5557, TCP 5558 | Logstream (ULFD)
NSIPs | ADM Floating IP, ADM Agent | UDP 161, UDP 162 | SNMP Traps
NSIPs | ADM Floating IP, ADM Agent | UDP 514 | Syslog
CPX NSIPs, VPX NSIPs | ADM Floating IP, ADM Agent | TCP 27000, TCP 7279 | Pooled Licensing
Administrator Machines | ADM Floating IP, ADM Agent | TCP 22, TCP 80, TCP 443 | Web-based GUI
Director Servers | ADM Floating IP | TCP 80, TCP 443 | Insight Integration with Director
ADM | LDAP(S), LDAP(S) VIP | TCP 389, TCP 636 | LDAP authentication
ADM | Mail Server | TCP 25 | Email alerts
ADM | NTP Server | UDP 123 | NTP
ADM | Syslog Server | UDP 514 | Syslog

Citrix Virtual Apps and Desktops Firewall Rules

From | To | Protocol / Port | Purpose
Administrator machines | Delivery Controllers | TCP 80/443, TCP 3389 | PowerShell, RDP
Delivery Controllers | SQL Server | TCP 1433, UDP 1434, other static port | SQL database
Delivery Controllers | vCenter | TCP 443 | vCenter
Delivery Controllers | SCVMM (Hyper-V) | TCP 8100 | SCVMM
Delivery Controllers | Citrix Licensing | TCP 27000, TCP 7279, TCP 8082-8083 | Citrix Licensing
StoreFront servers | Delivery Controllers | TCP 80, TCP 443 | XML, Secure Ticket Authority
StoreFront servers | StoreFront servers | TCP 808 | Subscription Replication
StoreFront servers | Domain Controllers in Trusted Domains | TCP 88, TCP 135, TCP 445, TCP 389/636, TCP 49151-65535 (RPC) | See Citrix Discussions
Administrator machines | StoreFront servers | TCP 3389 | RDP
Administrator machines | Citrix Licensing | TCP 8082-8083, TCP 3389 | Web-based administration GUI, RDP
Delivery Controllers | All VDAs | TCP 80 | Brokering
All VDAs | Delivery Controllers | TCP 80 | Registration
All VDAs | Global Catalogs (Domain Controllers) | TCP 3268 | Registration
All Server OS VDAs | Remote Desktop Licensing Server | RPC and SMB | Remote Desktop Licensing
All Workspace apps (Internal) | StoreFront SSL Load Balancing VIP | TCP 80, TCP 443 | Internal access to StoreFront
All Workspace apps | Citrix Gateway VIP | TCP 80, TCP 443 | External (or internal) access to Citrix Gateway
All Workspace apps (Internal) | All VDAs | TCP 1494, UDP 1494, TCP 2598, UDP 2598, UDP 16500-16509 | ICA/HDX, EDT, Session Reliability, UDP Audio
Administrator machines | Director | TCP 3389 | RDP
Administrator machines, Help Desk machines | Director | TCP 80, TCP 443 | Web-based GUI
Director | Delivery Controllers | TCP 80, TCP 443 | Director
Administrator machines, Help Desk machines | All VDAs | TCP 135, TCP 3389 | Remote Assistance

Also see Microsoft Technet Which ports are used by a RDS 2012 deployment?

Citrix Provisioning Firewall Rules

From | To | Protocol / Port | Purpose
Provisioning Servers | SQL Server | TCP 1433, UDP 1434, other static port | SQL database for Provisioning Services
Provisioning Servers | Provisioning Servers | SMB | File copy of vDisk files
Provisioning Servers | Provisioning Servers | UDP 6890-6909 | Inter-server communication
Provisioning Servers | Citrix Licensing | TCP 27000, TCP 7279, TCP 8082-8083, TCP 80 | Citrix Licensing
Provisioning Servers | Controllers | TCP 80, TCP 443 | Setup Wizards to create machines
Provisioning Servers | vCenter | TCP 443 | Setup Wizards to create machines
Provisioning Servers | Target Devices | UDP 6901, UDP 6902, UDP 6905 | Provisioning Services Console Target Device power actions (e.g. Restart)
Administrator machines | Provisioning Servers | TCP 3389, TCP 54321, TCP 54322, TCP 54323 | RDP, SOAP
Controllers | Provisioning Servers | TCP 54321, TCP 54322, TCP 54323 | Add machines to Catalog
Target Devices | DHCP Servers | UDP 67 | DHCP
Target Devices | KMS Server | TCP 1688 | KMS Licensing
Target Devices | Provisioning Servers | UDP 69, UDP 67/4011, UDP 6910-6969 | TFTP, PXE, Streaming (expanded port range)
Target Devices | Provisioning Servers | UDP 6969, UDP 2071 | Two-stage boot (BDM)
Target Devices | Provisioning Servers | TCP 54321, TCP 54322, TCP 54323 | Imaging Wizard to SOAP Service