- Persistent vs Non-persistent
- Zones (XenApp/XenDesktop 7.7 and newer)
- Zone Preference (XenApp/XenDesktop 7.11 and newer)
- Machine Creation Services
- Controller – Name Cache 💡
- Delivery Group Published Apps and Desktops in 7.8 and newer
- Tags in XenApp/XenDesktop 7.12 and newer 💡
- RDSH Scheduled Restart
- Allow one user to have Multiple Sessions
- Static Catalog – Export/Import Machine Assignments
- Monitor Number of Free Desktops
- Published Applications
💡 = Recently Updated
Persistent vs Non-persistent
VDA design – One of the tasks of a Citrix Architect is VDA design. There are many considerations, including the following:
- Machine type – single user (virtual desktop), or multi-user (Remote Desktop Session Host). RDSH is more hardware efficient.
- Machine operating system – Windows 7, Windows 10, Windows Server 2008 R2, Windows Server 2012 R2, Windows Server 2016
- Machine persistence – persistent, non-persistent
- Number of new machines – concurrent vs named-users
- Machine provisioning – full clones, Machine Creation Services (MCS), Provisioning Services (PvS)
- Hardware for the new machines – hypervisor clusters, storage
- How the machines are updated – SCCM, MCS, PvS, etc.
- Application integration – locally installed, App-V, Layering, XenApp published, leave on local endpoint machine, cloud apps, etc.
- User Profiles – roaming, mandatory, home directories
- Group Policies – session lockdown, automation
- Disaster Recovery – replication. VDAs running in a warm site. DR for profiles and home directories too.
Desktop Management in a Citrix environment – Some environments try to use Citrix to improve desktop management. Here are some desktop management aspects of Citrix that aren’t possible with distributed physical desktops:
- Datacenter network speeds – The VDAs have high speed connectivity to the desktop management tools, which eliminates WAN bandwidth as a desktop management consideration. For example, you can use Microsoft App-V to stream apps to VDAs.
- Non-persistence – Non-persistent VDAs revert at every reboot. To update non-persistent VDAs, simply update your master image.
- Layering – The VDA VMs can be composed of multiple layers that are combined during machine boot, or when the user logs in. Citrix AppDisk and Unidesk are examples of this technology. A single layer can be shared by multiple VDAs. The layers are updated once, and all machines using the layer receive the updated layer at next boot/login.
Non-persistent VDAs – Probably the easiest of these desktop-management technologies to implement is non-persistence. However, there are several drawbacks to non-persistence:
- Master Images must be designed – Which apps go on which master image? Do you install the same app on multiple master images?
- How do you know which apps a user needs? – Most Citrix admins, and even desktop teams, don’t know every app that a user needs. You can use tools like Liquidware Labs or Lakeside Software to discover app usage, but it’s a very complicated process to find commonality across multiple users.
- How are One-off apps handled? – If you have an app used by only a small number of users, do you add it to one of your master images? Do you create a new master image? Do you publish it from XenApp (double hop)? Do you stream it using App-V? Layering is another option.
- Application Licensing – for licensed apps, do you install the licensed app into the master image and try to hide it from non-licensed users? Or do you create a new master image for the licensed users?
- Patching multiple images – when a new OS patch needs to be deployed, you have to update every master image running that OS version. Thus Citrix admins usually try to limit the number of master images, which makes image design more complicated.
- How do you manage an app that is installed on multiple master images? – Layering might help with this.
- Who manages the master images? – Citrix admins? Desktop team? It’s unlikely that traditional desktop management tools (e.g. SCCM) will ever be completely removed from an enterprise environment, which means that master image management is an additional task that was not performed before. Does the Citrix admin team have the staff to take on this responsibility? Would the desktop management team be willing to perform this new process?
- Politically feasible? – Large enterprises usually have mature desktop management practices. Would this new process interfere with existing desktop management requirements?
- Responsibility – if the Citrix admins are not maintaining the master images, and if a Catalog update causes user problems, who is responsible?
- RDSH Apps are complicated – who is responsible for integrating apps into Remote Desktop Session Host (XenApp)? Does the desktop team have the skills to perform the additional RDSH testing?
- Change Control – Longer Deployment Times – Any change to a master image would affect every machine/user using that image, thus dev/QA testing is recommended for every change, which slows down app update deployment. And once a change is made to the master, it doesn’t take effect until the user’s VDA is rebooted.
- Roaming Profiles – some apps (e.g. Office) save user settings in user profiles. Since the machines are non-persistent, the profiles would be lost on every reboot unless roaming profiles are implemented. This adds a dependency on roaming profile configuration, and the roaming profile file share.
- How is the Outlook OST file handled? – With Cloud Hosted Exchange, for best performance, Outlook needs to run in Cached Exchange mode. How is the large OST file roamed? One option is to use group policy to minimize the size of the OST file. Another is to purchase a 3rd party OST handling product like FSLogix.
- IT Applications (e.g. antivirus) on non-persistent machines – Many IT apps (antivirus, asset mgmt, security, etc.) have special instructions to work on non-persistent machines. Search the vendor’s knowledgebase for VDI, non-persistent, Citrix, etc. Antivirus in particular has a huge impact on VDA performance. And the special instructions for non-persistent VDAs are in addition to normal antivirus configuration.
- Connection Leasing does not support non-persistent virtual desktops – if the XenDesktop SQL database is down, Connection Leasing won’t help you. It’s not possible to connect to non-persistent virtual desktops until the XenDesktop SQL database connection is recovered. This affects multi-datacenter designs.
Application Integration Technologies – Additional technologies can be used to overcome some of the drawbacks of non-persistent machines:
- Microsoft App-V – this technology can dynamically stream apps to a non-persistent image. Different users get different apps. And the apps run in isolated bubbles. However:
- App-V is an additional infrastructure that must be built and maintained.
- App-V requires additional skills for the people packaging the apps, and the people troubleshooting the apps.
- Since the apps are isolated, app interaction is configured manually.
- Because of application isolation, not every app can run in App-V. Maybe 60-80% of apps might work. How do you handle apps that don’t work?
- Layering – each application is a different layer (VHD file). The layering tool combines multiple layers into a single unified image. Layers are updated in one place, and all images using the layer are updated, which solves the issue of a single app in multiple images. Layering does not use application isolation, so almost 100% of apps should work with layering. Layers can be mounted dynamically based on who’s logging in. There’s also a persistent layer that lets users install apps, or admins can install one-off apps. Unidesk is probably the most feature rich of the layering products. However:
- Unidesk is not free. Citrix AppDisk is free, but its features are very limited.
- Unidesk is a separate infrastructure that must be built and maintained. Citrix AppDisk is built into XenDesktop.
- Somebody has to create the layers. This is extremely easy in Unidesk since you simply install the applications normally (no new skills to learn). However, it’s an additional task on top of normal desktop management packaging duties.
Persistent virtual desktops – Another method of building VDAs is by creating full clone virtual desktops that are persistent. Each virtual desktop is managed separately using traditional desktop management tools. If your storage is an All Flash Array with inline deduplication and compression, then full clone persistent virtual desktops probably take no more disk space than non-persistent linked clones. (Note: persistent RDSH VDAs are not included in this section since RDSH user sessions are essentially non-persistent) Here are some advantages of full clone persistent virtual desktops as opposed to non-persistent VDAs:
- Skills and Processes – No new skills to learn. No new desktop management processes. Use existing desktop management tools (e.g. SCCM). The existing desktop management team can manage the persistent virtual desktops, which reduces the workload of the Citrix admins.
- One-off applications – If a user needs a one-off application, simply install it on the user’s persistent desktop. The application can be user-installed, SCCM self-service installed, or administrator installed.
- User Profile – Outlook’s OST file is no longer a concern since the user’s profile persists on the user’s virtual desktop. It’s not necessary to implement roaming profiles when using persistent virtual desktops. If you want a process to move a user profile from one persistent virtual desktop to another, how do you do it on physical desktops today?
- API integration – a self-service portal can use VMware PowerCLI and Citrix’s PowerShell SDK to automatically create a new persistent virtual desktop for a user (a rough sketch follows this list). Chargeback can also be implemented.
- Offline XenDesktop SQL Database – if the Citrix XenDesktop SQL database is not reachable, then Citrix Connection Leasing can still broker sessions to persistent virtual desktops that have already been assigned to users. This is not possible with non-persistent virtual desktops.
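As a rough illustration of the API integration point above, here is a minimal PowerShell sketch that full-clones a desktop with PowerCLI and then assigns it to a user with the Citrix SDK. All names (vCenter, template, Catalog, Delivery Group, machine, user) are placeholder assumptions, and the Catalog is assumed to be a manually-provisioned Catalog:
# Clone a new persistent desktop from a vCenter template (PowerCLI)
Connect-VIServer -Server vcenter.corp.local
New-VM -Name "W10-JSMITH" -Template (Get-Template "Win10-Gold") `
    -ResourcePool (Get-ResourcePool "VDI") -Datastore (Get-Datastore "VDI-DS1")
# Add the new machine to a manual Catalog and Delivery Group, then assign the user (Citrix SDK)
asnp Citrix.*
New-BrokerMachine -MachineName "CORP\W10-JSMITH" -CatalogUid (Get-BrokerCatalog -Name "Persistent W10").Uid
Add-BrokerMachine -MachineName "CORP\W10-JSMITH" -DesktopGroup "Persistent W10"
Add-BrokerUser -Name "CORP\jsmith" -Machine "CORP\W10-JSMITH"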
Concurrent vs Named User – one advantage of non-persistent virtual desktops is that you only need enough virtual desktops to handle the concurrent user load. With persistent virtual desktops, you need a separate machine for each named user, whether that user is using it or not.
Disaster Recovery – for non-persistent VDAs, one option is to replicate the master images to the DR site, and then create a Catalog of machines either before the disaster, or after. If before the disaster, the VDAs will already be running and ready for connections; however, the master images are maintained separately in each datacenter.
Persistent virtual desktops have several disaster recovery options:
- Immediately after the disaster, instruct the persistent users to connect to a pool of non-persistent machines.
- In the DR site, create new persistent virtual desktops for the users. Users would then need to use SCCM or similar to reinstall their apps. Scripts can be used to backup the user’s profile and restore it on the DR desktop. This method is probably closest to how recovery is performed on physical desktops.
- The persistent virtual desktops can be replicated and recovered in the DR site. When the machines are added to Citrix Studio in DR, each machine is assigned to specific users. This process is usually scripted.
Zones (XenApp/XenDesktop 7.7 and newer)
Caveats – Zones let you stretch a single XenApp/XenDesktop site/farm across multiple datacenters. However, note these caveats:
- Studio – If all Delivery Controllers in the Primary Zone are down, then you can’t manage the farm/site. This is true even if SQL is up, and Delivery Controllers are available in Satellite Zones. It’s possible to designate an existing zone as the Primary Zone by running
Set-ConfigSite -PrimaryZone <Zone>, where <Zone> can be name, UID, or a Zone object.
- Version/Upgrade – All Delivery Controllers in the site/farm must be the same version. During an upgrade, you must upgrade every Delivery Controller in every zone.
- Offline database – In XenApp/XenDesktop 7.11 and older, there is no offline database option similar to XenApp 6.5’s Local Host Cache. If the database is down, then Connection Leasing is used. In XenApp/XenDesktop 7.12 and newer, there’s Local Host Cache. However, the LHC in 7.12 has limitations: no non-persistent desktops, maximum of 5,000 VDAs, etc. Review the Docs article for details.
- Complexity – Zones do not reduce the number of servers that need to be built. And they increase complexity when configuring items in Citrix Studio.
- Zone Preference – to choose a VDA in a particular zone, your load balancer needs to include a special HTTP header (X-Citrix-ZonePreference) that indicates the zone name. This requires StoreFront 3.7, and XenApp/XenDesktop 7.11.
The alternative to zones is to build a separate site/farm in each datacenter, and use StoreFront to aggregate the published icons. Here are benefits of multiple sites/farms as compared to zones:
- Isolation – Each datacenter is isolated. If one datacenter is down, it does not affect any other datacenter.
- Versioning – Isolation lets you upgrade one datacenter before upgrading other datacenters. For example, you can test upgrades in a DR site before upgrading production.
- SQL High Availability – since each datacenter is a separate farm/site with separate databases, there is no need to stretch SQL across datacenters.
- Home Sites – StoreFront can prioritize different farms/sites for different user groups. No special HTTP header required.
Here are some general design suggestions for XenApp/XenDesktop in multiple datacenters:
- For multiple central datacenters, build a separate XenApp/XenDesktop farm in each datacenter. Use StoreFront to aggregate the icons from all farms. Use NetScaler GSLB to distribute users to StoreFront. This provides maximum flexibility with minimal dependencies across datacenters.
- For branch office datacenters, zones with Local Host Cache (7.12 and newer) is an option. Or each branch office can be a separate farm.
Create Zones – This section details how to create zones and put resources in those zones. In 7.9 and older, there’s no way to select a zone when connecting. In 7.11 and newer, NetScaler and StoreFront can now specify a zone and VDAs from that zone will be chosen. See Zone Preference for details.
- Zones at docs.citrix.com.
- Citrix Blog Post Deep Dive: XenApp and XenDesktop 7.7 Zones
- Citrix Blog Post Zones, Latency and Brokering Performance
There is no SQL in Satellite zones. Instead, Controllers in Satellite zones connect to SQL in Primary zone. Here are tested requirements for remote SQL connectivity. You can also set HKLM\Software\Citrix\DesktopServer\ThrottledRequestAddressMaxConcurrentTransactions to throttle launches at the Satellite zone.
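If you need to set this throttle, here is a minimal sketch using standard registry cmdlets on the Satellite zone Controllers. The value of 10 is just an example, and the Broker Service restart is an assumption about when the change takes effect:
New-ItemProperty -Path "HKLM:\Software\Citrix\DesktopServer" `
    -Name "ThrottledRequestAddressMaxConcurrentTransactions" `
    -Value 10 -PropertyType DWord -Force
Restart-Service CitrixBrokerService   # assumption: restart the Broker Service so the new value is read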
From Mayunk Jain: “I guess we can summarize the guidance from this post as follows: the best practice guidance has been to recommend a datacenter for each continental area. A typical intra-continental latency is about 45ms. As these numbers show, in those conditions the system can handle 10,000 session launch requests in just under 20 minutes, at a concurrency rate of 36 requests.”
If Satellite zone loses connectivity to SQL, then the Connection Leasing feature kicks in. See docs.citrix.com Connection leasing and CTX205169 FAQ: Connection Leasing in XenApp/XenDesktop 7.6 for information on Connection Leasing limitations (e.g. no pooled virtual desktops, 2 week-old leases, etc.).
The following items can be moved into a satellite zone:
- Controllers – always leave two Controllers in the Primary zone. Add one or two Controllers to the Satellite zone.
- Hosting Connections – e.g. for vCenter in the satellite zone.
- Catalogs – any VDAs in satellite catalogs automatically register with Controllers in the same zone.
- NetScaler Gateway – requires StoreFront that understands zones (not available yet). StoreFront should be in satellite zone.
Do the following to create a zone and move items into the zone:
- In Citrix Studio 7.7 or newer, expand the Configuration node, and click Zones.
- If you upgraded from an older XenApp/XenDesktop and don’t see zones, then run the following commands:
cd 'C:\Program Files\Citrix\XenDesktopPoshSdk\Module\Citrix.XenDesktop.Admin.V1\Citrix.XenDesktop.Admin\StudioRoleConfig'
Import-AdminRoleConfiguration -Path .\RoleConfigSigned.xml
- Right-click Zones, and click Create Zone.
- Give the zone a name. Note: Citrix supports a maximum of 10 zones.
- You can select objects for moving into the zone now, or just click Save.
- Select multiple objects, right-click them, and click Move Item.
- Select the new Satellite zone and click Yes.
- To assign users to the new zone, create a Delivery Group that contains machines from a Catalog that’s in the new zone. Zone Preference requires StoreFront 3.7 and XenApp/XenDesktop 7.11.
- If your farm has multiple zones, when creating a hosting connection, you’ll be prompted to select a zone.
- If your farm has multiple zones, when creating a Manual catalog, you’ll be prompted to select a zone.
- MCS catalogs are put in a zone based on the zone assigned to the Hosting Connection.
- The Provisioning Services XenDesktop Setup Wizard ignores zones so you’ll have to move the PvS Machine Catalog manually.
- New Controllers are always added to the Primary zone. Move them manually.
Zone Preference (XenApp/XenDesktop 7.11 and newer)
XenApp/XenDesktop 7.11 adds Zone Preference, which means NetScaler (11.0 build 65 and newer) and StoreFront (3.7 and newer) can request XenDesktop Controller to provide a VDA in a specific zone.
To configure zone preference:
- Create separate Catalogs in separate zones, and add the machines to a single Delivery Group.
- You can add users to one zone by right-clicking the zone, and clicking Add Users to Zone. If there are no available VDAs in that preferred zone, then VDAs are chosen from any other zone.
- Note: a user can only belong to one zone.
- You can delete users from a zone, or move users to a different zone.
- If you edit the Delivery Group, on the Users page, you can specify that Sessions must launch in a user’s home zone.
- For published apps, on the Zone page, you can configure it to ignore the user’s home zone.
- You can also configure a published app with a preferred zone, and force it to only use VDAs in that zone. If you don’t check the box, and if no VDAs are available in the preferred zone, then VDAs can be selected from any other zone.
- Or you can right-click on a zone, and configure multiple Applications to use that zone as preferred.
- NetScaler can specify the desired zone by inserting the X-Citrix-ZonePreference header into the HTTP request to the StoreFront 3.7 server. This header can contain up to 3 zones. The first Zone in the header is the preferred Zone, and the next 2 are randomised such as EMEA,US,APAC or EMEA,APAC,US. StoreFront 3.7 will then forward the zone names to Delivery Controller 7.11, which will select a VDA in the desired zone. This functionality can be combined with GSLB as detailed in the 29 page document Global Server Load Balancing (GSLB) Powered Zone Preference. Note: only StoreFront 3.7 and newer will send the zone name to the Delivery Controller.
- Delivery Controller entries in StoreFront can be split into different entries for different zones. Create a separate Delivery Controller entry for each zone, and associate a zone name with each. StoreFront uses the X-Citrix-ZonePreference header to select the Delivery Controller entry so the XML request is sent to the Controllers in the same zone. HDX Optimal Gateways can also be associated to zoned Delivery Controller entries. See The difference between a farm and a zone when defining optimal gateway mappings for a store at Citrix Docs.
Machine Creation Services
CTP Aaron Parker Machine Creation Services Capacity Sizing on Hyper-V details storage sizing for the following:
- Delta Clones (aka linked clones) – Master Image, AppDisks, Personal vDisks, and other Hyper-V files
- Delta Clones with Storage Optimization (aka MCS Memory Caching)
- Full Clones
MCS – Full Clones
In XenApp/XenDesktop 7.9 and earlier, Persistent Linked Clones are created by selecting Yes, create a dedicated virtual machine in the Create Catalog wizard. Please, never do this in 7.9 or earlier, since you can’t move the machines once they’re created. A much better option is to use vCenter to do Full Clones of a template Virtual Machine. Then when creating a Catalog, select Another service or technology to add the VMs that have already been built.
In XenApp/XenDesktop 7.11 and newer, you can create MCS Full Clones. Full Clones are a full copy of a template virtual machine. The Full Clone can then be moved to a different datastore (including Storage vMotion), different cluster, or even different vCenter. You can’t do that with Linked Clones.
For Full Clones, simply prepare a Master Image like normal. There are no special requirements. There’s no need to create Customization Specifications in vCenter since Sysprep is not used. Instead, MCS uses its identity technology to change the identity of the full clone. That means every full clone has two disks: one for the actual VM, and one for identity (machine name, machine password, etc.).
During creation of a Full Clones Catalog, MCS still creates the master snapshot replica and ImagePrep machine, just like any other linked clone Catalog. The snapshot replica is then copied to create the Full Clones.
In 7.11 and newer, during the Create Catalog wizard, if you select Yes, create a dedicated virtual machine:
After you select the master image, there’s a new option for Use full copy for better data recovery and migration support. This is the option you want. The Use fast clone option is the older, not recommended, option.
Since these are Full Clones, once they are created, you can do things like Storage vMotion.
During Disaster Recovery, restore the VM (both disks). You might have to remove any Custom Attributes on the machine, especially the XdConfig attribute.
Inside the virtual machines, you might have to change the ListOfDDCs registry value to point to your DR Delivery Controllers. One method is to use Group Policy Preferences Registry.
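If you prefer scripting over Group Policy Preferences, here is a minimal sketch that points a restored VDA at DR Delivery Controllers. The Controller FQDNs are examples; run it inside the VDA:
Set-ItemProperty -Path "HKLM:\SOFTWARE\Citrix\VirtualDesktopAgent" `
    -Name "ListOfDDCs" -Value "DRDDC01.corp.local DRDDC02.corp.local"
Restart-Service BrokerAgent   # Citrix Desktop Service; restarting forces the VDA to re-register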
In the Create Catalog wizard, select Another Service or technology.
And use the Add VMs button to add the Full Clone machines. The remaining Catalog and Delivery Group steps are performed normally.
MCS – Machine Naming
Once a Catalog is created, you can run the following command to specify the starting count:
Set-AcctIdentityPool -IdentityPoolName "NAME" -StartCount VALUE
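For example, assuming a Catalog/identity pool named Win10-MCS, you could check the current value and then set the next machine number to 25 (run asnp citrix.* first):
asnp Citrix.*
# List identity pools with their naming scheme and current start count
Get-AcctIdentityPool | Select-Object IdentityPoolName, NamingScheme, StartCount
# Hypothetical pool name; new machines will be numbered starting at 25
Set-AcctIdentityPool -IdentityPoolName "Win10-MCS" -StartCount 25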
MCS – Memory Caching (XenApp/XenDesktop 7.9 and newer)
Memory caching in MCS is very similar to Memory caching in PvS. All writes are cached to memory instead of written to disk. With memory caching, some benchmarks show 95% reduction in IOPS. Here are some notes:
- You configure a size for the memory cache. If the memory cache is full, it overflows to a cache disk.
- Whatever memory is allocated to the MCS memory cache is no longer available for normal Windows operations, so make sure you increase the amount of memory assigned to each virtual machine.
- The overflow disk (temporary data disk) can be stored on shared storage, or on storage local to each hypervisor host. Since memory caching dramatically reduces IOPS, there shouldn’t be any problem placing these overflow disks on shared storage. If you put the overflow disks on hypervisor local disks then you won’t be able to vMotion the machines.
- The overflow disk is uninitialized and unformatted. Don’t touch it. Don’t format it.
- For a good overview of the feature, see Citrix Blog Post Introducing MCS Storage Optimization
- Andrew Morgan Everything you need to know about the new Citrix MCS IO acceleration details the performance counters that show memory cache and disk cache usage.
Memory caching requirements:
- XenApp/XenDesktop 7.9, VDA 7.9, and newer
- Random Catalogs only (no dedicated Catalogs)
Studio needs to be configured to place the temporary overflow disks on a datastore. You can configure this datastore when creating a new Hosting Resource, or you can edit an existing Hosting Resource.
To create a new Hosting Resource:
- In Studio, go to Configuration > Hosting, and click the link to Add Connection and Resources.
- In the Storage Management page, select shared storage.
- You can optionally select Optimize temporary data on local storage, but this might prevent vMotion. The temporary data disk is only accessed if the memory cache is full, so placing the temporary disks on shared storage shouldn’t be a concern.
- Select a shared datastore for each type of disk.
Or you can edit an existing Hosting Resource:
- In Studio, go to Configuration > Hosting, right-click an existing resource, and click Edit Storage.
- On the Temporary Storage page, select a shared datastore for the temporary overflow disks.
Memory caching is enabled when creating a new Catalog. You can’t enable it on existing Catalogs. AppDisks are not supported. A PowerShell sketch of the cache settings appears after the list below.
- For virtual desktops, in the Desktop Experience page, select random.
- Master Image VDA must be 7.9 or newer.
- In the Virtual Machines page, allocate some memory to the cache. For virtual desktops, 256 MB is typical. For RDSH, 4096 MB is typical. More memory = less IOPS.
- Whatever you enter for cache memory, also add it to the Total memory on each machine.
- Once the machines are created, add them to a Delivery Group like normal.
- The temporary overflow disk is not initialized or formatted. From Martin Rowan at discussions.citrix.com: “Don’t format it, the raw disk is what MCS caching uses.”
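Studio normally sets these cache values for you during Catalog creation. For reference, here is a rough PowerShell sketch of the same settings; the hosting unit, identity pool, domain, OU, and snapshot path are all placeholder assumptions:
asnp Citrix.*
# Identity pool for the new machines
New-AcctIdentityPool -IdentityPoolName "Win10-MCSIO" -NamingScheme "W10MCSIO##" `
    -NamingSchemeType Numeric -Domain "corp.local" -OU "OU=VDI,DC=corp,DC=local"
# 256 MB RAM cache with a 10 GB overflow disk; -CleanOnBoot because memory caching needs a Random Catalog
New-ProvScheme -ProvisioningSchemeName "Win10-MCSIO" -HostingUnitName "Cluster1Resources" `
    -IdentityPoolName "Win10-MCSIO" `
    -MasterImageVM "XDHyp:\HostingUnits\Cluster1Resources\Win10Master.vm\Base.snapshot" `
    -CleanOnBoot -UseWriteBackCache -WriteBackCacheMemorySize 256 -WriteBackCacheDiskSize 10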
MCS – Image Prep
From Citrix Discussions: When a Machine Creation Services catalog is created or updated, a snapshot of the master image is copied to each LUN. This Replica is then powered on and a few tasks are performed like KMS rearm and Personal vDisk enabling.
From Citrix Blog Post Machine Creation Service: Image Preparation Overview and Fault-Finding and CTX217456 Updating a Catalog Fails During Image Preparation: if you are creating a new Catalog, here are some PowerShell commands to control what Image Prep does (run asnp citrix.* first):
Set-ProvServiceConfigurationData -Name ImageManagementPrep_Excluded_Steps -Value EnableDHCP
Set-ProvServiceConfigurationData -Name ImageManagementPrep_Excluded_Steps -Value OsRearm
Set-ProvServiceConfigurationData -Name ImageManagementPrep_Excluded_Steps -Value OfficeRearm
Set-ProvServiceConfigurationData -Name ImageManagementPrep_Excluded_Steps -Value "OsRearm,OfficeRearm"
Set-ProvServiceConfigurationData -Name ImageManagementPrep_DoImagePreparation -Value $false
If you are troubleshooting an existing Catalog, here are some PowerShell commands to control what Image Prep does (run asnp citrix.* first):
Get-ProvScheme – Make a note of the “ProvisioningSchemeUid” associated with the catalog.
Set-ProvSchemeMetadata -ProvisioningSchemeUid xxxxxxx -Name ImageManagementPrep_Excluded_Steps -Value EnableDHCP
Set-ProvSchemeMetadata -ProvisioningSchemeUid xxxxxxx -Name ImageManagementPrep_Excluded_Steps -Value OsRearm
Set-ProvSchemeMetadata -ProvisioningSchemeUid xxxxxxx -Name ImageManagementPrep_Excluded_Steps -Value OfficeRearm
Set-ProvSchemeMetadata -ProvisioningSchemeUid xxxxxxx -Name ImageManagementPrep_DoImagePreparation -Value $false
If there are multiple excluded steps, separate them with commas (e.g. "OsRearm,OfficeRearm").
To remove the excluded steps, run
Remove-ProvServiceConfigurationData -Name ImageManagementPrep_Excluded_Steps or
Remove-ProvSchemeMetadata -ProvisioningSchemeUid xxxxxxx -Name ImageManagementPrep_Excluded_Steps.
A common issue with Image Prep is Rearm. Instead of the commands shown above, you can set the following registry key on the master VDA to disable rearm. See Unable to create new catalog at Citrix Discussions.
- HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SoftwareProtectionPlatform
- SkipRearm (DWORD) = 1
Mark DePalma at XA 7.6 Deployment Failure Error : Image Preparation Office Rearm Count Exceeded at Citrix Discussions had to increase the services timeout to fix the rearm issue:
- ServicesPipeTimeout (DWORD) = 180000 (under HKLM\SYSTEM\CurrentControlSet\Control)
From Mark Syms at Citrix Discussions: you can also add MultiSZ registry values whose entries point to an executable or script (PoSh or bat) that returns 0 on success.
Citrix CTX140734 Error: “Preparation of the Master VM Image failed” when Creating MCS Catalog in XenApp or XenDesktop: To troubleshoot image prep failures, do the following:
- In PowerShell on a Controller, for a new Catalog, run:
asnp citrix.*
Set-ProvServiceConfigurationData -Name ImageManagementPrep_NoAutoShutdown -Value $True
- For an existing Catalog, run the following:
asnp citrix.*
Get-ProvScheme
Set-ProvSchemeMetadata -ProvisioningSchemeUid xxxxxxx -Name ImageManagementPrep_NoAutoShutdown -Value $True
- On the master image, set the DWORD registry value HKLM\Software\Citrix\MachineIdentityServiceAgent\LOGGING to 1
- If you now attempt catalog creation, an extra VM will be started. Log into this VM (via the hypervisor console, since it has no network access) and see if anything is obviously wrong (e.g. it has bluescreened). If it hasn’t, there should be two log files called “image-prep.log” and “PvsVmAgentLog.txt” created in C:\ – scan these for any errors.
- When you’ve finished doing all this debugging, remember to run one of the following:
Remove-ProvServiceConfigurationData -Name ImageManagementPrep_NoAutoShutdown
Remove-ProvSchemeMetadata -ProvisioningSchemeUid xxxxxxx -Name ImageManagementPrep_NoAutoShutdown
MCS – Base Disk Deletion 💡
Every 6 hours, XenDesktop runs a task to delete unused base disks.
The Disk Reaper interval is configured using PowerShell. The default values are shown below:
Set-ProvServiceConfigurationData -Name DiskReaper_retryInterval -Value 0:6:0 | Out-Null
Set-ProvServiceConfigurationData -Name DiskReader_heartbeatInterval -Value 0:1:0 | Out-Null
If the unused base disks are not deleting, then see MCS – Deleting basedisk from VM Storage at Citrix Discussions for troubleshooting steps.
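To see whether any Disk Reaper settings have already been overridden on your site, a quick check (a sketch; no output means the defaults are in effect):
asnp Citrix.*
Get-ProvServiceConfigurationData | Where-Object { $_.Name -like "DiskReaper*" }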
Controller – Name Caching
George Spiers in Active Directory user computer name caching in XenDesktop explains how the Broker Service in XenDesktop Controller caches Active Directory user and computer names. The cache can be updated by running
Update-BrokerNameCache -Machines or
Update-BrokerNameCache -Users. Also see Update-BrokerNameCache at Citrix Docs.
Delivery Groups in 7.8 and newer
In XenApp/XenDesktop 7.8, when creating a Delivery Group, there are new options for publishing applications and publishing desktops.
On the Applications page of the Create Delivery Group wizard, From start menu reads icons from a machine in the Delivery Group and lets you select them. Manually lets you enter file path and other details manually. These are the same as in prior releases.
Existing is the new option. This lets you easily publish applications across multiple Delivery Groups.
You can also go to the Applications node, edit an existing application, change to the Groups tab, and publish the existing app across additional Delivery Groups.
Once multiple Delivery Groups are selected, you can prioritize them by clicking the Edit Priority button.
On the Desktops page of the Create Delivery Group wizard, you can now publish multiple desktops from a single Delivery Group. Each desktop can be named differently. And you can restrict access to the published desktop.
There doesn’t seem to be any way to publish a Desktop across multiple Delivery Groups.
It’s still not possible to publish apps and desktops across a subset of machines in a Delivery Group. But the new method of publishing apps across multiple Delivery Groups should make it easier to split your machines into multiple Delivery Groups.
Tags in XenApp/XenDesktop 7.12 and newer
In 7.12 and newer, you can assign tags to machines. Then you can publish apps and/or desktops to only those machines that have the tag. This means you can publish icons from a subset of the machines in the Delivery Group, just like you could in XenApp 6.5.
Tags also allow different machines to have different restart schedules.
- In Citrix Studio, find the machines you want to tag (e.g. double-click a Delivery Group). You can right-click one machine, or select multiple machines and right-click them. Then click Manage Tags.
- Click Create.
- Give the tag a name, and click OK. This tag could be assigned to multiple machines.
- After the tag is created, check the box next to the tag to assign it to these machines. Then click Save. (Tags can also be created and assigned with PowerShell; see the sketch after this list.)
- Edit a Delivery Group that has published desktops. On the Desktops page, edit one of the desktops.
- You can use the Restrict launches to machines with tag checkbox and drop-down to filter the machines the desktop launches from. This allows you to create a new published desktop for every machine in the Delivery Group. In that case, each machine would have a different tag. Create a separate published desktop for each machine, and select one of the tags.
- When you create an Application Group, on the Delivery Groups page, there’s an optional checkbox to Restrict launches to machines with tag. Any apps in this app group only launch on machines that have the selected tag assigned. This lets you have common apps across all machines in the Delivery Group, plus one-off apps that might be on only a small number of machines in the Delivery Group. In that case, you’ll have one app group with no tag restrictions for the common apps. And a different app group with tag restriction for the one-off apps.
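As referenced in the list above, tags can also be created and assigned from PowerShell. A minimal sketch; the tag and machine names are examples:
asnp Citrix.*
# Create a tag and assign it to one machine
$tag = New-BrokerTag -Name "OneOffApp"
Add-BrokerTag -InputObject $tag -Machine (Get-BrokerMachine -MachineName "CORP\VDA01")
# Verify which machines have the tag
Get-BrokerMachine -Tag "OneOffApp" | Select-Object MachineName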
RDSH Scheduled Restart
If you create a Scheduled Restart inside Citrix Studio, it applies to every machine in the Delivery Group. Alternatively, you can use the 7.12 tags feature to give different machines different restart schedules. A PowerShell sketch follows the steps below.
- Once an RDSH Delivery Group is created, you can right-click it and click Edit Delivery Group.
- The Restart Schedule page lets you schedule a restart of the session hosts.
- XenApp 7.7 and newer lets you send multiple notifications.
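Restart schedules can also be created from PowerShell. A rough sketch using New-BrokerRebootScheduleV2 (all names, times, and durations are examples); in 7.12 and newer, the cmdlet also accepts a -RestrictToTag parameter so that differently tagged machines can get different schedules:
asnp Citrix.*
# Weekly Sunday 3 AM restart of an RDSH Delivery Group, spread over 2 hours, with a 15-minute warning
New-BrokerRebootScheduleV2 -Name "RDSH Weekly Restart" -DesktopGroupName "RDSH Desktops" `
    -Frequency Weekly -Day Sunday -StartTime "03:00" -RebootDuration 120 -Enabled $true `
    -WarningDuration 15 -WarningTitle "Scheduled Maintenance" `
    -WarningMessage "This server will restart soon. Please save your work and log off."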
Or use a reboot script:
- Shaun Ritchie – XenDesktop 7 Rolling Reboot Script
- Dane Young – Citrix Chained Reboot Scripts, now supporting XenApp 5, 6, 6.5 and XenDesktop 7.0, 7.1, 7.5, and 7.6!
- Citrix Blog Post – XenApp 7.x Reboot Schedules
- Citrix Blog Post – XenApp & XenDesktop 7.x Server OS VDA Staggered Reboot Framework v2
- Citrix Blog Post – XenApp and XenDesktop 7.x Server OS VDA Staggered Reboot
Allow one user to have Multiple Sessions
From Configure session roaming at Citrix Docs: By default, a user can only have one session. In XenApp 7.6 (experimental support) and XenApp 7.7 and newer (full support), you can change this behavior with the SessionReconnection setting via PowerShell. On any Server OS delivery group, run:
Set-BrokerEntitlementPolicyRule <Delivery Group Name> -SessionReconnection <Value>
Where <Value> can be:
- Always – This is the default and matches the behavior of a VDI session. Sessions always roam, regardless of client device.
- DisconnectedOnly – This reverts back to the XenApp 6.x and earlier behavior. Sessions may be roamed between client devices by first disconnecting them (or using Workspace Control) to explicitly roam them. However, active sessions are not stolen from another client device, and a new session is launched instead.
- SameEndpointOnly – This matches the behavior of the “ReconnectSame” registry setting in XenApp 6.x. Each user will get a unique session for each client device they use, and roaming between clients is completely disabled.
This will change the roaming behavior for desktop sessions. For app sessions, use:
Set-BrokerAppEntitlementPolicyRule <Delivery Group Name> -SessionReconnection <Value>
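To find the actual rule names and check the current behavior before changing anything, a quick sketch (the rule names in the Set commands are placeholders; use the Get output to find yours):
asnp Citrix.*
Get-BrokerEntitlementPolicyRule | Select-Object Name, DesktopGroupUid, SessionReconnection
Get-BrokerAppEntitlementPolicyRule | Select-Object Name, DesktopGroupUid, SessionReconnection
# Example: revert a Server OS Delivery Group to XenApp 6.x-style behavior
Set-BrokerEntitlementPolicyRule "RDSH Desktops_1" -SessionReconnection DisconnectedOnly
Set-BrokerAppEntitlementPolicyRule "RDSH Desktops" -SessionReconnection DisconnectedOnly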
Static Catalog – Export/Import Machine Assignments
It is sometimes useful (e.g. DR) to export machine assignments from one Catalog/Delivery Group and import to another.
From Adil Dean at Exporting Dededicated VDI machine names and user names from catalog in Xendesktop 7.x at discussions.citrix.com:
Hopefully this is what you are after, it turns out you don’t actually need PowerShell as the functionality is built into the tool.
- In Studio, click Delivery Groups on the lefthand menu
- Right-click the Delivery Group, and click Edit Delivery Group
- Select Machine allocation tab on the left
- Click Export list
- Select a file name > Click Save
- Create the new machine catalog
- Right click the delivery group > Click Edit
- Select Machine allocation tab on the left
- Click Import list..
- Select the list you exported in step 4
- Click Apply
Your clients will now have users re-assigned to machines.
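If you do want to script the export (e.g. to capture assignments regularly for DR), here is a minimal sketch using the Broker SDK; the Delivery Group name and output path are examples:
asnp Citrix.*
# Export machine-to-user assignments from a Delivery Group to CSV
Get-BrokerMachine -DesktopGroupName "Win10 Persistent" |
    Select-Object MachineName, @{n='AssignedUsers';e={$_.AssociatedUserNames -join ';'}} |
    Export-Csv C:\Temp\MachineAssignments.csv -NoTypeInformation
On the import side, each assignment read from the CSV can be re-created with Add-BrokerUser -Name <user> -Machine <machine> once the machines exist in the new Catalog and Delivery Group.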
Monitor the Number of Free Desktops
Sacha Thomet wrote a script at victim of a good reputation – Low free pooled XenDesktops that polls Director to determine the number of free desktops in a Delivery Group. If lower than the threshold, an email is sent.
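As a rough alternative that queries the Broker SDK directly instead of Director, the sketch below emails an alert when available desktops drop below a threshold. The Delivery Group name, threshold, and mail settings are all placeholders:
asnp Citrix.*
$dg = Get-BrokerDesktopGroup -Name "Win10 Pooled"
if ($dg.DesktopsAvailable -lt 10) {
    # Send an alert when fewer than 10 desktops are free
    Send-MailMessage -To "citrixadmins@example.com" -From "citrix-alerts@example.com" `
        -SmtpServer "smtp.example.com" `
        -Subject "Low free desktops: $($dg.Name) has only $($dg.DesktopsAvailable) available"
}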