Catalogs, Delivery Groups, Zones

Last Modified: May 8, 2017 @ 9:27 pm

💡 = Recently Updated

Persistent vs Non-persistent

VDA design – One of the tasks of a Citrix Architect is VDA design. There are many considerations, including the following:

  • Machine type – single user (virtual desktop), or multi-user (Remote Desktop Session Host). RDSH is more hardware efficient.
  • Machine operating system – Windows 7, Windows 10, Windows Server 2008 R2, Windows Server 2012 R2, Windows Server 2016
  • Machine persistence – persistent, non-persistent
  • Number of new machines – concurrent vs named-users
  • Machine provisioning – full clones, Machine Creation Services (MCS), Provisioning Services (PvS)
  • Hardware for the new machines – hypervisor clusters, storage
  • How the machines are updated – SCCM, MCS, PvS, etc.
  • Application integration – locally installed, App-V, Layering, XenApp published, leave on local endpoint machine, cloud apps, etc.
  • User Profiles – roaming, mandatory, home directories
  • Group Policies – session lockdown, automation
  • Disaster Recovery – replication. VDAs running in a warm site. DR for profiles and home directories too.

Desktop Management in a Citrix environment – Some environments try to use Citrix to improve desktop management. Here are some desktop management aspects of Citrix that aren’t possible with distributed physical desktops:

  • Datacenter network speeds – The VDAs have high speed connectivity to the desktop management tools, which eliminates WAN bandwidth as a desktop management consideration. For example, you can use Microsoft App-V to stream apps to VDAs.
  • Non-persistence – Non-persistent VDAs revert at every reboot. To update non-persistent VDAs, simply update your master image.
  • Layering – The VDA VMs can be composed of multiple layers that are combined during machine boot, or when the user logs in. Citrix AppDisk and Unidesk are examples of this technology. A single layer can be shared by multiple VDAs. The layers are updated once, and all machines using the layer receive the updated layer at next boot/login.

Non-persistent VDAs – Probably the easiest of these desktop-management technologies to implement is non-persistence. However, there are several drawbacks to non-persistence:

  • Master Images must be designed – Which apps go on which master image? Do you install the same app on multiple master images?
    • How do you know which apps a user needs? – Most Citrix admins, and even desktop teams, don’t know every app that a user needs. You can use tools like Liquidware Labs or Lakeside Software to discover app usage, but it’s a very complicated process to find commonality across multiple users.
    • How are One-off apps handled? – If you have an app used by only a small number of users, do you add it to one of your master images? Do you create a new master image? Do you publish it from XenApp (double hop)? Do you stream it using App-V? Layering is another option.
    • Application Licensing – for licensed apps, do you install the licensed app into the master image and try to hide it from non-licensed users? Or do you create a new master image for the licensed users?
    • Patching multiple images – when a new OS patch needs to be deployed, you have to update every master image running that OS version. Thus Citrix admins usually try to limit the number of master images, which makes image design more complicated.
    • How do you manage an app that is installed on multiple master images? – Layering might help with this.
  • Who manages the master images? – Citrix admins? Desktop team? It’s unlikely that traditional desktop management tools (e.g. SCCM) will ever be completely removed from an enterprise environment, which means that master image management is an additional task that was not performed before. Does the Citrix admin team have the staff to take on this responsibility? Would the desktop management team be willing to perform this new process?
    • Politically feasible? – Large enterprises usually have mature desktop management practices. Would this new process interfere with existing desktop management requirements?
    • Responsibility – if the Citrix admins are not maintaining the master images, and if a Catalog update causes user problems, who is responsible?
    • RDSH Apps are complicated – who is responsible for integrating apps into Remote Desktop Session Host (XenApp)? Does the desktop team have the skills to perform the additional RDSH testing?
  • Change Control – Longer Deployment Times – Any change to a master image would affect every machine/user using that image, thus dev/QA testing is recommended for every change, which slows down app update deployment. And once a change is made to the master, it doesn’t take effect until the user’s VDA is rebooted.
  • Roaming Profiles – some apps (e.g. Office) save user settings in user profiles. Since the machines are non-persistent, the profiles would be lost on every reboot unless roaming profiles are implemented. This adds a dependency on roaming profile configuration, and the roaming profile file share.
    • How is the Outlook OST file handled? – With Cloud Hosted Exchange, for best performance, Outlook needs to run in Cached Exchange mode. How is the large OST file roamed? One option is to use group policy to minimize the size of the OST file. Another is to purchase a 3rd party OST handling product like FSLogix.
  • IT Applications (e.g. antivirus) on non-persistent machines – Many IT apps (antivirus, asset mgmt, security, etc.) have special instructions to work on non-persistent machines. Search the vendor’s knowledgebase for VDI, non-persistent, Citrix, etc. Antivirus in particular has a huge impact on VDA performance. And the special instructions for non-persistent VDAs are in addition to normal antivirus configuration.
  • Connection Leasing does not support non-persistent virtual desktops – if the XenDesktop SQL database is down, Connection Leasing won’t help you. It’s not possible to connect to non-persistent virtual desktops until the XenDesktop SQL database connection is recovered. This affects multi-datacenter designs.

Application Integration Technologies – Additional technologies can be used to overcome some of the drawbacks of non-persistent machines:

  • Microsoft App-V – this technology can dynamically stream apps to a non-persistent image. Different users get different apps. And the apps run in isolated bubbles. However:
    • App-V is an additional infrastructure that must be built and maintained.
    • App-V requires additional skills for the people packaging the apps, and the people troubleshooting the apps.
    • Since the apps are isolated, app interaction is configured manually.
    • Because of application isolation, not every app can run in App-V. Maybe 60-80% of apps might work. How do you handle apps that don’t work?
  • Layering – each application is a different layer (VHD file). The layering tool combines multiple layers into a single unified image. Layers are updated in one place, and all images using the layer are updated, which solves the issue of a single app in multiple images. Layering does not use application isolation, so almost 100% of apps should work with layering. Layers can be mounted dynamically based on who’s logging in. There’s also a persistent layer that lets users install apps, or admins can install one-off apps. Unidesk is probably the most feature rich of the layering products. However:
    • Unidesk is not free. Citrix AppDisk is free, but its features are very limited.
    • Unidesk is a separate infrastructure that must be built and maintained. Citrix AppDisk is built into XenDesktop.
    • Somebody has to create the layers. This is extremely easy in Unidesk since you simply install the applications normally (no new skills to learn). However, it’s an additional task on top of normal desktop management packaging duties.

Persistent virtual desktops – Another method of building VDAs is by creating full clone virtual desktops that are persistent. Each virtual desktop is managed separately using traditional desktop management tools. If your storage is an All Flash Array with inline deduplication and compression, then full clone persistent virtual desktops probably take no more disk space than non-persistent linked clones. (Note: persistent RDSH VDAs are not included in this section since RDSH user sessions are essentially non-persistent) Here are some advantages of full clone persistent virtual desktops as opposed to non-persistent VDAs:

  • Skills and Processes – No new skills to learn. No new desktop management processes. Use existing desktop management tools (e.g. SCCM). The existing desktop management team can manage the persistent virtual desktops, which reduces the workload of the Citrix admins.
  • One-off applications – If a user needs a one-off application, simply install it on the user’s persistent desktop. The application can be user-installed, SCCM self-service installed, or administrator installed.
  • User Profile – Outlook’s OST file is no longer a concern since the user’s profile persists on the user’s virtual desktop. It’s not necessary to implement roaming profiles when using persistent virtual desktops. If you want a process to move a user profile from one persistent virtual desktop to another, how do you do it on physical desktops today?
  • API integration – a self-service portal can use VMware PowerCLI and Citrix’s PowerShell SDK to automatically create a new persistent virtual desktop for a user, as sketched after this list. Chargeback can also be implemented.
  • Offline XenDesktop SQL Database – if the Citrix XenDesktop SQL database is not reachable, then Citrix Connection Leasing can still broker sessions to persistent virtual desktops that have already been assigned to users. This is not possible with non-persistent virtual desktops.
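
For the API integration item above, here is only a minimal sketch of how such automation could look. It assumes an existing Manual (“Another service or technology”) Catalog and Delivery Group, a template that gets domain-joined during cloning, and all machine, user, template, and infrastructure names below are hypothetical:

asnp citrix.*                                   # Citrix PowerShell SDK (run on/against a Controller)
Import-Module VMware.VimAutomation.Core         # VMware PowerCLI
Connect-VIServer -Server vcenter.corp.local

# 1. Full clone from a template (domain join assumed to be handled by OS customization)
New-VM -Name 'WIN10-JSMITH' -Template 'Win10-Gold' -VMHost 'esx01.corp.local' -Datastore 'VDI-DS01' | Start-VM

# 2. Add the new VM to an existing Manual Catalog
$catalog = Get-BrokerCatalog -Name 'Persistent-Manual'
New-BrokerMachine -CatalogUid $catalog.Uid -MachineName 'CORP\WIN10-JSMITH'

# 3. Add it to a Delivery Group and assign the user to the machine
Add-BrokerMachine -MachineName 'CORP\WIN10-JSMITH' -DesktopGroup 'Persistent Desktops'
Add-BrokerUser -Name 'CORP\jsmith' -Machine 'CORP\WIN10-JSMITH'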

Concurrent vs Named User – one advantage of non-persistent virtual desktops is that you only need enough virtual desktops to handle the concurrent user load. With persistent virtual desktops, you need a separate machine for each named user, whether that user is using it or not.

Disaster Recovery – for non-persistent VDAs, one option is to replicate the master images to the DR site, and then create a Catalog of machines either before the disaster, or after. If before the disaster, the VDAs will already be running and ready for connections; however, the master images are maintained separately in each datacenter.

Persistent virtual desktops have several disaster recovery options:

  • Immediately after the disaster, instruct the persistent users to connect to a pool of non-persistent machines.
  • In the DR site, create new persistent virtual desktops for the users. Users would then need to use SCCM or similar to reinstall their apps. Scripts can be used to backup the user’s profile and restore it on the DR desktop. This method is probably closest to how recovery is performed on physical desktops.
  • The persistent virtual desktops can be replicated and recovered in the DR site. When the machines are added to Citrix Studio in DR, each machine is assigned to specific users. This process is usually scripted.

Zones

Caveats – Zones let you stretch a single XenApp/XenDesktop site/farm across multiple datacenters. However, note these caveats:

  • Studio – If all Delivery Controllers in the Primary Zone are down, then you can’t manage the farm/site. This is true even if SQL is up, and Delivery Controllers are available in Satellite Zones. It’s possible to designate an existing zone as the Primary Zone by running Set-ConfigSite -PrimaryZone <Zone>, where <Zone> can be name, UID, or a Zone object.
  • Version/Upgrade – All Delivery Controllers in the site/farm must be the same version. During an upgrade, you must upgrade every Delivery Controller in every zone.
  • Offline database – In XenApp/XenDesktop 7.11 and older, there is no offline database option similar to XenApp 6.5’s Local Host Cache. If the database is down, then Connection Leasing is used. In XenApp/XenDesktop 7.12 and newer, there’s Local Host Cache. However, the LHC in 7.12 and newer has limitations: no non-persistent desktops, maximum of 5,000 VDAs, has issues if Controller is rebooted, etc. Review the Docs article for details. 💡
  • Complexity – Zones do not reduce the number of servers that need to be built. And they increase complexity when configuring items in Citrix Studio.
  • Zone Preference – to choose a VDA in a particular zone, your load balancer needs to include a special HTTP header (X-Citrix-ZonePreference) that indicates the zone name. This requires StoreFront 3.7, and XenApp/XenDesktop 7.11.

The alternative to zones is to build a separate site/farm in each datacenter, and use StoreFront to aggregate the published icons. Here are benefits of multiple sites/farms as compared to zones:

  • Isolation – Each datacenter is isolated. If one datacenter is down, it does not affect any other datacenter.
  • Versioning – Isolation lets you upgrade one datacenter before upgrading other datacenters. For example, you can test upgrades in a DR site before upgrading production.
  • SQL High Availability – since each datacenter is a separate farm/site with separate databases, there is no need to stretch SQL across datacenters.
  • Home Sites – StoreFront can prioritize different farms/sites for different user groups. No special HTTP header required.

Here are some general design suggestions for XenApp/XenDesktop in multiple datacenters:

  • For multiple central datacenters, build a separate XenApp/XenDesktop farm in each datacenter. Use StoreFront to aggregate the icons from all farms. Use NetScaler GSLB to distribute users to StoreFront. This provides maximum flexibility with minimal dependencies across datacenters.
  • For branch office datacenters, zones with Local Host Cache (7.12 and newer) is an option. Or each branch office can be a separate farm.

Create Zones – This section details how to create zones and put resources in those zones. In 7.9 and older, there’s no way to select a zone when connecting. In 7.11 and newer, NetScaler and StoreFront can now specify a zone, and VDAs from that zone will be chosen. See Zone Preference for details.


There is no SQL in Satellite zones. Instead, Controllers in Satellite zones connect to SQL in Primary zone. Here are tested requirements for remote SQL connectivity. You can also set HKLM\Software\Citrix\DesktopServer\ThrottledRequestAddressMaxConcurrentTransactions to throttle launches at the Satellite zone.
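
As a sketch only (not from the Citrix article), the throttle value could be set like this on each Satellite-zone Controller; the value of 20 is just an example, and the service restart is my assumption for picking up the change:

New-ItemProperty -Path 'HKLM:\SOFTWARE\Citrix\DesktopServer' -Name 'ThrottledRequestAddressMaxConcurrentTransactions' -PropertyType DWord -Value 20 -Force
Restart-Service CitrixBrokerService   # assumption: restart the Broker Service so the new value is read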

From Mayunk Jain: “I guess we can summarize the guidance from this post as follows: the best practice guidance has been to recommend a datacenter for each continental area. A typical intra-continental latency is about 45ms. As these numbers show, in those conditions the system can handle 10,000 session launch requests in just under 20 minutes, at a concurrency rate of 36 requests.”

If Satellite zone loses connectivity to SQL, then the Connection Leasing feature kicks in. See docs.citrix.com Connection leasing and CTX205169 FAQ: Connection Leasing in XenApp/XenDesktop 7.6 for information on Connection Leasing limitations (e.g. no pooled virtual desktops, 2 week-old leases, etc.).

The following items can be moved into a satellite zone:

  • Controllers – always leave two Controllers in the Primary zone. Add one or two Controllers to the Satellite zone.
  • Hosting Connections – e.g. for vCenter in the satellite zone.
  • Catalogs – any VDAs in satellite catalogs automatically register with Controllers in the same zone.
  • NetScaler Gateway – requires StoreFront that understands zones (not available yet). StoreFront should be in satellite zone.

Do the following to create a zone and move items into the zone:

  1. In Citrix Studio 7.7 or newer, expand the Configuration node, and click Zones.
  2. If you upgraded from an older XenApp/XenDesktop and don’t see zones, then run the following commands:
    cd 'C:\Program Files\Citrix\XenDesktopPoshSdk\Module\Citrix.XenDesktop.Admin.V1\Citrix.XenDesktop.Admin\StudioRoleConfig'
    
    Import-AdminRoleConfiguration -Path .\RoleConfigSigned.xml
    
  3. Right-click Zones, and click Create Zone.
  4. Give the zone a name. Note: Citrix supports a maximum of 10 zones.
  5. You can select objects for moving into the zone now, or just click Save.
  6. Select multiple objects, right-click them, and click Move Item.
  7. Select the new Satellite zone and click Yes.
  8. To assign users to the new zone, create a Delivery Group that contains machines from a Catalog that’s in the new zone. Zone Preference requires StoreFront 3.7 and XenApp/XenDesktop 7.11.
  9. If your farm has multiple zones, when creating a hosting connection, you’ll be prompted to select a zone.
  10. If your farm has multiple zones, when creating a Manual catalog, you’ll be prompted to select a zone.
  11. MCS catalogs are put in a zone based on the zone assigned to the Hosting Connection.
  12. The Provisioning Services XenDesktop Setup Wizard ignores zones so you’ll have to move the PvS Machine Catalog manually.
  13. New Controllers are always added to the Primary zone. Move them manually.
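
The same zone creation and Catalog move can also be scripted. A minimal sketch using example names (New-ConfigZone and the -ZoneUid parameter exist in the 7.7 and newer SDK):

asnp citrix.*

# Create a Satellite zone
$zone = New-ConfigZone -Name 'Branch-Dallas' -Description 'Dallas branch office'

# Move an existing Machine Catalog into the new zone
Get-BrokerCatalog -Name 'Dallas RDSH' | Set-BrokerCatalog -ZoneUid $zone.Uid

# Review zones
Get-ConfigZone | Select-Object Name, Uid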

Zone Preference

XenApp/XenDesktop 7.11 adds Zone Preference, which means NetScaler (11.0 build 65 and newer) and StoreFront (3.7 and newer) can request XenDesktop Controller to provide a VDA in a specific zone.

Citrix Blog Post Zone Preference Internals details three methods of zone preference: Application Zone, User Zone, and NetScaler Zone. 💡


To configure zone preference:

  1. Create separate Catalogs in separate zones, and add the machines to a single Delivery Group.
  2. You can add users to one zone by right-clicking the zone, and clicking Add Users to Zone. If there are no available VDAs in that preferred zone, then VDAs are chosen from any other zone.
  3. Note: a user can only belong to one home zone.
  4. You can delete users from a zone, or move users to a different zone.
  5. If you edit the Delivery Group, on the Users page, you can specify that Sessions must launch in a user’s home zone. If there are no VDAs in the user’s home zone, then the launch fails.
  6. For published apps, on the Zone page, you can configure it to ignore the user’s home zone.
  7. You can also configure a published app with a preferred zone, and force it to only use VDAs in that zone. If you don’t check the box, and if no VDAs are available in the preferred zone, then VDAs can be selected from any other zone.
  8. Or you can Add Applications to Zone, which allows you to add multiple Applications at once.

  9. NetScaler can specify the desired zone by inserting the X-Citrix-ZonePreference header into the HTTP request to the StoreFront 3.7 server. This header can contain up to 3 zones. The first Zone in the header is the preferred Zone, and the next 2 are randomised such as EMEA,US,APAC or EMEA,APAC,US. StoreFront 3.7 will then forward the zone names to Delivery Controller 7.11, which will select a VDA in the desired zone. This functionality can be combined with GSLB as detailed in the 29 page document Global Server Load Balancing (GSLB) Powered Zone Preference. Note: only StoreFront 3.7 and newer will send the zone name to the Delivery Controller.
  10. Delivery Controller entries in StoreFront can be split into different entries for different zones. Create a separate Delivery Controller entry for each zone, and associate a zone name with each. StoreFront uses the X-Citrix-ZonePreference header to select the Delivery Controller entry so the XML request is sent to the Controllers in the same zone. HDX Optimal Gateways can also be associated to zoned Delivery Controller entries. See The difference between a farm and a zone when defining optimal gateway mappings for a store at Citrix Docs.
  11. Citrix Blog Post Zone Preference Internals indicates that there’s a preference order to zone selection. The preference order can be changed. 💡
    1. Application’s Zone
    2. User’s Home Zone
    3. The Zone specified by NetScaler in the X-Citrix-ZonePreference HTTP header sent to StoreFront.

Machine Creation Services

CTP Aaron Parker’s Machine Creation Services Capacity Sizing on Hyper-V details storage sizing for MCS.

MCS – Full Clones

In XenApp/XenDesktop 7.9 and earlier, Persistent Linked Clones are created by selecting Yes, create a dedicated virtual machine in the Create Catalog wizard. Please, never do this in 7.9 or earlier, since you can’t move the machines once they’re created. A much better option is to use vCenter to do Full Clones of a template Virtual Machine. Then when creating a Catalog, select Another service or technology to add the VMs that have already been built.

In XenApp/XenDesktop 7.11 and newer, you can create MCS Full Clones. Full Clones are a full copy of a template virtual machine. The Full Clone can then be moved to a different datastore (including Storage vMotion), different cluster, or even different vCenter. You can’t do that with Linked Clones.

For Full Clones, simply prepare a Master Image like normal. There are no special requirements. There’s no need to create Customization Specifications in vCenter since Sysprep is not used. Instead, MCS uses its identity technology to change the identity of the full clone. That means every full clone has two disks: one for the actual VM, and one for identity (machine name, machine password, etc).

During creation of a Full Clones Catalog, MCS still creates the master snapshot replica and ImagePrep machine, just like any other linked clone Catalog. The snapshot replica is then copied to create the Full Clones.

In 7.11 and newer, during the Create Catalog wizard, if you select Yes, create a dedicated virtual machine:

After you select the master image, there’s a new option for Use full copy for better data recovery and migration support. This is the option you want. The Use fast clone option is the older, not recommended, option.

Since these are Full Clones, once they are created, you can do things like Storage vMotion.

During Disaster Recovery, restore the VM (both disks). You might have to remove any Custom Attributes on the machine, especially the XdConfig attribute.

Inside the virtual machines, you might have to change the ListOfDDCs registry value to point to your DR Delivery Controllers. One method is to use Group Policy Preferences Registry.
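
A minimal sketch of that registry change, run inside each recovered VM (for example from a startup script or GPP Registry item); the Controller FQDNs are placeholders:

# ListOfDDCs is a space-separated list of Controller FQDNs under the VDA's registry
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Citrix\VirtualDesktopAgent' -Name 'ListOfDDCs' -Value 'DR-DDC01.corp.local DR-DDC02.corp.local'
Restart-Service BrokerAgent   # Citrix Desktop Service re-registers using the new list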

In the Create Catalog wizard, select Another Service or technology.

And use the Add VMs button to add the Full Clone machines. The remaining Catalog and Delivery Group steps are performed normally.

MCS – Machine Naming

Once a Catalog is created, you can run the following commands to specify the starting number used when naming new machine accounts:

Get-AcctIdentityPool
Set-AcctIdentityPool -IdentityPoolName "NAME" -StartCount VALUE
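
For example (the pool name below is hypothetical), list the identity pools to find the right one, then set the next number to be used:

# Identity pool names usually match the Catalog name
Get-AcctIdentityPool | Select-Object IdentityPoolName, NamingScheme, StartCount

# Example: the next machine account created in this pool will use number 25
Set-AcctIdentityPool -IdentityPoolName 'Win10-Pooled' -StartCount 25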

MCS – Memory Caching (XenApp/XenDesktop 7.9 and newer)

Memory caching in MCS is very similar to Memory caching in PvS. All writes are cached to memory instead of written to disk. With memory caching, some benchmarks show 95% reduction in IOPS. Here are some notes:

  • You configure a size for the memory cache. If the memory cache is full, it overflows to a cache disk.
  • Whatever memory is allocated to the MCS memory cache is no longer available for normal Windows operations, so make sure you increase the amount of memory assigned to each virtual machine.
  • The overflow disk (temporary data disk) can be stored on shared storage, or on storage local to each hypervisor host. Since memory caching dramatically reduces IOPS, there shouldn’t be any problem placing these overflow disks on shared storage. If you put the overflow disks on hypervisor local disks then you won’t be able to vMotion the machines.
  • The overflow disk is uninitialized and unformatted. Don’t touch it. Don’t format it.
  • For a good overview of the feature, see Citrix Blog Post Introducing MCS Storage Optimization
  • Andrew Morgan Everything you need to know about the new Citrix MCS IO acceleration details the performance counters that show memory cache and disk cache usage.

 

Memory caching requirements:

  • XenApp/XenDesktop 7.9, VDA 7.9, and newer
  • Random Catalogs only (no dedicated Catalogs)

 

Studio needs to be configured to place the temporary overflow disks on a datastore. You can configure this datastore when creating a new Hosting Resource, or you can edit an existing Hosting Resource.

To create a new Hosting Resource:

  1. In Studio, go to Configuration > Hosting, and click the link to Add Connection and Resources.
  2. In the Storage Management page, select shared storage.
  3. You can optionally select Optimize temporary data on local storage, but this might prevent vMotion. The temporary data disk is only accessed if the memory cache is full, so placing the temporary disks on shared storage shouldn’t be a concern.
  4. Select a shared datastore for each type of disk.

Or you can edit an existing Hosting Resource:

  1. In Studio, go to Configuration > Hosting, right-click an existing resource, and click Edit Storage.
  2. On the Temporary Storage page, select a shared datastore for the temporary overflow disks.

Memory caching is enabled when creating a new Catalog. You can’t enable it on existing Catalogs. AppDisks are also not supported with memory caching.

  1. For virtual desktops, in the Desktop Experience page, select random.
  2. Master Image VDA must be 7.9 or newer.
  3. In the Virtual Machines page, allocate some memory to the cache. For virtual desktops, 256 MB is typical. For RDSH, 4096 MB is typical. More memory = less IOPS.
  4. Whatever you enter for cache memory, also add it to the Total memory on each machine.
  5. Once the machines are created, add them to a Delivery Group like normal.
  6. The temporary overflow disk is not initialized or formatted. From Martin Rowan at discussions.citrix.com: “Don’t format it, the raw disk is what MCS caching uses.”
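
The same Catalog settings can be driven from PowerShell. The sketch below is only illustrative (names and paths are examples; run asnp citrix.* first). New-ProvScheme in 7.9 and newer accepts the write-back cache parameters, with the memory size in MB and the disk size in GB:

New-ProvScheme -ProvisioningSchemeName 'Win10-Random' `
    -HostingUnitName 'Cluster1-Resources' `
    -IdentityPoolName 'Win10-Random' `
    -MasterImageVM 'XDHyp:\HostingUnits\Cluster1-Resources\Win10Master.vm\PreCacheSnap.snapshot' `
    -CleanOnBoot `
    -UseWriteBackCache -WriteBackCacheMemorySize 256 -WriteBackCacheDiskSize 10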

MCS – Image Prep

From Citrix Discussions: When a Machine Creation Services catalog is created or updated, a snapshot of the master image is copied to each LUN. This Replica is then powered on and a few tasks are performed like KMS rearm and Personal vDisk enabling.

 

From Citrix Blog Post Machine Creation Service: Image Preparation Overview and Fault-Finding and CTX217456 Updating a Catalog Fails During Image Preparation: if you are creating a new Catalog, here are some PowerShell commands to control what Image Prep does: (run asnp citrix.* first)

  • Set-ProvServiceConfigurationData -Name ImageManagementPrep_Excluded_Steps -Value EnableDHCP
  • Set-ProvServiceConfigurationData -Name ImageManagementPrep_Excluded_Steps -Value OsRearm
  • Set-ProvServiceConfigurationData -Name ImageManagementPrep_Excluded_Steps -Value OfficeRearm
  • Set-ProvServiceConfigurationData -Name ImageManagementPrep_Excluded_Steps -Value "OsRearm,OfficeRearm"
  • Set-ProvServiceConfigurationData -Name ImageManagementPrep_DoImagePreparation -Value $false

If you are troubleshooting an existing Catalog, here are some PowerShell commands to control what Image Prep does: (run asnp citrix.* first)

  • Get-ProvScheme – Make a note of the “ProvisioningSchemeUid” associated with the catalog.
  • Set-ProvSchemeMetadata -ProvisioningSchemeUid xxxxxxx -Name ImageManagementPrep_Excluded_Steps -Value EnableDHCP
  • Set-ProvSchemeMetadata -ProvisioningSchemeUid xxxxxxx -Name ImageManagementPrep_Excluded_Steps -Value OsRearm
  • Set-ProvSchemeMetadata -ProvisioningSchemeUid xxxxxxx -Name ImageManagementPrep_Excluded_Steps -Value OfficeRearm
  • Set-ProvSchemeMetadata -ProvisioningSchemeUid xxxxxxx -Name ImageManagementPrep_DoImagePreparation -Value $false

If you have multiple excluded steps, separate them with commas: -Value "OsRearm,OfficeRearm"

To remove the excluded steps, run Remove-ProvServiceConfigurationData -Name ImageManagementPrep_Excluded_Steps or Remove-ProvSchemeMetadata -ProvisioningSchemeUid xxxxxxx -Name ImageManagementPrep_Excluded_Steps.
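
To verify what is currently configured, the matching Get cmdlets can be used. This is only a sketch; viewing per-Catalog settings through the MetadataMap property is my assumption:

# Site-level settings that affect new Catalogs
Get-ProvServiceConfigurationData | Where-Object { $_.Name -like 'ImageManagementPrep*' }

# Per-Catalog metadata on existing provisioning schemes
Get-ProvScheme | Select-Object ProvisioningSchemeName, MetadataMap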

 

A common issue with Image Prep is Rearm. Instead of the commands shown above, you can set the following registry key on the master VDA to disable rearm. See Unable to create new catalog at Citrix Discussions.

  • HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SoftwareProtectionPlatform
    • SkipRearm (DWORD) = 1

Mark DePalma at XA 7.6 Deployment Failure Error : Image Preparation Office Rearm Count Exceeded at Citrix Discussions had to increase the services timeout to fix the rearm issue:

  • HKLM\SYSTEM\CurrentControlSet\Control
    • ServicesPipeTimeout (DWORD) = 180000
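
Both registry workarounds can also be set with PowerShell on the master image. A minimal sketch:

# Skip the Windows licensing rearm during image preparation
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SoftwareProtectionPlatform' -Name 'SkipRearm' -PropertyType DWord -Value 1 -Force

# Give slow services more time to start (value is in milliseconds)
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control' -Name 'ServicesPipeTimeout' -PropertyType DWord -Value 180000 -Force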

 

From Mark Syms at Citrix Discussions: You can add one (or both) of the following MultiSZ registry values:

  • HKLM\Software\Citrix\MachineIdentityServiceAgent\ImagePreparation\Before
  • HKLM\Software\Citrix\MachineIdentityServiceAgent\ImagePreparation\After

Each value is expected to be the path to an executable or script (PoSh or bat) that returns 0 on success.
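
For example, a sketch that runs a hypothetical script on the preparation VM before image preparation (the script path is a placeholder; the key may need to be created first):

New-Item -Path 'HKLM:\SOFTWARE\Citrix\MachineIdentityServiceAgent\ImagePreparation' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SOFTWARE\Citrix\MachineIdentityServiceAgent\ImagePreparation' -Name 'Before' -PropertyType MultiString -Value @('C:\Scripts\PrepBefore.cmd') -Force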

 

Citrix CTX140734 Error: “Preparation of the Master VM Image failed” when Creating MCS Catalog in XenApp or XenDesktop: To troubleshoot image prep failures, do the following:

  1. In PowerShell on a Controller, for a new Catalog, run:
    asnp citrix.*
    
    Set-ProvServiceConfigurationData -Name ImageManagementPrep_NoAutoShutdown -Value $True
    
  2. For an existing Catalog, run the following:
    asnp citrix.*
    Get-ProvScheme
    Set-ProvSchemeMetadata -ProvisioningSchemeUid xxxxxxx -Name ImageManagementPrep_NoAutoShutdown -Value $True
  3. On the master image, set the DWORD registry value HKLM\Software\Citrix\MachineIdentityServiceAgent\LOGGING to 1
  4. If you now attempt catalog creation, an extra VM will be started; log into this VM (via the hypervisor console, it has no network access) and see if anything is obviously wrong (e.g. it’s bluescreened or something like that!). If it hasn’t, there should be two log files called “image-prep.log” and “PvsVmAgentLog.txt” created in c:\ – scan these for any errors.
  5. When you’ve finished doing all this debugging, remember to run one of the following:
    Remove-ProvServiceConfigurationData -Name ImageManagementPrep_NoAutoShutdown
    Remove-ProvSchemeMetadata -ProvisioningSchemeUid xxxxxxx -Name ImageManagementPrep_NoAutoShutdown

MCS – Base Disk Deletion

Citrix CTX223133 How to change the disk deletion interval to delete unused base disks on the VM storage. Every 6 hours, XenDesktop runs a task to delete unused base disks.

The Disk Reaper interval is configured using PowerShell. The default values are shown below:

Set-ProvServiceConfigurationData -Name DiskReaper_retryInterval -Value 0:6:0 | Out-Null
Set-ProvServiceConfigurationData -Name DiskReaper_heartbeatInterval -Value 0:1:0 | Out-Null

If the unused base disks are not deleting, then see MCS – Deleting basedisk from VM Storage at Citrix Discussions for troubleshooting steps.

Controller – Name Caching

George Spiers in Active Directory user computer name caching in XenDesktop explains how the Broker Service in XenDesktop Controller caches Active Directory user and computer names. The cache can be updated by running Update-BrokerNameCache -Machines or Update-BrokerNameCache -Users. Also see Update-BrokerNameCache at Citrix Docs.

Delivery Groups in 7.8 and newer

In XenApp/XenDesktop 7.8, when creating a Delivery Group, there are new options for publishing applications and publishing desktops.

On the Applications page of the Create Delivery Group wizard, From start menu reads icons from a machine in the Delivery Group and lets you select them. Manually lets you enter file path and other details manually. These are the same as in prior releases.

Existing is the new option. This lets you easily publish applications across multiple Delivery Groups.

You can also go to the Applications node, edit an existing application, change to the Groups tab, and publish the existing app across additional Delivery Groups.

Once multiple Delivery Groups are selected, you can prioritize them by clicking the Edit Priority button.

On the Desktops page of the Create Delivery Group wizard, you can now publish multiple desktops from a single Delivery Group. Each desktop can be named differently. And you can restrict access to the published desktop.

There doesn’t seem to be any way to publish a Desktop across multiple Delivery Groups.

It’s still not possible to publish apps and desktops across a subset of machines in a Delivery Group. But the new method of publishing apps across multiple Delivery Groups should make it easier to split your machines into multiple Delivery Groups.

Tags in XenApp/XenDesktop 7.12 and newer

In 7.12 and newer, you can assign tags to machines. Then you can publish apps and/or desktops to only those machines that have the tag. This means you can publish icons from a subset of the machines in the Delivery Group, just like you could in XenApp 6.5.

Tags also allow different machines to have different restart schedules.

  1. In Citrix Studio, find the machines you want to tag (e.g. double-click a Delivery Group). You can right-click one machine, or select multiple machines and right-click them. Then click Manage Tags.
  2. Click Create.
  3. Give the tag a name, and click OK. This tag could be assigned to multiple machines.
  4. After the tag is created, check the box next to the tag to assign it to these machines. Then click Save.
  5. Edit a Delivery Group that has published desktops. On the Desktops page, edit one of the desktops.
  6. You can use the Restrict launches to machines with tag checkbox and drop-down to filter the machines the desktop launches from. This allows you to create a new published desktop for every machine in the Delivery Group. In that case, each machine would have a different tag. Create a separate published desktop for each machine, and select one of the tags.
  7. A common request is to create a published desktop for each XenApp server. See Citrix Blog Post How to Assign Desktops to Specific Servers in XenApp 7 for a script that can automate this configuration. 💡
  8. When you create an Application Group, on the Delivery Groups page, there’s an optional checkbox to Restrict launches to machines with tag. Any apps in this app group only launch on machines that have the selected tag assigned. This lets you have common apps across all machines in the Delivery Group, plus one-off apps that might be on only a small number of machines in the Delivery Group. In that case, you’ll have one app group with no tag restrictions for the common apps. And a different app group with tag restriction for the one-off apps.
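
Tags can also be managed from PowerShell. A minimal sketch using example names; restricting an Application Group to a tag via -RestrictToTag is my assumption of the parameter name:

asnp citrix.*

# Create a tag and assign it to a machine
New-BrokerTag -Name 'FinanceApps'
Add-BrokerTag -Name 'FinanceApps' -Machine 'CORP\XA-01'

# Restrict an existing Application Group to machines with that tag (assumed parameter)
Get-BrokerApplicationGroup -Name 'Finance One-Offs' | Set-BrokerApplicationGroup -RestrictToTag 'FinanceApps'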

RDSH Scheduled Restart

If you create a Scheduled Restart inside Citrix Studio, it applies to every machine in the Delivery Group. Alternatively, you can use the 7.12 tags feature to allow different machines to have different restart schedules.

  1. Once an RDSH Delivery Group is created, you can right-click it and click Edit Delivery Group.
  2. The Restart Schedule page lets you schedule a restart of the session hosts.
  3. XenApp 7.7 and newer lets you send multiple notifications.

 

Or use a reboot script:
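
The linked script isn’t reproduced here. A minimal sketch of the usual drain-and-restart pattern, assuming the Citrix snap-ins are loaded and using an example machine name:

asnp citrix.*
$machine = 'CORP\XA-01'   # example RDSH VDA

# Block new logons
Get-BrokerMachine -MachineName $machine | Set-BrokerMachine -InMaintenanceMode $true

# Wait for existing sessions to drain (check every 5 minutes)
while (@(Get-BrokerSession -MachineName $machine).Count -gt 0) {
    Start-Sleep -Seconds 300
}

# Restart via the hosting connection, then allow logons again
New-BrokerHostingPowerAction -MachineName $machine -Action Restart
Get-BrokerMachine -MachineName $machine | Set-BrokerMachine -InMaintenanceMode $false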

Multiple Sessions

From Configure session roaming at Citrix Docs: By default, users can only have one session. On XenApp 7.6 (experimental support) and XenApp 7.7+ (full support), you can configure the SessionReconnection setting via PowerShell. On any Server OS Delivery Group, run:

Set-BrokerEntitlementPolicyRule <Delivery Group Name> -SessionReconnection <Value>

Where <Value> can be:

  • Always – This is the default and matches the behavior of a VDI session. Sessions always roam, regardless of client device.
  • DisconnectedOnly – This reverts back to the XenApp 6.x and earlier behavior. Sessions may be roamed between client devices by first disconnecting them (or using Workspace Control) to explicitly roam them. However, active sessions are not stolen from another client device, and a new session is launched instead.
  • SameEndpointOnly – This matches the behavior of the “ReconnectSame” registry setting in XenApp 6.x. Each user will get a unique session for each client device they use, and roaming between clients is completely disabled.

This will change the roaming behavior for desktop sessions.  For app sessions, use:

Set-BrokerAppEntitlementPolicyRule <Delivery Group Name> -SessionReconnection <Value>

Static Catalog – Export/Import Machine Assignments

It is sometimes useful (e.g. DR) to export machine assignments from one Catalog/Delivery Group and import to another.

From Adil Dean at Exporting Dededicated VDI machine names and user names from catalog in Xendesktop 7.x at Citrix Discussions: Hopefully this is what you are after, it turns out you don’t actually need PowerShell as the functionality is built into the tool.

  1. In Studio, click Delivery Groups in the left-hand menu
  2. Right-click the delivery group and click Edit Delivery Group
  3. Select the Machine allocation tab on the left
  4. Click Export list
  5. Select a file name > Click Save
  6. Create the new machine catalog
  7. Right-click the delivery group > Click Edit
  8. Select the Machine allocation tab on the left
  9. Click Import list
  10. Select the list you exported in step 4
  11. Click Apply

Your clients will now have users re-assigned to machines.

Shane O’Neill produced an export utility that can be scheduled to run periodically. See XenDesktop Farm Migration Utility Update – Version 1.2. 💡
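
If you prefer to script the export yourself, here is only a minimal sketch. The catalog name and CSV path are examples, AssociatedUserNames holds the assigned users, and this assumes the machine names are the same in the target site:

# Export machine-to-user assignments from the source site
Get-BrokerMachine -CatalogName 'Win10-Static' |
    Select-Object MachineName, @{n='Users';e={$_.AssociatedUserNames -join ';'}} |
    Export-Csv C:\Temp\Assignments.csv -NoTypeInformation

# Re-assign users in the target site
Import-Csv C:\Temp\Assignments.csv | ForEach-Object {
    foreach ($user in ($_.Users -split ';' | Where-Object { $_ })) {
        Add-BrokerUser -Name $user -Machine $_.MachineName
    }
}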

Monitor the Number of Free Desktops

Sacha Thomet wrote a script at victim of a good reputation – Low free pooled XenDesktops that polls Director to determine the number of free desktops in a Delivery Group. If lower than the threshold, an email is sent.


107 thoughts on “Catalogs, Delivery Groups, Zones”

  1. Hi Carl,

    I have used Machine Creation Services to create Citrix VDAs from a master image. Can I use the same Citrix master image to create a new machine catalog?

    Best Regards,
    Basem

        1. I see the Disk Cache size option is grayed out during new catalog creation? How can I enable it?

          Thank you in advance.

          1. You are creating non-persistent machines? Is memory caching enabled on the same page?

            Or maybe you didn’t specify a disk cache storage location. In Studio, go to Configuration > Hosting. Edit a hosting resource and specify the storage location for the disk cache.

  2. Hello Carl,

    From XenDesktop 7.6 with Windows 7 to 7.13 with Windows 10, we keep running into a problem where the Windows group policies are sporadically not applied at machine startup, using MCS as the provisioning method.

    We use some startup scripts (e.g. for user profile disks) as well as GPO loopback processing, so proper startup group policy processing is essential.

    To remedy this behavior we already set some local policies in the master image, like “always wait for the network at computer startup and logon” or “specify startup policy processing wait time”.
    This reduces the occurrences, but does not completely resolve the issue.

    Do you have any tips how to fix this problem?

    Best regards

    1. I’ve had this issue in the past also. Also set ‘Configure Logon Script Delay’ to 0. The other thing I’ve been using in my current MCS implementations is the BIS-F framework. It has a module that processes group policy after everything is connected, along with other cleanup and optimization items. I rely heavily on GPOs and haven’t had any issues recently.

  3. Carl,

    The Full Clones option in Studio only appears on the XenDesktop side. I assume linked clones are still used on the RDSH side of things for XA 7.11 and newer? Unless you clone them and bring them in as Full Clones (aka the Other Service or Technology option), which will bring them in as full clones.

  4. Nice docs, small typo:

    MCS ‚Äď Image Prep -> 5. should be Remove-ProvSchemeMetadata not Remove-ProvServiceMetadata

  5. I think the commands in your MCS – Image Prep section changed from Set-ProvServiceMetadata to Set-ProvSchemeMetadata. As the -ProvisioningSchemeUid is no longer available with Set-ProvServiceMetadata

  6. UEFI support with MCS and Gen2 VMs

    Any idea when MCS will support Gen2 VMs? I tried setting the connection option in Studio to EnumerateGen2VMs=True, however it doesn’t seem to enumerate the Gen2 machines. I rebooted the VDA several times as well as the controller but can’t get these machines to enumerate.

      1. Yes. I entered it in the advanced properties of the host connection (in Studio -> Hosting -> Edit Connection -> Advanced), however I’m still not able to get these Gen2 machines to enumerate. Do you know if it is supported in XA 7.11? That’s the version of XA that we are using to test with. Thanks Carl

  7. Does Zone Preference also work without NetScaler ICA proxy connections? Is this a replacement for what we knew in XenApp 6.5 as Load Balancing Policies? I am looking for a way to publish some apps only once in my site, and if a user is a member of a specific group, Word will open on the Satellite XenApp server instead of in the headquarters. That was super easy to implement with LB Policies.

      1. So there are 3 forms of ZonePreference supported by Netscaler 11.x, StoreFront and XenDesktop 7.x. The determinant which controls which VDAs to use to launch apps and desktops is different for all 3 methods.

        Client IP or Client DNS Server IP Based:
        This requires Netscaler 11.x , StoreFront and XenDesktop 7.x to have consistent Zone configuration and works best with StoreFront aggregation also configured.
        This uses GSLB static proximity to determine the location of the client device based on its own IP or the IP of the DNS server it used to resolve a GSLB domain name to a particular service. The GSLB services can be internal StoreFront load balancers or external GSLB Gateways for remote access. the policy to inject the Zone X-Citrix-ZonePreference header can be bound to either LB vServers or VPN vServers within netscaler. I have configured this within Citrix in our production RTST environment. This is the most dynamic Zone Preference method as it supports roaming users who visit different geos frequently but is also by far the most complicated to set up.

        User Group Based using Active Directory Groups:
        XenDesktop 7.x Only.
        This maps VDAs in different zones and geos to particular AD user groups. UKUsers have their apps delivered by UK VDAs and USUsers are directed to US VDAs for a better user experience once their HDX session is established.

        App Based:
        XenDesktop 7.x Only
        This is defined at App level on the XenDesktop 7.x delivery groups and it can be used if a particular app is only available from VDAs in 1 geo and may be absent in the other geos.

  8. Hey Carl,

    We’re looking to setup a multi-tenant unified gateway for our clients. While we already have the multi-tenant design configured and working (2 DDC’s and 2 LB SF’s in a central resource forest with VDA’s in client forests with a 2 way trust between them) we’d like to get HA failover between data centers in 2 geographic locations with 2 separate Citrix sites.

    From my understanding, we need to utilize the application aggregation features on the SF’s. We’re testing this out currently but have you seen any “gotcha’s” in your experience when setting this up?

    Your guides and explanations are always the most helpful resources and I thank you very much for all your hard work!

  9. I keep seeing “Unavailable capacity” on application Server VDAs.
    Some of the Servers keep shutting down despite adjusting the off peak and buffer settings to max

    Set-BrokerDesktopGroup -Name "XXX" -OffPeakBufferSizePercent 100 -PeakBufferSizePercent 100

    OffPeakBufferSizePercent : 100
    OffPeakDisconnectAction : Nothing
    OffPeakDisconnectTimeout : 0
    OffPeakExtendedDisconnectAction : Nothing
    OffPeakExtendedDisconnectTimeout : 0
    OffPeakLogOffAction : Nothing
    OffPeakLogOffTimeout : 0
    PeakBufferSizePercent : 100
    PeakDisconnectAction : Nothing
    PeakDisconnectTimeout : 0
    PeakExtendedDisconnectAction : Nothing
    PeakExtendedDisconnectTimeout : 0
    PeakLogOffAction : Nothing
    PeakLogOffTimeout : 0
    ProtocolPriority : {}

    Any advice on how I can make the servers stay on and available?

  10. Carl Unfortunately I get this error when I run

    Set-BrokerEntitlementPolicyRule EDISKIOSK -SessionReconnection SameEndpointOnly

    I ran this command for another delivery group a few months ago and it worked great, but now I have no clue why this is happening. Running Get-BrokerEntitlementPolicyRule does not return anything.

    Set-BrokerEntitlementPolicyRule : No items match the supplied pattern At line:1 char:1 + Set-BrokerEntitlementPolicyRule

    Thanks always for your help.

      1. Hey Carl, Thanks for responding. I finally found why. Interestingly, when I ran powershell from windows command line, and ran the same command, it executed without any errors. Both ran as admin, but one gave an error exception but the other ran fine.

  11. Carl,

    I’m trying to use XA 7.8 App-V AppLibrary integration without Mgmt/Publishing servers. I was hoping that if I have a new version of an application and I upgrade that App-V package, then import it, it would automatically start launching that new version. However, so far it seems like I have to reboot the PVS target devices in my Delivery Group. I don’t really see a benefit besides a cleaner golden image if I can’t upgrade an application without rebooting at times.

    Maybe I’m doing something wrong? Or maybe there are enhancements in 7.9 or 7.11.

    Thoughts?

  12. Hi Carl,

    I have a question on MCS Memory Caching.

    When creating a new Machine Catalog, I see the option to configure the memory caching.

    Once the catalog has been created, how do I remove or re-configure the caching values?

  13. Carl, just verifying that under Machine Catalog Setup, I would select Server OS to create Xenapp servers, and would not use the Desktop OS selection, and the new options like full clone? Thanks.

  14. Carl, we are looking at MCS to create our Xenapp servers. It seems most of the options at the start of this document are for desktops, so what would you recommend for servers, persistent – non persistent, don’t see full clones as a server option, etc. Thanks.

    1. Since RDSH is a shared system, there’s typically no reason for persistent. Even if you want persistent so they can be updated by an external ESD, it’s difficult to update them while they are being used. Thus, RDSH is almost always built as PvS or MCS Catalogs. One exception I’ve seen is an application that can only store data on the local RDSH server, but this is typically only used by a small number of users.

  15. Hi Carl

    It might also be worth highlighting that delivery controllers in Storefront need to be tagged with their respective zones in their advanced properties. This is necessary if the admin wishes Storefront to prioritise a zone's delivery controllers first within an aggregation group and send the user to a local resource rather than a remote data center. Without a ZonePreference header, Storefront will randomly choose a delivery controller from the list of equivalents in the aggregation group. More often than not, this results in an app or desktop launch from the wrong zone which is NOT closest to the client device. If Storefront receives the X-Citrix-ZonePreference header it reorders the list of available delivery controllers in the aggregation group and places the preferred zone's delivery controllers first in its launch list.

    Mark

  16. Hi Carl

    Netscaler 11.1 can dynamically insert a list of preferred zones using a ZonePreference LB vServer. Each of its services represents a Zone. Each Zone should have monitors attached which probe the DDCs within it and can determine if a Zone is up or down. Create a monitor for each DDC and set a monitoring threshold of at least 1 controller per Zone; if at least 1 of the monitored DDCs is available, then the preferred Zone will be injected into the X-Citrix-ZonePreference header. Up to 3 Zones can be injected into X-Citrix-ZonePreference. The policy can be bound to either LB vServers, or Gateway vServers when using GSLB or manually selecting a gateway in the correct geolocation. If using GSLB and static proximity, the client IP or client DNS server can be used to determine the preferred Zone. The first Zone in the header is the preferred Zone and the next 2 are randomised such as EMEA,US,APAC or EMEA,APAC,US.

    I can send details of how to configure this in Netscaler if you wish?

    Regards Mark @ Citrix Storefront Test & RTST

  17. We recently upgraded to 7.11 and want to build non-linked Full Clones. During Machine Catalog creation, under Desktop Experience I can select “I want users to connect to the same (static) desktop each time they log on”, but I do not get the “Do you want to save any changes that the user makes to the desktop?” options. Do you know why this may be? Do I need to enable something on the DDC’s to have this functionality?

      1. No, I seem to have misread your guide. Although when I do select a hosting resource, when I get to “Virtual Machines” I don’t have the “Select a virtual machine copy mode.” We are using a Nutanix AHV backend through the Nutanix Studio Plugin, perhaps it’s a limitation of the Nutanix plugin.

          1. We also don’t have the “Select a virtual machine copy mode” option. We are using Xenapp 7.11 MCS with Vsphere 5.5.

  18. Carl,

    I actually found that using ‘Set-ProvServiceConfigurationData -Name ImageManagementPrep_ExcludedSteps’ does not work. Using ‘Set-ProvServiceConfigurationData -Name ImageManagementPrep_Excluded_Steps’ works for me.

    -Mark

  19. Hi,

    One of my MCS servers’ computer objects got deleted from AD and now the MCS server is showing unregistered. Is it possible to remove the machine from the domain and rejoin? Or do I need to create a new machine from the base image?

    BR

  20. Hi Carl,

    you mentioned the following
    During Disaster Recovery, restore the VM (both disks). You might have to remove any Custom Attributes on the machine, especially the XdConfig attribute.

    Now, this will be a huge manual effort and will require significant time if we are talking about 5000+ VDIs. Or can we automate it?

    Just wanted to understand can we do the following with full clone

    Create VMs in both DCs with no storage attached to the DR VMs. In the event of a failure, these VMs’ disks will be assigned from the backup storage location. The storage for each VM will be made available from the latest storage backup of the dedicated VMs and attached individually to the pre-created VMs that are already present.

    Please let me know your inputs.

    Kind Regards,
    Nivesh Pankaj

    1. They’re just regular VMs so you can do anything that normally works for a VM.

      As for the attribute, there might be a PowerCLI option. Let me do more research.

  21. Carl,

    Setting up a new XA 7.11 site. I’ve been through XA 6.5, XD 5.6, XD 7.6, and now XA 7.11. This time when I go to create an MCS-based machine catalog I don’t see any option to specify CPU (this has changed). What is the recommended way to set your socket/core count for the catalog? I also don’t see networking anymore? If everything is setup in the base image will it properly propagate?

    -Mark

    1. Networking comes from the Hosting Resources.

      I think CPU was intentionally removed. Can’t remember which version. But it should copy from the master.

  22. Hello Carl,

    I have XenDesktop 7.8 setup. Some of my MCS servers are moved to different vCenter but on same Cluster, Storage and Virtual network and now power status is unknown. Only vCenter server is changed.

    I have two hosting connections for those two vCenter (vCenter1 and vCenter2). The vCenter1 is the previous vCenter on which MCS servers were hosted. On vCenter 1 I can see Resources connection but not showing storage as it is moved to vCenter2.

    On vCenter2, I don’t have any resources connection set up yet but have VDI’s and Servers hosted manually on it.

    I am thinking to rename Resource on vCenter1 and then create new Resource with the same name (same cluster, storage and virtual network) on vCenter 2. vCenter2 hosting connection was created with other tools (not with Studio tools Machine Creation Service). Can I create Resources connection for MCS servers under same existing connection or should I create new hosting connection with vCenter 2 for MCS servers?

    Is it possible to update the MCS servers hosting connection details with Powershell commands after creating new hosting connection-resource and get power status?

    Regards

    1. If the UUIDs change in vCenter then it probably won’t work.

      To move MCS, you basically have to rebuild them. If Dedicated MCS, you can clone them to Full Clones, then add them to a Manual Catalog.

      1. I get the following entries, which look the same.

        Get-BrokerMachine -PowerState Unknown –HostedMachineId : 422e2e47-6da9-69b1-cb74-0ff0b1427bad
        Vmx file entry for MCS server — uuid.bios = “42 2e 2e 47 6d a9 69 b1-cb 74 0f f0 b1 42 7b ad”

        Can I update it with Set-BrokerMachine -MachineName ‘MyDomain\MyMachine’ -HostedMachineId

        Also please share your thoughts on below point.

        I am thinking to rename Resource on vCenter1 and then create new Resource with the same name (same cluster, storage and virtual network) on vCenter 2. vCenter2 hosting connection was created with other tools (not with Studio tools Machine Creation Service). Can I create Resources connection for MCS servers under same existing connection or should I create new hosting connection with vCenter 2 for MCS servers?

        Regards.

        1. If you look inside the .vmdk file, it has a pointer to the parent image. If the path doesn’t change, then it might work.

          Yes, you can add Hosting Resource to an existing Connection.

          1. My base image is also moved from vCenter 1 to vCenter 2 on same cluster, storage and virtual network where MCS app servers are.

          1. Is it the right command to update -> Set-BrokerMachine -MachineName 'xyz\Server1' -HostedMachineId 422e2e47-6da9-69b1-cb74-0ff0b1427bad

          2. The connection will work, but the machines probably won’t. MCS has many links and dependencies and can’t be easily moved. You usually have to recreate them.

          3. In this case i need to rebuild MCS servers. Please check below steps. Is there anything I m missing?

            * Create new hosting connection and resources
            * Create new Machine Catalog with a single server using the existing base image snapshot
            * Add the newly created server to the existing Delivery Group
            * Put existing servers in maintenance mode and check apps on new server
            * Add additional servers to new Machine Catalog
            * Add new servers to existing Delivery Group
            * Remove old servers from old machine catalog and delete old catalog

      2. Hello Carl,
        I have a similar question. I have a XenApp 7.8 environment, and I need to migrate it to a new vCenter. I already have a hosting connection set up for the new vCenter with the same network and datastores, and I want to create a new machine catalog with the same base image-template in this new vCenter. Can I move the ESX host with the base image-template to the new vCenter? Can this action break the existing machine catalog? Also, will I be able to delete the machine catalog later if I moved the image-template to the new vCenter?

        1. Moving ESX won’t change anything, assuming you still have hosts on the old vCenter. MCS does not need the master image for normal operations. It’s only needed during catalog updates.

          1. Thanks Carl,
            Yes, I have the master image on the old VCenter (I still have some ESXi hosts on it). I will move the master image from old vCenter to new vCenter with one of those hosts, then I will create a new machine catalog

  23. Question. I would like to use zones to set up a single-site (two physical sites) XenApp environment. They are connected by a 100Mb WAN connection and the satellite site would be a DR site with its own hypervisor, controllers, NetScalers, and server VDAs. I don’t see how connection leasing would work as users would only connect to DR in the event of a failover. Would I be able to use SQL mirroring/Always On in conjunction with a primary/satellite zone model? I don’t really see any reason why this wouldn’t work, but I can’t find anything saying you can do this.

    1. I still prefer two separate farms until Citrix adds zone preference / failover and true offline database. No SQL DR required. StoreFront controls the connections.

  24. Is there a way to span subnets within a single machine catalog using MCS? We would like to have a single catalog of 5000 machines, but having a single subnet with 5000 machines causes broadcast storms. We would like to avoid having multiple catalogs with the only difference being the network they are assigned to.

  25. Carl,
    Wondering if you can point me in the right direction.
    We are running XD7.8 and using MCS to create new servers in our AWS environment.
    Everything is running nicely, however I want to run a script to install AV software and activate Office after MCS creates the machine.
    What would be the best way to have a script launch once a machine is created by MCS?
    The base image which is being used by MCS is an Amazon Web Service AMI that I created.

    Thank you in advance.

    1. Can you create a group policy with a computer startup script?

      Or you can create a Scheduled Task that runs when then computer starts.

  26. Hi Carl,

    I created an MCS-provisioned catalog; at that time I used 3 LUNs, so the base disk is there on all LUNs.

    I added 3 more LUNs, but I could not see the base disk on the new LUNs. Do I need to update the machine catalog to get the base disk copied to all LUNs?

    Please suggest.

    Thanks in advance!

    1. When you add more machines it should start using the new datastores and copy the snapshot to those datastores. You might have to add 6 machines before it does it. Note, there is no rebalance option. To rebalance, you’ll have to delete the VMs and remake them.

  27. Is there an equivalent to the load balancing policies of XenApp 6.5 in XD 7.6?

    In 7.6 I don’t see the failover policies that XenApp 6.x had, where you could specify one worker group of servers as primary and fail over to a secondary group.
    I can’t see how you can add 2 machine catalogues to 1 delivery group and place a catalogue in maintenance – unless you create a single MC and fail over individual machines.

  28. Insightful,
    I am creating 2 machine catalogs (1 at the DR site, the other at the main site) to be used by a single delivery group, not at the same time.
    How do I load balance (or SWAP) the machine catalogs so I can perform maintenance one at a time without affecting user sessions?
    Thanks

    1. Add both to the Delivery Group. Then put the DR machines in maintenance mode and they won’t be used until you turn off maintenance mode. No need to remove and add machines.

      1. I have XD 7.9; there are 2 machine catalogs created (A – 1 machine, B – 1 machine). I tried to add both catalogs to delivery group C, but apparently it doesn’t allow me to do so.
        Can you please tell me whether I have done anything wrong?

  29. Carl,

    I’ve found that specifying session reconnect options for the delivery groups to anything but “Always” breaks session sharing. Have you found a way to keep the session from roaming and keep session sharing?

  30. Hi Carl

    You suggested not using linked clones for persistent VDI; are such machines to be configured as “Remote PC Access”?

    When installing the VDA, there are only the options “Create a Master Image” and “Enable Remote PC Access”. Which option would be appropriate in this case?

    1. It doesn’t matter. The “Master Image” option installs a few more services than the other option. That’s the only difference. I usually install with “Master Image” just in case I want to do that later.

      1. I have some persistent VDIs which were earlier created with MCS under version 7.6.
        I would like to upgrade both the catalog and Delivery Group to 7.7, but the VDA on the VMs should be the same version to access the latest features.

        Since a persistent VDI actually disassociates itself from the original base image, and it is no longer possible to update the machines through the Catalog, would it be okay to upgrade the VDA directly on the persistent VDI?

        Thanks

        1. Yes. The linked clones are handled by vSphere, not Citrix. Citrix only does the machine identity.

          I would recommend that you clone each persistent VM into a full clone so you don’t have to worry about linked clones anymore.

  31. Hello Carl,

    I have an issue with configuring “Windowed Mode” and “Multiple Instance” on XenApp 7.6 at the same time.

    I am only able to achieve one at a time. For example, if I edit the default.ica file as below, it will allow me to launch an application in windowed mode. If this setting is removed, then I will be able to launch multiple instances. Please suggest.

    [APP123]
    TWIMode=Off
    ScreenPercent=99

        1. When you launch multiple instances, are they in the same session? If so, I would expect all of the instances to run in the same window.

          Otherwise, I don’t have any advice. You can try posting to discussions.citrix.com or contact Citrix Support.

          1. When I launch multiple instances, they are in the same session, meaning I don’t see multiple sessions in Studio. But they are launched in different windows. I will try posting in Citrix. Thanks for your inputs so far.

  32. HI Carl,

    I wondered what you would think is the best delivery method if I was to build two XenApp images for roughly 100 people. This is a single data centre solution, so no DR required.

    Would MCS be preferred if it is non-persistent? I would imagine PVS would be overkill. Else, if it was persistent, as you stated, something else such as SCCM or hypervisor cloning would be best?

    1. The problem with MCS is that you can no longer just login to your VDAs and make a change. Instead you have to update the master and then push it to the linked clones. For only two identical machines, I would do updates manually on both. However, some people prefer the master image method because it gives them one thing to back up.

  33. Good Afternoon Carl,
    Curious if you ran across the Citrix recommendation to have a separate drive (D:) for the Windows OS paging file and the Windows spool location. I believe this was the recommendation a while back for optimum performance (separate disk queues for each drive). Just curious if we should be concerned with that, or does Citrix just recommend a C: drive large enough to hold everything? The provisioning method will be MCS as well.

    1. I don’t see any difference between Citrix VDAs and regular PCs. If you don’t multi-partition your PCs then don’t multi-partition your VDAs. XenApp is nothing more than a PC that lets multiple users login at once.
