Provisioning Services – Console Config

Last Modified: Sep 17, 2016 @ 8:17 am


Launch the Console

  1. Launch the Provisioning Services Console.
  2. Right-click the top-left node and click Connect to Farm.
  3. Enter localhost and click Connect.
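
The same connection can be scripted. A minimal sketch using the PVS PowerShell snap-in; the module path assumes a default PVS 7.x console install (older versions register the McliPSSnapIn snap-in instead):

  # Load the PVS PowerShell snap-in that ships with the console
  Import-Module "C:\Program Files\Citrix\Provisioning Services Console\Citrix.PVS.SnapIn.dll"

  # Connect to the local farm on the default SOAP port (54321)
  Set-PvsConnection -Server localhost -Port 54321

  # Verify the connection by returning the farm record
  Get-PvsFarm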

Farm Properties

  1. Right-click the farm name and click Properties.
  2. On the Groups tab, add the Citrix Admins group.
  3. On the Security tab, add the Citrix Admins group to grant it full permission to the entire Provisioning Services farm. You can also assign permissions at various nodes in the Provisioning Services console.
  4. On the Options tab, check the boxes next to Enable Auditing and Enable offline database support.
  5. In Provisioning Services 7.9 and newer, notice the new Send anonymous statistics and usage information checkbox.
  6. On the Problem Report tab, you can enter MyCitrix credentials.
  7. Click OK to close Farm Properties.
  8. Click OK when prompted that a restart of the service is required.
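
To script the same Options tab settings, here is a minimal sketch using the Citrix.PVS.SnapIn object model (the property names mirror the MCLI Farm fields; verify them with Get-PvsFarm in your environment):

  # Read the farm record, flip the Options tab settings, and write it back
  $farm = Get-PvsFarm
  $farm.AuditingEnabled = $true                  # Enable Auditing
  $farm.OfflineDatabaseSupportEnabled = $true    # Enable offline database support
  Set-PvsFarm $farm

As in the console, the change requires a restart of the service to take effect.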

Server Properties

From the Citrix blog posts Updated Guidance on PVS Ports and Threads and PVS Secrets (Part 3) – Ports & Threads:

Q: What is the optimum number of ports for the Stream Service?

A: The best Stream Process performance is attained when the number of threads per port is no greater than the number of CPU cores available on the Provisioning Server. For best performance, use the following formula:

# of ports x # of threads/port = max clients
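
For example, a Provisioning Server VM with 8 vCPUs should run 8 threads per port; with the default UDP port range of 6910-6930 (21 ports), that yields 21 x 8 = 168 concurrent target devices. Extending the Last port to 6968 (59 ports), as described in the steps below, raises the ceiling to 59 x 8 = 472.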

  1. Expand the site and click Servers. Right-click each Provisioning Server and click Configure Bootstrap.
  2. Click Read Servers from Database. This should cause both servers to appear in the list.
  3. On the Options tab, check the box next to Verbose mode.
  4. Right-click the server, and click Properties.
  5. Check the box next to Log events to the server’s Windows Event Log.
  6. Click Advanced.
  7. Increase the threads per port. The number of threads per port should match the number of vCPUs assigned to the server. Click OK.
  8. On the same tab are the concurrent I/O limits. These throttle connections to local (drive letter) or remote (UNC path) storage; setting them to 0 turns the throttling off. Only testing will determine the optimal number.
  9. On the Network tab, change the Last port to 6968. Note that port 6969 is used by a different Provisioning Server component. Click OK when done. (These settings can also be scripted; see the sketch after this list.)
  10. Repeat for the other servers.
  11. Click Yes if prompted to restart the Stream Service.
  12. If you get an error message about the Stream Service, restart it manually.
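
A scripted version of the server settings above (steps 5, 7, and 9), using the Citrix.PVS.SnapIn object model. The property names mirror the MCLI Server fields, and the service name StreamService is an assumption, so verify both in your environment:

  # Apply the same settings to every Provisioning Server in the farm
  foreach ($s in Get-PvsServer) {
      $s.EventLoggingEnabled = $true   # Log events to the Windows Event Log
      $s.ThreadsPerPort = 8            # Match the server's vCPU count
      $s.LastPort = 6968               # Stop short of 6969, which another PVS component uses
      Set-PvsServer $s
  }

  # The new settings take effect only after the Stream Service restarts
  Restart-Service StreamService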

Create vDisk Stores

To create additional vDisk stores (one per vDisk / Delivery Group), do the following:

  1. On each PvS server, use Explorer to go to the local disk containing the vDisk folders and create a new folder. The folder name usually matches the vDisk name.
  2. In the Provisioning Server Console, right-click Stores, and click Create Store.
  3. Enter a name for the vDisk store, select an existing site, and switch to the Servers tab.
  4. Check the boxes next to the Provisioning Servers.
  5. On the Paths tab, enter the path for the Delivery Group’s vDisk files. Click Validate.
  6. Click OK twice.
  7. Click Yes when asked for the location of write caches.
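
Store creation can also be scripted. A sketch assuming the Citrix.PVS.SnapIn cmdlets, with hypothetical folder, store, and site names:

  # Create the folder that will hold this Delivery Group's vDisk (run on each PvS server)
  New-Item -ItemType Directory -Path "D:\vDisks\Win10"

  # Create the store and point it at the new folder
  New-PvsStore -StoreName "Win10" -SiteName "Site1" -Path "D:\vDisks\Win10"

The Servers tab assignment from step 4 still needs to be done in the console (or with additional snap-in cmdlets) so each Provisioning Server is permitted to serve the store.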

Create Device Collections

  1. Expand the site, right-click Device Collections, and click Create Device Collection.
  2. Name the collection in some fashion related to the name of the Delivery Group and click OK.
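
A one-line equivalent, assuming the Citrix.PVS.SnapIn cmdlets and hypothetical names:

  # Name the collection after the Delivery Group it backs
  New-PvsCollection -CollectionName "Win10-Desktops" -SiteName "Site1"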

If you are migrating from one PvS farm to another, see Kyle Wise’s How To Migrate PVS Target Devices.

Prevent “No vDisk Found” PXE Message

If you PXE boot a machine that is not added as a device in the PvS console, then the machine will pause booting with a “No vDisk Found” message at the BIOS boot screen. Do the following to prevent this.

  1. Enable the Auto-Add feature in the farm Properties on the Options tab.
  2. Create a small dummy vDisk (e.g., 100 MB).
  3. Create a dummy Device Collection.
  4. Create a dummy device, set it to boot from Hard Disk, and assign the dummy vDisk.
  5. Set the dummy device as the Template.
  6. In site properties, on the Options tab, point the Auto-Add feature to the dummy collection. (See the scripted sketch below.)
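
The dummy objects above can be sketched in PowerShell as well. Every name and the MAC address below are placeholders, and the property names (AutoAddEnabled, BootFrom, Template, DefaultCollectionName) assume the Citrix.PVS.SnapIn object model, so verify them before relying on this:

  # Step 1: enable Auto-Add at the farm level
  $farm = Get-PvsFarm
  $farm.AutoAddEnabled = $true
  Set-PvsFarm $farm

  # Steps 3-4: dummy collection and device
  New-PvsCollection -CollectionName "Dummy" -SiteName "Site1"
  New-PvsDevice -DeviceName "dummy01" -CollectionName "Dummy" -SiteName "Site1" -DeviceMac "00-00-00-00-00-01"

  # Steps 4-5: boot from hard disk and mark the device as the template
  $d = Get-PvsDevice -DeviceName "dummy01"
  $d.BootFrom = 2       # 1 = vDisk, 2 = hard disk, 3 = floppy
  $d.Template = $true
  Set-PvsDevice $d

  # Step 6: point Auto-Add at the dummy collection
  $site = Get-PvsSite -SiteName "Site1"
  $site.DefaultCollectionName = "Dummy"
  Set-PvsSite $site

Creating the dummy vDisk and assigning it (step 2 and the end of step 4) are left to the console.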


11 thoughts on “Provisioning Services – Console Config”

  1. “Increase the threads per port. The number of threads per port should match the number of cores in the server (including hyperthreading).”

    Do you mean the cores of the Provisioning Server virtual machine, or the cores of the physical vSphere host?

  2. Carl, when going to a server, and clicking Show Connected Devices, do you know of a way to export that data? I was thinking with MCLI-GET, but I don’t see the option.

    1. Maybe MCLI Get DeviceStatus? There should be a serverIpConnection column. You can probably filter the command to only retrieve devices for a specific server.
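
       A rough sketch of that approach with the object-based snap-in (Get-PvsDeviceInfo); the server IP below is a placeholder:

         # Export device status, including the serverIpConnection column, to CSV
         Get-PvsDeviceInfo |
             Where-Object { $_.ServerIpConnection -eq "10.0.0.11" } |
             Export-Csv C:\Temp\ConnectedDevices.csv -NoTypeInformation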

  3. Carl,
    Can you find the XenDesktop Controller address in the PVS console? If not, where will that information be displayed?

    1. PvS only talks to Controllers when you run the XenDesktop Setup Wizard. Or are you asking about the Controllers that the VDAs are registering with? If so, check the ListOfDDCs registry key on the VDA.

      1. If we were going to run the XenDesktop Setup Wizard in the PvS console and didn’t have the Controller host name or IP address, where would we look for that information?

  4. Hi Carl
    Thanks for the excellent article. I see that you are using a local folder for the PVS vDisks. I have a question regarding that.
    I am designing a PVS deployment. I have 3 PVS servers and will be hosting around 5 vDisks for now (Standard mode). I have not been told whether CIFS is available.

    Where can I place these vDisks. I can think of two options and would like to know which one is better in your opinion.

    Option 1
    Place the vDisks locally on each PVS server. Configure DFS-R on each PVS server and let it synchronize the vDisks across the PVS servers. This avoids a single point of failure but requires extra space to store the vDisks on each server.

    Option 2
    Place the vDisks on a CIFS (SMB 3.0) share. All updates get stored on the share, to which all PVS servers connect. This could be a single point of failure but requires less space.

    I also read that, by default, PVS streams from disk for the first target, caches the vDisk contents in server RAM, and serves subsequent streams from the PVS server’s RAM. Does this happen for both options?


    1. 1. This is the traditional configuration because it provides maximum availability, maximum performance, and less network activity. But yes, extra disk space.
      2. Where I’ve seen SMB shares, the performance seems to be lower than local storage. Also, older versions of PvS did not cache SMB vDisks, but newer versions should now cache them.
