Provisioning Services – Console Config

Last Modified: Jun 17, 2017 @ 11:05 am


Launch the Console

  1. Launch the Provisioning Services Console.
  2. Right-click the top-left node, and click Connect to Farm.
  3. Enter localhost, and click Connect.

Farm Properties

  1. Right-click the farm name, and click Properties.
  2. On the Groups tab, add the Citrix Admins group.
  3. On the Security tab, add the Citrix Administrators group to grant it full permission to the entire Provisioning Services farm. You can also assign permissions in various nodes in the Provisioning Services console.
  4. On the Options tab, check the boxes next to Enable Auditing, and Enable offline database support.
  5. In Provisioning Services 7.9 and newer, notice the new Send anonymous statistics and usage information checkbox, which enables the Customer Experience Improvement Program (CEIP).
  6. On the Problem Report tab, you can enter MyCitrix credentials.
  7. Click OK to close Farm Properties.
  8. Click OK when prompted that a restart of the service is required.

Server Properties

From the Citrix Blog Posts “Updated Guidance on PVS Ports and Threads” and “PVS Secrets (Part 3) – Ports & Threads”:

Q: What is the optimum number of ports for the Stream Service?

A: The best Stream Process performance is attained when the threads per port is not greater than the number of cores available on the Provisioning Server. For best performance, use the following formula:

# of ports x # of threads/port = max clients
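As a worked example of this formula, the sketch below assumes the default first stream port of 6910, a last port of 6968 (as set on the Network tab in the steps below), and 8 threads per port; all three values are illustrative and should be adjusted to your own environment:

```python
# Sketch of the "ports x threads/port = max clients" sizing formula.
# The port range and thread count below are illustrative assumptions.
first_port = 6910        # default Stream Service first port
last_port = 6968         # last port, as configured on the Network tab
threads_per_port = 8     # should match the vCPUs on the PVS server

num_ports = last_port - first_port + 1
max_clients = num_ports * threads_per_port

print(num_ports)     # 59 ports
print(max_clients)   # 472 concurrent target devices
```

With 8 vCPUs per server, this port range supports roughly 472 concurrent target devices per Provisioning Server before streams start queuing.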

  1. Expand the site and click Servers. For each Provisioning Server, right-click it, and click Configure Bootstrap.
  2. Click Read Servers from Database. This should cause both servers to appear in the list.
  3. On the Options tab, check the box next to Verbose mode.
  4. Right-click the server, and click Properties.
  5. On the General tab, check the box next to Log events to the server’s Windows Event Log.
  6. Click Advanced.
  7. Increase the threads per port. The number of threads per port should match the number of vCPUs assigned to the server.
  8. On the same tab are concurrent I/O limits. Note that these throttle connections to local (drive letter) or remote (UNC path) storage. Setting them to 0 turns off the throttling. Only testing will determine the optimal number.
  9. Click OK to close Advanced Server Properties.
  10. On the Network tab, change the Last port to 6968. Note: port 6969 is used by the Provisioning Services two-stage boot component. Click OK when done.
  11. Click Yes if prompted to restart the stream service.
  12. If you get an error message about the stream service, then you’ll need to restart it manually.

  13. Repeat for the other servers. You can copy the Server Properties from the first server, and paste them to additional servers.

Create vDisk Stores

To create additional vDisk stores (one per vDisk / Delivery Group), do the following:

  1. On the PvS servers, using Explorer, go to the local disk containing the vDisk folders and create a new folder. The folder name usually matches the vDisk name. Do this on both PvS servers.
  2. In the Provisioning Server Console, right-click Stores, and click Create Store.
  3. Enter the name for the vDisk store, and select an existing site.
  4. Switch to the Servers tab. Check the boxes next to the Provisioning Servers.
  5. On the Paths tab, enter the path for the Delivery Group’s vDisk files. Click Validate.
  6. Click Close and then click OK.
  7. Click Yes when asked for the location of write caches.

Create Device Collections

  1. Expand the site, right-click Device Collections, and click Create Device Collection.
  2. Name the collection in some fashion related to the name of the Delivery Group, and click OK.

If you are migrating from one PvS farm to another, see Kyle Wise’s How To Migrate PVS Target Devices.

Prevent “No vDisk Found” PXE Message

If PXE is enabled on your PvS servers, and if you PXE boot a machine that is not added as a device in the PvS console, then the machine will pause booting with a “No vDisk Found” message at the BIOS boot screen. Do the following to prevent this.

  1. Enable the Auto-Add feature in the farm Properties on the Options tab.
  2. Create a small dummy vDisk (e.g. 100 MB).
  3. Create a dummy Device Collection.
  4. Create a dummy device.
  5. Set it to boot from Hard Disk.
  6. Assign the dummy vDisk and click OK.
  7. Set the dummy device as the Template.
  8. Right-click the site, and click Properties.
  9. On the Options tab, point the Auto-Add feature to the Dummy collection, and click OK.


17 thoughts on “Provisioning Services – Console Config”

  1. Hi Carl,

    This is rather a weird question. Is there a way to force PVS to use more RAM? Our consultants specced our two PVS VMs to have 200 GB of RAM in order to stream about 15 images, with the assumption that PVS would use around 10–15 GB of RAM per image. Right now, with all 15 images (streaming 550 VDI VMs), we are only using 15 GB total. I am not sure if the new versions are just more optimized to use less RAM, or if there is anything we can do to allow PVS to consume more RAM.


    1. Are the vDisks stored locally? Or are they on a remote share?

      I think there’s a perfmon counter indicating cache hits. If the percentage is high, then I guess it doesn’t need to cache anything else in RAM.

  2. Hi Carl

    I have PVS servers configured. PVS1 seems to be fine. Unfortunately, while installing and configuring PVS2, two IP addresses were erroneously configured in the system network configuration. This was only discovered after the PVS configuration, and the unneeded IP was removed.

    However, whenever I create a Boot ISO on PVS2, the removed IP still shows when trying to use the ISO file. It seems the information is still stored somewhere in the PVS2 configuration. I have checked through the registry but found nothing.

    Do you have an idea where in the PVS configuration or system files I should look to remove the erroneous IP?


  3. Increase the threads per port. The number of threads per port should match the number of cores in the server (including hyperthreading).

    Do you mean the cores of the Provisioning Server virtual machine, or the physical vSphere host?

  4. Carl, when going to a server and clicking Show Connected Devices, do you know of a way to export that data? I was thinking of MCLI Get, but I don’t see the option.

    1. Maybe MCLI Get DeviceStatus? There should be a serverIpConnection column. You can probably filter the command to only retrieve devices for a specific server.
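One way to get that data into a spreadsheet is to parse the MCLI output and write it to CSV. The sketch below is a minimal example: the sample text, device names, record layout (a Record # line followed by indented name: value pairs), and the serverIpConnection filter value are all assumptions for illustration — verify the actual output of MCLI Get DeviceStatus on your PVS version before relying on it:

```python
import csv, io

# Hypothetical sample of "MCLI Get DeviceStatus" output; the real record
# format may differ by PVS version -- check your own output first.
sample = """\
Record #1
  deviceName: VDI001
  serverIpConnection: 10.0.0.11
Record #2
  deviceName: VDI002
  serverIpConnection: 10.0.0.12
"""

def parse_records(text):
    """Split record-style output into a list of dicts."""
    records, current = [], None
    for line in text.splitlines():
        if line.startswith("Record #"):
            current = {}
            records.append(current)
        elif ":" in line and current is not None:
            key, _, value = line.strip().partition(":")
            current[key.strip()] = value.strip()
    return records

# Keep only devices streamed from a specific server, then write CSV.
devices = [r for r in parse_records(sample)
           if r.get("serverIpConnection") == "10.0.0.11"]
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["deviceName", "serverIpConnection"])
writer.writeheader()
writer.writerows(devices)
print(out.getvalue())
```

In practice you would feed the script the captured MCLI output (e.g. redirected to a text file) instead of the inline sample string.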

  5. Carl,
    Can you find the XenDesktop Controller address in the PVS console? If not, where will that information be displayed?

    1. PvS only talks to Controllers when you run the XenDesktop Setup Wizard. Or are you asking about the Controllers that the VDAs are registering with? If so, check the ListOfDDCs registry key on the VDA.

      1. If we were going to run the XenDesktop Setup Wizard in the PvS console and didn’t have the Controller host name or IP, where would be the place to look?

  6. Hi Carl
    Thanks for the excellent article. I see that you are using a local folder for the PVS vDisks. I have a question regarding that.
    I am designing a PVS deployment. I have 3 PVS servers and will be hosting around 5 vDisks for now (Standard mode). I have not been told whether CIFS is available or not.

    Where can I place these vDisks? I can think of two options and would like to know which one is better in your opinion.

    Option 1
    Place the vDisks locally on each PVS server. Configure DFS-R on each PVS server and let it synchronize the vDisks across the PVS servers. This avoids a single point of failure but requires extra space to store the vDisks on each server.

    Option 2
    Place the vDisks on a CIFS (SMB 3.0) share. All updates get stored in the share to which all PVS servers are connected. This could be a single point of failure but requires less space.

    I also read that by default PVS streams from disk for the first target, caches the vDisk contents in server RAM, and serves subsequent streams from PVS server RAM. Does this happen for both options?


    1. 1. This is the traditional configuration because it provides maximum availability, maximum performance, and less network activity. But yes, extra disk space.
      2. Where I’ve seen SMB shares, the performance seems to be lower than local storage. Also, older versions of PvS did not cache SMB vDisks, but newer versions should now cache them.
