Citrix Provisioning Services PowerShell Wrapper

I have just published a new project that I have been working on over at CodePlex.

This project was to create a wrapper for the MCliPsSnapin that is provided by Citrix for the automation and scripting of Provisioning Services. I got fed up with navigating the unwieldy syntax of the snap-in, which isn't true to the spirit of PowerShell, so I decided that instead of complaining I could do something about it.

At this time, all of the Add, Get, Set, SetList and Delete commands have been implemented. Many of the Run commands have also been done, but there are a few still to go.

So, please head over to CodePlex and download if you think it would be useful. Please let me know if you have any issues or feedback on the project.


Posted in Citrix, Powershell, Programming | Leave a comment

DHCP Option 119 – DNS Search Suffix – PowerShell Array Builder

Although Microsoft clients may not support DHCP Option 119, it is nonetheless a very important option for Linux and OSX clients. However, configuring such an option is not exactly the friendliest thing in the world.

RFC 3397 defines the standard. It requires that the option value be presented as a byte array. One of the main benefits of the RFC, though, is the ability to use pointers to avoid duplication in the values that are transferred to the client. This, however, makes the process of creating the correct array of values more prone to error.

For example, the following byte array encodes two domains, and

0x0C 0x63 0x72 0x61 0x69 0x67 0x2D 0x74 0x6F 0x6C 0x6C 0x65 0x79 0x02 0x63 0x6F 0x02 0x75 0x6B 0x00 0x06 0x66 0x6F 0x6F 0x62 0x61 0x72 0xC0 0x0D

Hardly readable, is it?

I needed to do this for half a dozen scopes, each with 3 or 4 suffixes to be presented to the clients. Not wanting to waste a whole day setting up the array values, I spent a little time building a PowerShell function that can take a series of domain suffixes in order, and convert them into a byte array, utilising the pointers as much as possible.

How to Use It

Copy the entire function into a PowerShell window. This will create the function. Next, use it like this:

Make-HexDomainSearchSuffix -Domains "", ""

This will then print out a series of hex values ready for entering into the DHCP console.
However, this is still too much work for me, and still subject to errors creeping in. If your DHCP servers are running Windows 2012, then you have the PowerShell DHCP cmdlets at your disposal, and you can push the output straight into the option like this:

Set-DhcpServerv4OptionValue -ScopeId -OptionId 119 -Value (Make-HexDomainSearchSuffix -Domains "", "")


Setting Up Microsoft Windows DHCP Server to Present Option 119

Microsoft DHCP servers do not make this option available by default. It has to be added as a Predefined Option before you can assign the option to a scope or to the server.
To do this:

  1. Open the DHCP Console and expand the server node.
  2. Right click IPv4 and select ‘Predefined Options and Values’
  3. Ensure the Option Class is ‘DHCP Standard Options’
  4. Click Add and enter the details for the new option: a name such as ‘DNS Search Suffix’, data type Byte with the Array box ticked, and code 119
  5. Click OK

The option is now available to be added to scopes and servers.

The Script

# ----------------------------------------------------------------------------------------------------------
# PURPOSE:    Creates a byte array for use with DHCP Option 119
# VERSION     DATE         USER                DETAILS
# 1           07/03/2016   Craig Tolley        First version
# 1.1         08/03/2016   Craig Tolley        Fixed issue where if the whole domain matched the pointer was incorrect
# ----------------------------------------------------------------------------------------------------------

function Make-HexDomainSearchSuffix
{
    param
    (
        [Parameter(Mandatory = $true)]
        [String[]]$Domains
    )

    # Helper function for converting whole strings to byte arrays.
    function Convert-StringToOpt119Hex
    {
        param
        (
            [String]$SourceText,
            $Pointer
        )

        $Hexed = @()
        $SourceText.Split(".") | ForEach {
            $Hexed += [String]::Format("0x{0:X2}" -f [int]($_.ToCharArray().Count))
            $_.ToCharArray() | ForEach {
                $Hexed += [String]::Format("0x{0:X2}" -f [int]$_)
            }
        }
        Write-Output $Hexed
    }

    # Build the list of objects that we want to work with. 
    $DomListInt = @()
    ForEach ($Domain in $Domains)
    {
        $D = New-Object Object
        Add-Member -InputObject $D -MemberType NoteProperty -Name "DomainName" -Value $Domain
        Add-Member -InputObject $D -MemberType NoteProperty -Name "LinkedDomainIndex" -Value $null
        Add-Member -InputObject $D -MemberType NoteProperty -Name "LinkedDomainStartIndex" -Value $null
        Add-Member -InputObject $D -MemberType NoteProperty -Name "HexArray" -Value (New-Object System.Collections.ArrayList)
        Add-Member -InputObject $D -MemberType NoteProperty -Name "HexLength" -Value 0
        $DomListInt += $D
    }

    # Work out if we can have any links
    ForEach ($Domain in $DomListInt)
    {
        # The first domain must be converted in full
        $DIndex = $DomListInt.IndexOf($Domain)
        if ($DIndex -eq 0)
        {
            $Domain.HexArray = Convert-StringToOpt119Hex -SourceText $Domain.DomainName -Pointer $Null
            continue
        }

        $Matched = $false
        $c = $Domain.DomainName.Split(".").Count
        for ($i = 1; $i -lt $c; $i++)
        {
            $DPart = [String]::Join(".", $Domain.DomainName.Split(".")[$i..$c])

            # If the string can be found in a previous domain, then it can be linked. 
            $PartMatchDomain = ($DomListInt[0..($DIndex - 1)] | Where { $_.DomainName -like "*$($DPart)" } | Select -First 1)
            if ($PartMatchDomain -ne $null)
            {
                $Domain.LinkedDomainIndex = $DomListInt.IndexOf($PartMatchDomain)
                $Domain.LinkedDomainStartIndex = $PartMatchDomain.DomainName.ToString().IndexOf($DPart)

                # Convert only the unique leading labels; the rest will become a pointer
                $Domain.HexArray += Convert-StringToOpt119Hex -SourceText ([String]::Join(".", $Domain.DomainName.Split(".")[0..($i - 1)]))

                $i = $c # Causes the loop to stop
                $Matched = $true
            }
        }

        # If not matched, then the entry needs including in full
        if ($Matched -eq $false)
        {
            $Domain.HexArray = Convert-StringToOpt119Hex -SourceText $Domain.DomainName -Pointer $Null
        }
    }

    # And finally, let's put it all together
    $HexOutput = @()
    ForEach ($Domain in $DomListInt)
    {
        $HexOutput += $Domain.HexArray

        # If no linked domain, then null terminate
        if ($Domain.LinkedDomainIndex -eq $null)
        {
            $HexOutput += "0x00"
            $Domain.HexLength = $Domain.HexArray.Count + 1
        }
        # If the linked domain index is 0, then the start point is simply the start index
        elseif ($Domain.LinkedDomainIndex -eq 0)
        {
            $HexOutput += "0xC0" # Compression Link
            $HexOutput += [String]::Format("0x{0:X2}" -f [int]$Domain.LinkedDomainStartIndex)
            $Domain.HexLength = $Domain.HexArray.Count + 2
        }
        # If the linked domain is not 0, then the start index needs to be calculated
        else
        {
            $HexOutput += "0xC0" # Compression Link
            $HexOutput += [String]::Format("0x{0:X2}" -f [int](($DomListInt[0..($Domain.LinkedDomainIndex - 1)] | Measure -Sum HexLength).Sum + $Domain.LinkedDomainStartIndex))
            $Domain.HexLength = $Domain.HexArray.Count + 2
        }
    }

    Write-Output $HexOutput
}


Posted in Networking, Powershell | 7 Comments

Assigning permissions to a volume through the NetApp PowerShell Toolkit

As part of my work to automate as much as I can, both to reduce time and increase consistency, I was looking for a way to assign permissions to a newly created volume which was providing a CIFS share through our NetApp FAS unit.

Normally I would use the Get/Set-Acl cmdlets that are provided through Windows, however this was not an option as the machine that is running the script to create the volume does not have access to the network on which the volume was going to be accessed. The script only has access to the management lif on the NetApp.

The DataOnTap PowerShell toolkit has a series of commands which can set file and directory permissions, however, the documentation isn’t the most clear, and working out the correct order and requirements is a little challenging. To view a list of all the commands related to setting file and directory security, enter the following into your PS session:

Get-Command *NcFileDirectorySecurity*

There are no equivalent commands (that I could quickly find) built into DataOnTap for 7-Mode – and as they don’t exist now, I don’t imagine they will be added.

The Script

This is the excerpt of script that I am running. I have broken down the script and explained each part below – however for those wishing to dive right in, here you go:

$VolName = "MyVolName"
$Vserver = "vs1"

# Create the ACL to apply. 
# ACL has some ACEs created by default, so after creation clear everything.
New-NcFileDirectorySecurityNtfs -SecurityDescriptor $VolName -VserverContext $Vserver
Get-NcFileDirectorySecurityNtfsDacl -SecurityDescriptor $VolName | Remove-NcFileDirectorySecurityNtfsDacl

# Add in the permissions that we want (can be DACL or SACL)
Add-NcFileDirectorySecurityNtfsDacl -Account "DOMAIN\Domain Admins" -AccessType allow -Rights full_control -NtfsSd $VolName -VserverContext $Vserver
Add-NcFileDirectorySecurityNtfsDacl -Account "DOMAIN\MyVolNameUsers" -AccessType allow -Rights modify -NtfsSd $VolName -VserverContext $Vserver

# Create a Policy Task to apply the permissions, and then apply them
Add-NcFileDirectorySecurityPolicyTask -Name $VolName -SecurityType ntfs -VserverContext $Vserver -Path "/$VolName" -NtfsSecurityDescriptor $VolName
Set-NcFileDirectorySecurity -Name $VolName -VserverContext $Vserver

# Sleep required else the policy tries to be removed when it is still in use
Start-Sleep -Seconds 5

# Cleanup the created objects. This does not remove the applied permissions
Remove-NcFileDirectorySecurityPolicy -Name $VolName -VserverContext $Vserver
Remove-NcFileDirectorySecurityNtfs -SecurityDescriptor $VolName -VserverContext $Vserver

Getting Information about Current Permissions

It is important to know what is already applied before making changes. The Get-NcFileDirectorySecurity cmdlet can be used to interrogate the permissions assigned to any object that the NetApp can see. Permissions can be looked at for any object in a NAS volume.

To show the permissions on $VolName that we are working with in this example run this command:

Get-NcFileDirectorySecurity -Path "/$VolName"

In this example it assumes that the volume is mounted to a junction path with the same name as the volume itself.


The most important part of this process is knowing the correct order to apply the settings with. Once you know the order, it becomes quite easy.

In the example here, I have already created a new volume, and the name of the volume is stored in the $VolName variable. The SVM/vServer that I am using is stored in $Vserver.

Like most of the CDot management, permissions are based on policies that are then applied to objects.

Step 1 – Create a Policy

New-NcFileDirectorySecurityNtfs -SecurityDescriptor $VolName -VserverContext $Vserver

In this, a new policy is created. The SecurityDescriptor parameter is actually a name – it does not have to be the volume name. I find it easier to call the policy something meaningful in case the script fails and requires cleanup.

To verify that your new policy is created run:

Get-NcFileDirectorySecurityNtfs -SecurityDescriptor $VolName

The SecurityDescriptor parameter can be omitted to get all the policies on the cluster.

Step 2 – Remove the default permissions assigned (optional)

When you look at the new policy that has been created, you should see that a default set of permissions has been created and assigned to the policy. This is a bit like Windows, and you get SYSTEM, CREATOR OWNER, etc. In my case, I was going to be stripping these out, so wanted to remove them. The simple way to do this:

Get-NcFileDirectorySecurityNtfsDacl -SecurityDescriptor $VolName | Remove-NcFileDirectorySecurityNtfsDacl

What this is doing, is pulling all of the Discretionary ACLs from the policy and then removing them.

There is no need to do this for the System ACLs (used for auditing), as no SACLs are created by default.

Step 3 – Add in your new DACLs and SACLs

This is like adding in permissions to your Windows ACL list. This can be repeated as many times as you want to create the permissions that you want to apply to the object.

Add-NcFileDirectorySecurityNtfsDacl -Account "MINTS\Domain Admins" -AccessType allow -Rights full_control -NtfsSd $VolName -VserverContext $Vserver

Confusingly, they change terminology in this cmdlet. The NtfsSd parameter is the same as SecurityDescriptor in the previous examples.

You can keep using the Get-NcFileDirectorySecurityNtfsDacl cmdlet from step 2 to check your work. The Add/Set/Remove commands all use the same syntax format, so you can modify the rules until you get them as you want.

Note: The NetApp Modify right is subtly different from Windows. NetApp Modify includes the Delete Subfolders and Files right, which Windows Modify does not. This shows as the user having a special permission when viewed through Windows.

Step 4 – Add a Policy Task

This step links the policy that you have now created to an object. Once again I have used the volume name that I am applying to as the Name of the policy, so it is clear what it applies to. You can choose whatever name you want though.

Add-NcFileDirectorySecurityPolicyTask -Name $VolName -SecurityType ntfs -VserverContext $Vserver -Path "/$VolName" -NtfsSecurityDescriptor $VolName

It is worth adding here that, up to this point, the policy you have created can be applied to any number of objects, and can even be used as a template for future use. In my case it is a one-off application. You could also schedule a job to reapply this policy at a given time if that is what your business requirements dictate.

Step 5 – Apply the Policy

Despite having linked the policy to an object, the permissions that you have defined have not yet been applied to your object. This last step is required to actually set the permission.

Set-NcFileDirectorySecurity -Name $VolName -VserverContext $Vserver

Step 6 – Cleanup

Once the policy and task have been used, unless you want to keep them for re-use on other objects or to reapply the permissions on a schedule, you can remove them. Now that the permissions have been applied to the object they no longer require the task or policy to remain.

These lines remove the Task and the Policy.

Remove-NcFileDirectorySecurityPolicy -Name $VolName -VserverContext $Vserver
Remove-NcFileDirectorySecurityNtfs -SecurityDescriptor $VolName -VserverContext $Vserver

If you are scripting this, then you may need to put a pause in here between steps 5 and 6. I found in testing that the removal attempted to happen before the Set above had actually completed properly, causing the Remove job to fail as the object it was removing was still in use by an active task. Unfortunately the Set task did not return a Job Id that could be used to track the status. There is probably a way of finding this, but I did not look into it.


Posted in NetApp, Powershell | Leave a comment

Using Add-Type in a PowerShell script that is run as a Scheduled Task

I like using objects in PowerShell; they make management and scripting easier, as you are dealing with named sets of information rather than having to find objects in numbered arrays or use dictionaries.

That means however that a lot of my scripts start off with a block that looks a little like this:

 $UserDefinition = @"
    public class MyCustomUser {
       public System.String UserId;
       public System.String FirstName;
       public System.String LastName;
       public System.String Source;
       public System.Int32 Priority;
    }
"@

 Add-Type -TypeDefinition $UserDefinition -Language CSharp
 $User = New-Object MyCustomUser
Nothing too difficult. I run a lot of scripts interactively, and this never fails (unless I get the syntax wrong inside the code block!)

However, running this exact same block of code as a Scheduled Task, with a specific domain user account failed – and the transcript showed this error:

New-Object : Cannot find type [MyCustomUser]: make sure the assembly containing this type is loaded.
At line:1 char:9
+ $User = New-Object MyCustomUser
+ ~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidType: (:) [New-Object], PSArgumentException
+ FullyQualifiedErrorId : TypeNotFound,Microsoft.PowerShell.Commands.NewObjectCommand

It turns out that when you run the above code, it generates a temporary file inside the user profile of the user that the task was configured to run as. When you run a script as a scheduled task, the task starts without waiting for the user profile to load. This means that when the Add-Type command is run, it fails. Interestingly, the command itself does not generate an error. It is only when the new object definition is attempted to be used that the command generates the error.

This behaviour is documented in a Microsoft KB article. Although the article does not specifically describe this error, the circumstances and effects are the same.

With this information I attempted to specify the path to generate the output from Add-Type as a parameter – as below:

Add-Type -TypeDefinition $UserDefinition -Language CSharp -OutputAssembly C:\MyScript\MyCustomUser.dll -OutputType Library

This makes no difference though, and the type still fails to be generated. The temporary files used to generate the output must still be in the profile location.

The only solution that I have made work is to pre-generate the output, and then use Add-Type pointed at the generated DLL file. So, in an interactive session I ran this:

Add-Type -TypeDefinition $UserDefinition -Language CSharp -OutputAssembly C:\MyScript\MyCustomUser.dll -OutputType Library

And then in the script in the scheduled task we changed the import to run this:

Add-Type -Path C:\MyScript\MyCustomUser.dll

The script now runs every time without error. The only annoyance is that if we ever change the definition of the object, we have to regenerate the DLL file. This would be a bigger issue if you were dynamically building custom types and then importing them at runtime.

Hopefully this saves someone else from wondering why their script breaks when run as a scheduled task.


Posted in Powershell, Server 2012 | 4 Comments

Using Connect-MsolService inside a Web Application

I provide a number of useful tools to our 1st and 2nd line teams through a web application – mainly because it is flexible, easy to update and centralised.

We are launching Office 365, as a side-by-side service. Not all of our users are entitled to it (through various reasons) so checking whether or not a user has been assigned a license or service quickly is important.

I made a simple script that I could put behind a web form. It took the username as input, connected to the Azure directory and then returned a pretty output of the licenses and the service plans that the user was assigned. Testing locally in the PowerShell ISE worked great.

However, after putting it up in the web application it started failing. Connect-MsolService would fail to establish a connection every time.

This is because the Connect-MsolService cmdlet uses the local user profile to cache information when it connects. If the profile is not available (which is the default setting when creating a web application) then the cmdlet fails to make a connection.

The solution is simple though. In the properties of the Application Pool that is hosting your application, change the setting of ‘Load User Profile’ to ‘true’.

To change this setting:

  1. Open the IIS Console, and select Application Pools from the pane on the left
  2. Select the application pool that is assigned to your web application (if you are unsure you can right click the pool and select View Applications)
  3. Select Advanced Settings
  4. Change the value of ‘Load User Profile’ to true and click Ok.
    IIS Application Pool Load User Profile setting
  5. Select Recycle on the Action pane to recycle the application pool.

Now your web application should be able to use the Connect-MsolService cmdlet.


Posted in Powershell | Leave a comment

WSUS: Copy Updates Between Groups

So, I am restructuring some WSUS groups to make them easier to report on, but I already had a large number of approvals on one group that I wanted to retain.

There are already a few interpretations of this on the web, but none that in my opinion are as slick as this, or provide quite the same level of functionality. Another PowerShell script, should be useful for someone.

Run the script below, then call it using the following syntax:

Copy-WsusGroupApprovals -WsusServerFqdn -SourceGroupName "OldServers" -TargetGroupName "NewServers"

You can optionally specify a port, the default being 8530. You can also specify to use a secure connection. The group names are both case sensitive though.

# ----------------------------------------------------------------------------------------------------------
# PURPOSE:    WSUS - Copy Approvals from one Group to another Group
# VERSION     DATE         USER                DETAILS
# 1           21/01/2016   Craig Tolley        First Version
# ----------------------------------------------------------------------------------------------------------

# Copies all approvals from the specified source group to the specified destination group. 
# Group names are case sensitive. 
# Unless specified the default WSUS port of 8530 will be used to connect. 
function Copy-WsusGroupApprovals
{
    param
    (
        [Parameter(Mandatory = $true)]
        [String]$WsusServerFqdn,

        [Int]$WsusServerPort = 8530,

        [Boolean]$WsusServerSecureConnect = $false,

        [Parameter(Mandatory = $true)]
        [String]$SourceGroupName,

        [Parameter(Mandatory = $true)]
        [String]$TargetGroupName
    )

    # Load the assembly required
    try
    {
        [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")
    }
    catch
    {
        Write-Error "Unable to load the Microsoft.UpdateServices.Administration assembly: $($_.Exception.Message)"
        return
    }

    # Attempt the connection to the WSUS Server
    try
    {
        $WsusServer = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer($WsusServerFqdn, $WsusServerSecureConnect, $WsusServerPort)
    }
    catch
    {
        Write-Error "Unable to connect to the WSUS Server: $($_.Exception.Message)"
        return
    }

    # Get all of the WSUS groups, and check that the specified source and target groups exist
    $Groups = $WsusServer.GetComputerTargetGroups()
    If ($Groups.Name -notcontains $SourceGroupName -or $Groups.Name -notcontains $TargetGroupName)
    {
        Write-Error "Source or target group name cannot be found in the list of groups on the WSUS Server. Group names are case sensitive. Please check your names."
        return
    }
    $SourceGroupObj = $Groups | Where {$_.Name -eq $SourceGroupName}
    $TargetGroupObj = $Groups | Where {$_.Name -eq $TargetGroupName}

    # Get all of the updates on the server
    Write-Progress -Activity "Getting details of all updates"
    $Updates = $WsusServer.GetUpdates()

    # Go through each of the updates. If the update has an approval for the source group,
    # then create an approval for the target group. 
    $i = 0
    $Approved = 0
    ForEach ($Update in $Updates)
    {
        $i++
        Write-Progress -Activity "Copying update approvals" -PercentComplete (($i/$Updates.Count)*100) -Status "$i of $($Updates.Count)"
        if ($Update.GetUpdateApprovals($SourceGroupObj).Count -ne 0 -and $Update.GetUpdateApprovals($TargetGroupObj).Count -eq 0)
        {
            Write-Host ("Approving {0} for {1}" -f $Update.Title, $TargetGroupObj.Name)
            $Update.Approve('Install', $TargetGroupObj) | Out-Null
            $Approved++
        }
    }
    Write-Progress -Activity "Copying update approvals" -Completed

    Write-Output ("Approved {0} updates for target group {1}" -f $Approved, $TargetGroupName)
}
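Stripped of the WSUS API, the approval-copying rule above is plain set arithmetic: approve for the target group whatever is approved for the source group and not yet approved for the target. A small Python sketch of just that rule, using made-up KB numbers:

```python
def copy_approvals(approvals: dict[str, set[str]], source: str, target: str) -> set[str]:
    """Return the updates that need approving for `target`:
    those approved for `source` but not yet approved for `target`."""
    return approvals.get(source, set()) - approvals.get(target, set())

# Hypothetical example data
approvals = {
    "OldServers": {"KB3134214", "KB3126587", "KB3135174"},
    "NewServers": {"KB3126587"},
}
print(sorted(copy_approvals(approvals, "OldServers", "NewServers")))  # ['KB3134214', 'KB3135174']
```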


Posted in Powershell | 3 Comments

Customising the NetScaler 11 User Interface – Adding Extra Content

I am finally getting round to playing with NetScaler 11, and working on a side-by-side migration from our working 10.5 installation. The configuration of all of the services, servers, monitors etc has all been pretty smooth sailing. However, the customisation of the user interface has been somewhat challenging.

On the one hand, Citrix has given us a simple GUI method of applying customisations to the various vservers. They have also allowed customisations to be applied to individual vservers – a blessing, as it allows simple customisation of different vservers without having to put in complex responder/rewrite rules.

Another advantage is the abstraction of the configuration (Nitro) UI from the public user interface. A few times when setting up our 10.5 installation I got something wrong and ended up accidentally breaking the admin UI. With the new mode, the admin UI is separate.

On the other hand, Citrix has taken away the immense flexibility that we had before. They may not have liked it, but you could customise any of the configuration files – index, scripts, CSS – and really go to town with your customisation. We now appear to be limited to specifying a handful of images and some CSS options. There is not even a way of specifying a ‘Help and Support’ URL or anything potentially useful for a user.

There is a solution though! I have been working on a way which adds in two new sections to the login page. These sections pull in information from two HTML files that are a part of the customisation. It may not be perfect – and does involve modifying a couple of files outside of the customisation folder. However, the flexibility offered by this solution is fairly wide.

Below is a simple example of what can be done. The text at the top is part of the header that I have added and the text with hyper-links at the bottom is part of the footer file that I have added.

There are four main steps to achieving this:

  1. Modify the script file which generates the login page to add in two new <div> sections
  2. Modify the rc.netscaler file to copy this updated script file to the correct location every time that the Netscaler boots
  3. Create a header.html and/or footer.html file in the customisation folder
  4. Make it look pretty through the use of the custom.css file in the customisation folder

Making the page pretty is what takes the most work. The rest of the work should take you around 15 minutes.

1. Modifying the Script

Using WinSCP or a similar tool, download a copy of this file from the Netscaler: /var/netscaler/gui/vpn/js/gateway_login_view.js

You can make a backup of the original file in the same folder. Files in this folder are not removed or updated when the Netscaler is rebooted.

Open the file up, and add the new lines shown below (the additions are marked with comments):

               //start header code
        var header_row1= $("<tr></tr>").attr("id","row1").append($("<td></td>").attr("class","header_left"));
        var header_row2 = $("<tr></tr>").attr("id","row2").append($("<td></td>").attr({"colspan":"2","class":"navbar"}));
        var header_table = $("<table></table>").attr("class","full_width").append(header_row1,header_row2);
        var logonbelt_topshadow= $("<div></div>").attr('id','logonbelt-topshadow');
        //end header code
        //generic logonbox markup:can be used on majority gateway pages
        var authentication = $("<div></div>").attr('id','authentication');

        var logonbox_container = $("<div></div>").attr('id','logonbox-container');
        var logonbelt_bottomshadow = $("<div></div>").attr('id','logonbelt-bottomshadow');

        var logonbox_innerbox = $("<div></div>").attr('id','logonbox-innerbox');

        // Add in a Header DIV if the header.html file can be found
        var headerfile = new XMLHttpRequest();
"GET", "../logon/themes/Default/header.html", false);
        headerfile.send();
        while (headerfile.readyState != 4) {sleep(10);};
        var logonbox_header = "";
        if (headerfile.status == 200) { logonbox_header = "<div id=logonbox-header>" + headerfile.responseText + "</div>" };

        var logonbox_logoarea = $("<div></div>").attr('id','logonbox-logoarea');
        var logonbox_logonform = $("<div></div>").attr({'id':'logonbox-logonform','class':'clearfix'});

        // Add in a Footer DIV if the footer.html file can be found
        var footerfile = new XMLHttpRequest();
"GET", "../logon/themes/Default/footer.html", false);
        footerfile.send();
        while (footerfile.readyState != 4) {sleep(10);};
        var logonbox_footer = "";
        if (footerfile.status == 200) { logonbox_footer = "<div id=logonbox-footer>" + footerfile.responseText + "</div>" };

        //logonbox_innerbox.append(logonbox_logoarea,logonbox_logonform); // Original Line
        logonbox_innerbox.append(logonbox_header,logonbox_logoarea,logonbox_logonform,logonbox_footer); // Modified line adding in the extra DIV

What these changes do is tell the logon page to look for a header and a footer HTML file in the current theme directory and, if it finds them, add their content into the display of the web page.

Leave the rest of the file as it is. Copy the file back to the same location on the Netscaler. Put a copy of the script in the live location by running the following command from the Netscaler shell:

cp /var/netscaler/gui/vpn/js/gateway_login_view.js /netscaler/ns_gui/vpn/js/gateway_login_view.js

2. Modify the rc.netscaler file to copy this file at every boot

By default the files in the /netscaler folder get re-set every time that the NetScaler boots. The rc.netscaler file is used to perform actions every time that the system is booted – and so we can use this to copy the script to the correct location every time. From the shell prompt run the following command

echo cp /var/netscaler/gui/vpn/js/gateway_login_view.js /netscaler/ns_gui/vpn/js/gateway_login_view.js >> /nsconfig/rc.netscaler

3. Create a header.html and/or footer.html file inside the customisation folder

Each of these files should contain raw html, without any headers or body tags. An example of the code in the files above is below:


<div style="float: left">
	<h1>Citrix Remote Access</h1>
</div>
<div style="float: right; height: inherit; width: 300px; background-size: cover; background-position: center center; background-repeat: no-repeat; background-image: url(../logon/themes/Default/custom_media/logo.png);"></div>


<table style="width:100%">
	<tr>
		<td align="center">
			<a class="plain input_labels form_text" href="" target="_new">For further information on how to access and use this service, please click here.</a>
		</td>
		<td align="center">
			<a class="plain input_labels form_text" href="" id="DownloadLinksFont" target="_new">Download The Latest Citrix Receiver For Your Client</a>
		</td>
	</tr>
</table>

Save the files with their respective names in the following location: /netscaler/logon/themes/<THEMENAME>/

If the files are not given the correct names, then the div will not be displayed on the login page. The names should be all lowercase, with a .html extension. The pages inherit the CSS that is already applied, so applying further style settings inside these HTML files can be counter-productive; there are enough style files applying to the login page already, and more could either be too much or be the simplest route to making the header and footer sections do exactly what you want.

If you want to reference other content, such as images, from within these files, then the paths that you enter need to be relative to the /vpn folder. The NetScaler, through some magic, always presents the current theme in the same location though, and the path to the root of your theme is "../logon/themes/Default/". As an example, if you wanted to add an image to your header file, and the image is saved in the same location as the header file, you could do so with the following:

<img src="../logon/themes/Default/header_image.jpg" alt="Header Image" />

The same path and logic applies for linked in script files or any other content. My only recommendation would be to keep the code that you create as lightweight as you possibly can. You do not want to be increasing the logon times for the page more than you need to.

4. Customise your login page

Using the custom.css file, you can now customise the entire page, including the display of the header and footer <div> tags that are included in the login page.
I have to be honest: getting to a reasonably pretty page may take some time. I am not a web developer, so I may not have been approaching this in the best way. I ended up using the Firefox developer tools to make changes to the live style sheet until I worked out exactly what settings and values I wanted, and then put those changes into the custom.css file.

You can make changes to the CSS file directly on the NetScaler, but you then have to be aware of caching taking place in browsers and on the NetScaler, which means that your changes may not be reflected instantly on the site.

In case it helps someone else out, I have included the changes that I made to the CSS file in order to get the result above. This may serve as a baseline to help you achieve your desired result.

.header {
    width: 100%;
}

/* This is the actual auth box in the centre, contains the header, form, and footer */
#logonbox-innerbox {
    background: #FFFFFF;
    display: block;
    border-radius: 15px;
    border-style: none;
    padding: 0px;
}

/* The new header div that we added. Curve the top corners and apply a background colour */
#logonbox-header {
    background-color: #422E5D;
    border-radius: 15px 15px 0px 0px;
    height: 100px;
    padding: 15px;
}

/* The new footer div */
#logonbox-footer {
    padding: 0px 10px 20px 10px;
}

/* The header we put inside our header div */
#logonbox-header h1 {
    font-family: citrixsans-light;
    font-weight: unset;
    font-size: 40px;
    color: #FFFFFF;
}

/* Actual logon form */
#logonbox-logonform {
    width: 80%;
    margin: auto;
    padding: 30px 47px 20px 20px;
}

/* I needed to make the titles of the form fields larger. Set these 2 */
#logonbox-logonform .plain.input_labels {
    width: 200px;
}
#logonbox-logonform .field .left {
    width: 200px;
}

#authentication {
    width: 900px; /* Set the overall width of the authentication dialog */
    margin: 0px auto; /* Make the auth box sit in the centre of the page */
}

That’s it. You should now have the ability to add much more content to your login pages, and customise that content on a theme by theme basis. Be patient when testing, as I found various caches kept sending me back the old versions of css and content files.

I get a blank page after making the changes

This occurred for me when I had incorrectly formatted HTML in either of my files. All HTML should be properly terminated.
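If you want to sanity-check your header.html and footer.html before uploading them, a small script can catch unbalanced tags. Below is a rough sketch using Python's built-in html.parser; the helper names and void-tag list are my own, not part of any Citrix tooling:

```python
from html.parser import HTMLParser

# Tags that never take a closing tag and so should not be tracked
VOID_TAGS = {"img", "br", "hr", "input", "meta", "link"}

class TagBalanceChecker(HTMLParser):
    """Tracks open tags on a stack and reports any left unclosed."""
    def __init__(self):
        super().__init__()
        self.stack = []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

def unclosed_tags(html_text):
    """Return the list of tags that were opened but never terminated."""
    checker = TagBalanceChecker()
    checker.feed(html_text)
    return checker.stack  # an empty list means every tag was closed

# A fragment like the header example above, with a missing </div>
print(unclosed_tags('<div style="float: left"><h1>Citrix Remote Access</h1>'))  # ['div']
```

Run it against the contents of each file; anything reported is a candidate for the blank-page symptom.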

Important Note about NetScaler Updates
When you upgrade the software on your Netscaler, the script file that you edited will be replaced. You will need to make the changes in step 1 again. Any files that you include as part of the theme file are retained though.


Posted in Citrix | 8 Comments

Maximum Number of PowerShell Parameter Sets in Function

I have been working on a module which includes a function that has many options for how it can be executed. Through the flexibility of Parameter Sets I have been able to define in detail all of the available options and use the built in validation to minimise the amount of variable checking that I need to do in the main code block of the function.

However, I appear to have hit a limitation on the number of distinct Parameter Sets that you can define. When I added my 33rd parameter set, the sets stopped being evaluated properly, and the Get-Help <function> -ShowWindow command showed some duplicate sets and only ever 32 combinations.

When I only have 32 parameter sets, everything works as it should, any more seems to break the functionality. This is using PowerShell 3.0.

I have not been able to find any documentation on the web to either confirm or deny this limitation. My best guess is that parameter set membership is tracked internally as a 32-bit bitmask, which would make 32 a hard ceiling, but I have not been able to verify that.


Posted in Powershell | Leave a comment

Citrix StoreFront ‘Group View’ – An Alternative to Folder View

We have a StoreFront set up in which User Subscriptions are disabled, so every application is subscribed for every user. Our applications are defined in folders in the XenDesktop site for ease of finding them. The folders were added after we discovered that everything was just chucked together in one big group in the StoreFront, which made finding applications difficult for users. This gave us a StoreFront that looked something like this when a user logged in:

StoreFront Folder View

Feedback from our users was that, although this layout was better, it would be even better if all applications were grouped but available from the home page. So, with a little JavaScript tweaking, we ended up with this:

StoreFront 'Group View'

Each group is shown as a level, and subfolders are shown as further nested levels.

Below are the details of how we did it. I must stress that this is not a supported Citrix change, but certainly worked for us.

All changes are made on the StoreFront server.

  1. Take a backup of the file C:\inetpub\wwwroot\Citrix\StoreName\scripts\Default.htm.script.min.js
  2. Open the file, run it through a JavaScript beautifier to convert it into something more readable, and paste the results back into the file.
  3. Search for the following function. In my file it was on line 7366

_generateItemsMarkup: function() {
    var b = this;
    var d = "";
    var c = b._getBranch(b.options.currentPath);
    for (var e in c.folders) {
        d += b._generateFolderHtml(e, c.folders[e])
    }
    a.each(c.apps, function(f, g) {
        d += b._generateAppHtml(g)
    });
    return d
}
  4. Replace it with this:

_generateItemsMarkup: function() {
    var b = this;
    var d = "";
    var c = b._getBranch(b.options.currentPath);
    a.each(c.apps, function(f, g) {
        d += b._generateAppHtml(g)
    });
    for (var e in c.folders) {
        d += b._listFolders(b.options.currentPath + '/' + e)
    }
    return d
},

_listFolders: function(y) {
    var b = this;
    var d = "";
    d += '<div id="app-directory-path"><div><ul>';
    d += b._generateBreadcrumbMarkup(y.substring(5).split("/"));
    d += '</ul></div></div>';
    var x = b._getBranch(y);
    d += '<div id="myapps-container">';
    a.each(x.apps, function(f, g) { d += b._generateAppHtml(g) });
    for (var f in x.folders) { d += b._listFolders(y + '/' + f) }
    d += '</div>';
    return d
}
  5. Save the updated file and copy it to all StoreFront servers in the deployment.

No reboot necessary. The change will take effect the next time the StoreFront is refreshed from a client.
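For anyone adapting this to a different front end, the logic of the replacement _listFolders function is just a depth-first walk that emits each folder as a heading followed by its apps, then recurses into subfolders. Here is a rough, language-agnostic sketch of the same flattening idea in Python; the dict structure and application names are made up for illustration and are not StoreFront's actual data model:

```python
def render_branch(name, branch, depth=0):
    """Depth-first render: folder heading, the folder's apps, then subfolders.

    branch is a dict: {"apps": [...], "folders": {name: branch, ...}}
    """
    lines = []
    if depth > 0:
        # Indent nested folder headings one level less than their apps
        lines.append("  " * (depth - 1) + "## " + name)
    for app in branch.get("apps", []):
        lines.append("  " * depth + "- " + app)
    for sub_name, sub_branch in branch.get("folders", {}).items():
        lines.extend(render_branch(sub_name, sub_branch, depth + 1))
    return lines

# A made-up application tree: some top-level apps plus nested folders
tree = {
    "apps": ["Desktop"],
    "folders": {
        "Office": {"apps": ["Word", "Excel"], "folders": {}},
        "Tools": {"apps": ["PuTTY"],
                  "folders": {"Admin": {"apps": ["MMC"], "folders": {}}}},
    },
}
print("\n".join(render_branch("", tree)))
```

The key design point, mirrored in the JavaScript above, is that every folder renders its own apps before recursing, so the whole tree ends up on one page as nested groups rather than click-through folders.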


Posted in Citrix | Leave a comment

Setting Up Kerberos NFS on NetApp Data OnTap 8.3 Cluster Mode

I have just been through the headaches of getting this set up and working, so I thought I would share a few notes and tips that I have come across on my way.

I am not saying that this is a complete setup guide, or that it contains every step that is needed to make the solution work. It is probably far from it. However, I do hope that it points someone else in the right direction.

It is worthwhile gaining an understanding of Kerberos and how it actually works. There are a couple of guides on Kerberos on the web; I found that this guide helped explain the process for me, though there are plenty of others.

There is a recent NetApp TR that covers this setup, and if you read it very carefully, it does contain all of the information that you should need to get this working. The problem with the TR is that it is very detailed and covers a wide range of setups. My advice is to print the document and read it at least twice, highlighting all of the parts that you believe are relevant to your setup. TR-4073 can be found here:

If you are coming at this having previously set up Kerberos on a DoT 8.2 or older system then you will notice that a lot of the Kerberos commands have moved, and I think nearly everything now resides within the nfs Kerberos context from the command line.

My Setup

  • Windows 2012 R2 domain controllers, running in Windows 2008 domain functional level
  • NetApp DataOnTap 8.3 Cluster Mode
  • Ubuntu 12.04 and Ubuntu 14.04 clients, which are already bound to the AD domain and can be logged on to using domain credentials
  • All devices on the same subnet, with no firewalls in place

The guide here, which uses AES 128 for the encryption mode, requires DoT 8.3: support for AES 128 and AES 256 encryption was added in this version. If you are using an older version, then you will need to use either DES or 3DES encryption, which will require modification of your domain controller and is not covered at all below.

I have not managed to get AES256 to work. Although all of the items in the key exchange supported it, the NetApp never managed to see the supplied Kerberos tickets as valid. As I was aiming for any improvement over DES, I was happy to settle for AES 128 and did not continue to spend time investigating the issues with AES256. If anyone happens to get it to work and would like to send me a pointer on what I have missed then it would be much appreciated.

So, on to the details:

  1. Setting Up the Domain Controller

No changes had to be made to the Windows DC. This is only because we were using AES encryption which Windows DCs have enabled by default. In this case the DC is also the authoritative DNS server for the domain with both forward and reverse lookup zones configured.

  2. Define a Kerberos Realm on the SVM

In 8.3, this can be completed in the nfs kerberos realm context at the command line. There is quite a bit of repetition in the definition of the server IP address here.

cluster::> nfs kerberos realm create -realm TEST.DOMAIN.CO.UK -vserver svm-nas -kdc-vendor Microsoft -kdc-ip -adserver-name -adserver-ip -adminserver-ip -passwordserver-ip

Verify that the realm is created

cluster::> nfs kerberos realm show

Kerberos                 Active Directory KDC       KDC
Vserver Realm                    Server           Vendor     IP Address
-------- ------------------------ ---------------- ---------- -----------------
  3. Bind the SVM interface to the Kerberos realm

Now we need to bind this SVM interface to the Kerberos realm. This will create an object in Active Directory for NFS. This object will contain the Service Principal Names for the SVM.

cluster::*> nfs kerberos interface enable -vserver svm-nas -lif svm-nas-data -spn nfs/

Once the command is run, open up Active Directory Users and Computers, look in the Computers container and check that a new computer object has been created. There should be an object with the name NFS-SVM-NAS.

You can also verify that the object has been created with the correct SPNs by querying the domain for the SPNs that are listed against an object. Run the following command from an elevated command prompt:

setspn.exe -L NFS-SVM-NAS

The command should return output similar to this.

C:\>setspn -L NFS-SVM-NAS
Registered ServicePrincipalNames for CN=NFS-SVM-NAS,CN=Computers,DC=test,DC=domain,DC=co,DC=uk:
  4. Restrict the accepted Encryption Types to just use AES on the SVM

If you are not making any changes to the Windows Domain Controller, then DES and 3DES encryption will not be supported by the domain controller. For tidiness I prefer to disable these options on the SVM so that nothing can even try to use them. Any clients that do would get an Access Denied error when trying to mount.

cluster::> nfs server modify -vserver * -permitted-enc-types aes-128, aes-256

This command will modify all SVMs on the cluster; alternatively, you could specify just the SVM that you wanted to modify.

  5. Setting up Kerberos – Unix Name Mapping

This setup will attempt to authenticate the machine using the machine SPN. This means that there needs to be a name-mapping to accept that connection and turn it into a username that is valid for authentication purposes for a volume. By the time that the name mapping kicks in, the authentication process has been completed. The name-mapping pattern uses regular expressions, which are always fun!

The name mapping rule should be as specific as you can possibly make it. This could be just your realm, or it could be part of the FQDN and the realm.

In my case, I have multiple FQDNs for clients, so the match I set up was based on matching the realm only.

cluster::*> vserver name-mapping create -vserver svm-nas -direction krb-unix -position 1 -pattern (.+)@TEST.DOMAIN.CO.UK -replacement nfs

The name mapping is applied per SVM. To see all of the mappings run:

cluster::*> vserver name-mapping show
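The krb-unix mapping is plain regular-expression matching, so you can test the pattern logic offline before putting it on the filer. Below is a rough sketch in Python; the SPN values are made up, and NetApp's regex flavour may differ slightly from Python's, so treat this only as a quick sanity check:

```python
import re

# The pattern from the name-mapping rule above, with the dots escaped
PATTERN = r"(.+)@TEST\.DOMAIN\.CO\.UK"
REPLACEMENT = "nfs"

def map_krb_to_unix(spn):
    """Return the mapped UNIX user, or None if the rule does not match."""
    if re.fullmatch(PATTERN, spn):
        return REPLACEMENT
    return None

print(map_krb_to_unix("nfs/client01.test.domain.co.uk@TEST.DOMAIN.CO.UK"))  # nfs
print(map_krb_to_unix("nfs/client01@OTHER.REALM"))                          # None
```

Note that in the rule as entered on the filer the dots in the realm are unescaped; they still match (a regex "." matches any character), but escaping them makes the intent explicit.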
  6. Setting up the NFS User account

A user needs to be created which corresponds with the name mapping rule that you have defined in the previous step. If no user is defined, then the mapping will work but access will still be denied. To create a user:

cluster::> vserver services name-service unix-user create -vserver svm-nas -user nfs -id 500 -primary-gid 0
  7. Verify that Forward and Reverse DNS Lookups are working

This is important to get right. Kerberos requires that all clients can successfully forward and reverse lookup the IP address. Check that, using your DNS server, you can perform an nslookup of the registered name of the SVM that you specified in step 3. Ping is not sufficient, as it can cache results and may not actually query the DNS server.

All clients will also need to have fully resolvable DNS entries. Verify that everything is being registered correctly and can be resolved. If there are any errors then they will need to be corrected before continuing as mounts will fail.
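If you want to script the forward/reverse check from a client, the standard library is enough. A rough sketch (the hostname below is a placeholder; substitute the SVM and client names from your environment):

```python
import socket

def dns_round_trip(hostname):
    """Forward-resolve a name, then reverse-resolve the resulting address.

    Returns (ip, reverse_name). A socket.gaierror (forward) or
    socket.herror (reverse) means the DNS record needs fixing before
    Kerberos mounts will work.
    """
    ip = socket.gethostbyname(hostname)            # forward (A record)
    reverse_name, _, _ = socket.gethostbyaddr(ip)  # reverse (PTR record)
    return ip, reverse_name

print(dns_round_trip("localhost"))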

  1. Check the configuration of the accepted and default ticket types in the Kerberos configuration on the client.

The clients need to know that they can use the AES128 encryption method, and also that this method takes a higher priority that other suites, such as ArcFour or DES. Check the entries that are listed in the /etc/krb5.conf file. The settings that I found to work for me have been included below. An important note is that with DoT 8.3, there is no longer a requirement to enable the Allow Weak Encryption option. AES is considered a strong encryption method.

    default_realm = TEST.DOMAIN.CO.UK
    ticket_lifetime = 7d
    default_tgs_enctypes = aes128-cts-hmac-sha1-96 arcfour-hmac-md5 aes256-cts-hmac-sha1-96 des-cbc-crc des-cbc-md5 des3-hmac-sha1
    default_tkt_enctypes = aes128-cts-hmac-sha1-96 arcfour-hmac-md5 aes256-cts-hmac-sha1-96 des-cbc-crc des-cbc-md5 des3-hmac-sha1
    permitted_enctypes = aes128-cts-hmac-sha1-96 arcfour-hmac-md5 aes256-cts-hmac-sha1-96 des-cbc-crc des-cbc-md5 des3-hmac-sha1
    dns_lookup_realm = true
    dns_lookup_kdc = true
    dns_fallback = true
    allow_weak_crypto = false

You will notice that AES128-CTS-HMAC-SHA1-96 has been brought to the front of the list. I did originally have the order as AES256/AES128/ArcFour, however this did not work. Dropping AES256 down the list enabled everything to work. I did not drop the AES256 entirely as other services are using Kerberos and are successfully using this encryption method.

After making changes to this file, you will need to restart the gssd service using the command

sudo service gssd restart
  1. Done!

At this point, with a heap of luck, you should be able to issue a mount command with the sec=krb5 option specified and have it work successfully.

If it hasn’t worked, then see the troubleshooting information below.


One of the biggest things that annoys me with articles such as this, is when you get to the end, they say it should work, and it doesn’t. You are left with a configuration that you have no idea if it is right, and no idea on how to fix. So here are a few places to look for information to solve any problems that you may hit.

This section is not exhaustive. There are probably many other tools that you could use to check out what is happening, but this is what I used to get me to the process above.

If it is not working, then there is plenty of information that you can obtain and filter through in order to determine the problem. When you have the information I often found that the problem could be identified reasonably easily.

When I hit an error, I tended to run all of these logs and then look through all of them.

  1. Netapp Filer SecD Trace

The secd module on the filer is responsible for the authentication and the name lookup. This information is useful when the filer is rejecting the credentials or if the spn is not able to be mapped to a valid user.

You first have to turn on the logging, then run your command, then turn it off.

cluster::> set diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
cluster::*> secd trace set -trace-all yes -node clusternode1

Run your mount command here

cluster::*> secd trace set -trace-all no -node clusternode1
cluster::*> event log show –source secd

If this logged an error, then the NetApp was involved in the process. These messages tended to be fairly clear and useful.

  1. Run mount with verbose mode turned on

On your Ubuntu machine, you can run the mount command in verbose mode to see what is happening.

sudo mount svm-nas:/nfs_volume /mnt/nfs_volume –o sec=krb5 –vvvv
  1. Run the RPC GSSD daemon in the foreground with verbose logging.

This is the client side daemon responsible for handling Kerberos requests. Getting the verbose output from this can show you what is being requested and whether it is valid or not. You will have to stop the gssd service first, and remember to restart the service when you are finished. You will have to run this in another terminal session as it is a blocking foreground process.

sudo service gssd stop
sudo rpc.gssd –vvvvf

Use Ctrl+C to break when finished.

sudo service gssd start
  1. Capture a tcp dump from the client side.

This allows you to look at the process from a network perspective and see what is actually being transmitted. It was through a network trace that I was able to see that the ordering of my encryption types was wrong.

sudo tcpdump –i eth0 –w /home/username/krb5tcpdump.trc

Again, this is a blocking foreground process so will need to be run in another terminal session. When you are finished the trace can be opened up in Wireshark. Specify a filter in Wireshark of the following to see only requests for your client

kerberos && ip.addr ==

Substitute the IP address for the address of your client.

When looking at the Kerberos packets, it is important to drill down, check that the sname fields, etype and any encryption settings are what you are expecting them to be. Encryption types in requests are listed in the order that they will be tried. If the first one succeeds against the AD, but is not accepted by the Netapp, then you will get access denied.

  1. Testing Name Mapping on the Netapp Cluster

A number of the errors that I was getting were related to problems with name resolution on the Netapp. These were shown clearly by using the secd trace in section a). You can test name mapping without going through the whole process of mounting directly from the Netapp.

Use the following command substituting in the SPN of the client that you want to test.

cluster::> set diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
cluster::*> secd name-mapping show -node clusternode1 -vserver svm-nas- -direction krb-unix -name maps to nfs


I doubt this post is exhaustive in covering this setup, but hopefully it is a pointer in the right direction and includes some useful information on troubleshooting.

If you have any suggestions on items that could be added to the troubleshooting, or information that you think is missing from the guide, please let me know and I can update.

Reference Materials

TR-4073 Secure Unified Authentication for NFS –

TR-4067 Clustered Data ONTAP NFS Best Practice and Implementation Guide –

Requirements for configuring Kerberos with NFS –

rpc.gssd(8) – Linux man page –

krb5.conf –

Encryption Type Selection in Kerberos Exchanges –

Kerberos NFSv4 How To –

4 people found this post useful.

Posted in NetApp | Leave a comment
  • Tags

  • Categories

  • My LinkedIn Profile

    To see my LinkedIn profile, click here:

    Craig Tolley
  • February 2017
    M T W T F S S
    « May    
  • Meta

  • Top Liked Posts

    Powered by WP Likes

Swedish Greys - a WordPress theme from Nordic Themepark.