WSUS: Copy Updates Between Groups

So, I am restructuring some WSUS groups to make them easier to report on, but I already had a large number of approvals on one group that I wanted to retain.

There are already a few interpretations of this on the web, but none that, in my opinion, are as slick as this or provide quite the same level of functionality. Another PowerShell script that should be useful for someone.

Run the script below, then call it using the following syntax:

Copy-WsusGroupApprovals -WsusServerFqdn <ServerFqdn> -SourceGroupName "OldServers" -TargetGroupName "NewServers"

You can optionally specify a port, the default being 8530. You can also specify to use a secure connection. The group names are both case sensitive though.

# ----------------------------------------------------------------------------------------------------------
# PURPOSE:    WSUS - Copy Approvals from one Group to another Group
# VERSION     DATE         USER                DETAILS
# 1           21/01/2016   Craig Tolley        First Version
# ----------------------------------------------------------------------------------------------------------

# Copies all approvals from the specified source group to the specified destination group. 
# Group names are case sensitive. 
# Unless specified the default WSUS port of 8530 will be used to connect. 
function Copy-WsusGroupApprovals
{
    param
    (
        [Parameter(Mandatory = $true)]
        [String]$WsusServerFqdn,

        [Int]$WsusServerPort = 8530,

        [Boolean]$WsusServerSecureConnect = $false,

        [Parameter(Mandatory = $true)]
        [String]$SourceGroupName,

        [Parameter(Mandatory = $true)]
        [String]$TargetGroupName
    )

    # Load the assembly required
    try {
        [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")
    }
    catch {
        Write-Error "Unable to load the Microsoft.UpdateServices.Administration assembly: $($_.Exception.Message)"
        return
    }

    # Attempt the connection to the WSUS Server
    try {
        $WsusServer = [Microsoft.UpdateServices.Administration.AdminProxy]::getUpdateServer($WsusServerFqdn, $WsusServerSecureConnect, $WsusServerPort)
    }
    catch {
        Write-Error "Unable to connect to the WSUS Server: $($_.Exception.Message)"
        return
    }

    # Get all of the WSUS groups, and check that the specified source and target groups exist
    $Groups = $WsusServer.GetComputerTargetGroups()
    If ($Groups.Name -notcontains $SourceGroupName -or $Groups.Name -notcontains $TargetGroupName)
    {
        Write-Error "Source or target group name cannot be found in the list of groups on the WSUS server. Group names are case sensitive. Please check your names."
        return
    }
    $SourceGroupObj = $Groups | Where-Object { $_.Name -eq $SourceGroupName }
    $TargetGroupObj = $Groups | Where-Object { $_.Name -eq $TargetGroupName }

    # Get all of the updates on the server
    Write-Progress -Activity "Getting details of all updates"
    $Updates = $WsusServer.GetUpdates()

    # Go through each update. If it is approved for the source group but not yet for the target group, approve it for the target group.
    $i = 0
    $Approved = 0
    ForEach ($Update in $Updates)
    {
        $i ++
        Write-Progress -Activity "Copying update approvals" -PercentComplete (($i/$($Updates.Count))*100) -Status "$i of $($Updates.Count)"
        if ($Update.GetUpdateApprovals($SourceGroupObj).Count -ne 0 -and $Update.GetUpdateApprovals($TargetGroupObj).Count -eq 0)
        {
            Write-Host ("Approving {0} for {1}" -f $Update.Title, $TargetGroupObj.Name)
            $Update.Approve('Install', $TargetGroupObj) | Out-Null
            $Approved ++
        }
    }
    Write-Progress -Activity "Copying update approvals" -Completed

    Write-Output ("Approved {0} updates for target group {1}" -f $Approved, $TargetGroupName)
}


Customising the NetScaler 11 User Interface – Adding Extra Content

I am finally getting round to playing with NetScaler 11, and working on a side-by-side migration from our working 10.5 installation. The configuration of all of the services, servers, monitors etc. has been pretty smooth sailing. However, the customisation of the user interface has been somewhat challenging.

On the one hand, Citrix have given us a simple GUI method of applying customisations to the various vservers. They have also allowed customisations to be applied to individual vservers – a blessing, as it allows simple customisation of different vservers without having to put in complex responder/rewrite rules.

Another advantage is the abstraction of the configuration (Nitro) UI from the public user interface. A few times when setting up our 10.5 installation I got something wrong and ended up accidentally breaking the admin UI. With the new mode, the admin UI is separate.

On the other hand, Citrix has taken away the immense flexibility that we had before. They may not have liked it, but you could customise any of the configuration files – index, scripts, css, and really go to town with your customisation. We appear to now be limited to specifying a handful of images and some CSS options. Not even a way of specifying a ‘Help and Support’ URL or anything potentially useful for a user.

There is a solution though! I have been working on a way which adds in two new sections to the login page. These sections pull in information from two HTML files that are a part of the customisation. It may not be perfect – and does involve modifying a couple of files outside of the customisation folder. However, the flexibility offered by this solution is fairly wide.

Below is a simple example of what can be done. The text at the top is part of the header that I have added and the text with hyper-links at the bottom is part of the footer file that I have added.

There are four main steps to achieving this:

  1. Modify the script file which generates the login page to add in two new <div> sections
  2. Modify the rc.netscaler file to copy this updated script file to the correct location every time that the Netscaler boots
  3. Create a header.html and/or footer.html file in the customisation folder
  4. Make it look pretty through the use of the custom.css file in the customisation folder

Making the page pretty is what takes the most work. The rest of the work should take you around 15 minutes.

1. Modifying the Script

Using WinSCP or a similar tool, download a copy of this file from the Netscaler: /var/netscaler/gui/vpn/js/gateway_login_view.js

You can make a backup of the original file in the same folder. Files in this folder are not removed or updated when the Netscaler is rebooted.

Open the file up, and add the new sections shown in the listing below (the commented header and footer blocks, and the modified append line at the end):

        //start header code
        var header_row1= $("<tr></tr>").attr("id","row1").append($("<td></td>").attr("class","header_left"));
        var header_row2 = $("<tr></tr>").attr("id","row2").append($("<td></td>").attr({"colspan":"2","class":"navbar"}));
        var header_table = $("<table></table>").attr("class","full_width").append(header_row1,header_row2);
        var logonbelt_topshadow= $("<div></div>").attr('id','logonbelt-topshadow');
        //end header code
        //generic logonbox markup:can be used on majority gateway pages
        var authentication = $("<div></div>").attr('id','authentication');

        var logonbox_container = $("<div></div>").attr('id','logonbox-container');
        var logonbelt_bottomshadow = $("<div></div>").attr('id','logonbelt-bottomshadow');

        var logonbox_innerbox = $("<div></div>").attr('id','logonbox-innerbox');

        // Add in a Header DIV if the header.html file can be found
        var headerfile = new XMLHttpRequest();
        headerfile.open('GET', "../logon/themes/Default/header.html", false);
        headerfile.send();
        while (headerfile.readyState != 4) { sleep(10); };
        var logonbox_header = "";
        if (headerfile.status == 200) { logonbox_header = "<div id=logonbox-header>" + headerfile.responseText + "</div>" };

        var logonbox_logoarea = $("<div></div>").attr('id','logonbox-logoarea');
        var logonbox_logonform = $("<div></div>").attr({'id':'logonbox-logonform','class':'clearfix'});

        // Add in a Footer DIV if the footer.html file can be found
        var footerfile = new XMLHttpRequest();
        footerfile.open('GET', "../logon/themes/Default/footer.html", false);
        footerfile.send();
        while (footerfile.readyState != 4) { sleep(10); };
        var logonbox_footer = "";
        if (footerfile.status == 200) { logonbox_footer = "<div id=logonbox-footer>" + footerfile.responseText + "</div>" };

        //logonbox_innerbox.append(logonbox_logoarea,logonbox_logonform); // Original Line
        logonbox_innerbox.append(logonbox_header,logonbox_logoarea,logonbox_logonform,logonbox_footer); // Modified line adding in the extra DIV

What these changes do is tell the logon page to look for a header and footer HTML file in the current theme directory and, if it finds them, add their content into the display of the web page.

Leave the rest of the file as it is. Copy the file back to the same location on the Netscaler. Put a copy of the script in the live location by running the following command from the Netscaler shell:

cp /var/netscaler/gui/vpn/js/gateway_login_view.js /netscaler/ns_gui/vpn/js/gateway_login_view.js

2. Modify the rc.netscaler file to copy this file at every boot

By default the files in the /netscaler folder get re-set every time that the NetScaler boots. The rc.netscaler file is used to perform actions every time that the system is booted – and so we can use this to copy the script to the correct location every time. From the shell prompt run the following command

echo cp /var/netscaler/gui/vpn/js/gateway_login_view.js /netscaler/ns_gui/vpn/js/gateway_login_view.js >> /nsconfig/rc.netscaler
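One caveat with the echo approach: it appends unconditionally, so re-running it (say, after revisiting the customisation) leaves duplicate cp lines in rc.netscaler. Below is a sketch of an idempotent variant; it is demonstrated against a temporary file so it can be tried off-box, with the real target being /nsconfig/rc.netscaler:

```shell
# Stand-in for /nsconfig/rc.netscaler so the sketch can run anywhere
RC=$(mktemp)
LINE='cp /var/netscaler/gui/vpn/js/gateway_login_view.js /netscaler/ns_gui/vpn/js/gateway_login_view.js'

# Append only when an identical line is not already present
# (-x matches the whole line, -F treats the pattern as a fixed string)
grep -qxF "$LINE" "$RC" || echo "$LINE" >> "$RC"
grep -qxF "$LINE" "$RC" || echo "$LINE" >> "$RC"   # second run is a no-op

COUNT=$(grep -cxF "$LINE" "$RC")
echo "$COUNT"   # 1, not 2
```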

3. Create a header.html and/or footer.html file inside the customisation folder

Each of these files should contain raw html, without any headers or body tags. An example of the code in the files above is below:


<div style="float: left">
	<h1>Citrix Remote Access</h1>
</div>
<div style="float: right; height: inherit; width: 300px; background-size: cover; background-position: center center; background-repeat: no-repeat; background-image: url(../logon/themes/Default/custom_media/logo.png);"></div>


<table style="width:100%">
	<tr>
		<td align="center">
			<a class="plain input_labels form_text" href="" target="_new">For further information on how to access and use this service, please click here.</a>
		</td>
		<td align="center">
			<a class="plain input_labels form_text" href="" id="DownloadLinksFont" target="_new">Download The Latest Citrix Receiver For Your Client</a>
		</td>
	</tr>
</table>

Save the files with their respective names in the following location: /netscaler/logon/themes/<THEMENAME>/

If the files are not given the correct names, the div will not be displayed on the login page. The names should be all lowercase, with a .html extension. The pages inherit the CSS that is already applied to the login page, so applying further style settings inside these HTML files can be counter-productive. There are enough style files already applying to the login page; this inheritance can either get in your way or be the simple route to making the header and footer sections do exactly what you want.

If you want to reference other content, such as images, from within these files, then the paths that you enter need to be relative to the /vpn folder. The Netscaler, through some magic, always presents the current theme in the same location though, and the path to the root of your theme is "../logon/themes/Default/". As an example, if you wanted to add an image to your header file and the image is saved in the same location as the header file, you could do so with the following:

<img src="../logon/themes/Default/header_image.jpg" alt="Header Image" />

The same path and logic applies for linked in script files or any other content. My only recommendation would be to keep the code that you create as lightweight as you possibly can. You do not want to be increasing the logon times for the page more than you need to.

4. Customise your login page

Using the custom.css file, you can now customise the entire page, including the display of the header and footer <div> tags that are included in the login page.
I have to be honest: getting to a reasonably pretty page may take some time. I am not a web developer, so I may not have been approaching this in the best way. I ended up using the Firefox developer tools to make changes to the live style sheet until I worked out exactly what settings and values I wanted. I then put my changes into the custom.css file.

You can make changes to the CSS file directly on the Netscaler, but you then have to be aware of the caching taking place in browsers and on the Netscaler, which means that your changes may not be reflected instantly on the site.

In case it helps someone else out, I have included the changes that I made to the CSS file in order to get the result above. This may serve as a baseline to help you achieve your desired result.

.header {
    width: 100%;
}

/* This is the actual auth box in the centre, contains the header, form, and footer */
#logonbox-innerbox {
    background: #FFFFFF;
    display: block;
    border-radius: 15px;
    border-style: none;
    padding: 0px;
}

/* The new header div that we added. Curve the top corners and apply a background colour */
#logonbox-header {
    background-color: #422E5D;
    border-radius: 15px 15px 0px 0px;
    height: 100px;
    padding: 15px;
}

/* The new footer div */
#logonbox-footer {
    padding: 0px 10px 20px 10px;
}

/* The header we put inside our header div */
#logonbox-header h1 {
    font-family: citrixsans-light;
    font-weight: unset;
    font-size: 40px;
    color: #FFFFFF;
}

/* Actual logon form */
#logonbox-logonform {
    width: 80%;
    margin: auto;
    padding: 30px 47px 20px 20px;
}

/* I needed to make the titles of the form fields larger. Set these 2 */
#logonbox-logonform .plain.input_labels {
    width: 200px;
}
#logonbox-logonform .field .left {
    width: 200px;
}

#authentication {
    width: 900px; /* Set the overall width of the authentication dialog */
    margin: 0px auto; /* Make the auth box sit in the centre of the page */
}

That’s it. You should now have the ability to add much more content to your login pages, and customise that content on a theme by theme basis. Be patient when testing, as I found various caches kept sending me back the old versions of css and content files.

I get a blank page after making the changes

This occurred for me when I had incorrectly formatted HTML in either of my files. All HTML should be properly terminated.
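If you want a quick check before uploading a header or footer file, comparing opening and closing tag counts catches the most common mistake (an unterminated <div>). This is only a rough sketch using shell tools and will not catch every malformed construct:

```shell
# The fragment to check; in practice read it from header.html or footer.html
frag='<div style="float: left"><h1>Citrix Remote Access</h1></div>'

# Count <div openers and </div> closers; a well-terminated fragment has equal numbers
opens=$(printf '%s' "$frag" | grep -o '<div' | wc -l)
closes=$(printf '%s' "$frag" | grep -o '</div>' | wc -l)

if [ "$opens" -eq "$closes" ]; then
    echo "divs balanced"
else
    echo "unbalanced: $opens opening vs $closes closing"
fi
```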

Important Note about NetScaler Updates
When you upgrade the software on your Netscaler, the script file that you edited will be replaced. You will need to make the changes in step 1 again. Any files that you include as part of the theme file are retained though.


Maximum Number of PowerShell Parameter Sets in Function

I have been working on a module which includes a function that has many options for how it can be executed. Through the flexibility of Parameter Sets I have been able to define in detail all of the available options and use the built in validation to minimise the amount of variable checking that I need to do in the main code block of the function.

However, I appear to have hit a limitation with regards to the number of distinct Parameter Sets that you can define. When I added my 33rd parameter set, the sets stopped being evaluated properly, and the Get-Help <FunctionName> -ShowWindow command showed some duplicate sets and only ever 32 combinations.

When I only have 32 parameter sets, everything works as it should; any more seems to break the functionality. This is using PowerShell 3.0.

I have not been able to find any documentation on the web to either confirm or deny this limitation.


Citrix StoreFront ‘Group View’ – An Alternative to Folder View

We have a StoreFront set up, in which User Subscriptions are disabled and so every application is subscribed to a user. Our applications are defined in folders in the XenDesktop site for ease of finding the applications. The folders were added after we discovered that everything was just chucked together in one big group in the StoreFront – which made finding applications for users difficult. This gave us a StoreFront that looked something like this when a user logged in

StoreFront Folder View

Feedback from our users was that, although this layout was better, it would be even better if all applications were grouped but available from the home page. So, with a little JavaScript tweaking, we ended up with this:

StoreFront 'Group View'

Each group is shown as a level, and subfolders are shown as further nested levels.

Below are the details of how we did it. I must stress that this is not a supported Citrix change, but certainly worked for us.

All changes are made on the StoreFront server.

  1. Take a backup of the file C:\inetpub\wwwroot\Citrix\StoreName\scripts\Default.htm.script.min.js
  2. Open the file, and use a JavaScript beautifier to convert it into something more readable, then paste the results back into the file.
  3. Search for the following function. In my file it was on line 7366

_generateItemsMarkup: function() {
    var b = this;
    var d = "";
    var c = b._getBranch(b.options.currentPath);
    for (var e in c.folders) {
        d += b._generateFolderHtml(e, c.folders[e])
    }
    a.each(c.apps, function(f, g) {
        d += b._generateAppHtml(g)
    });
    return d
},
  4. Replace it with this:

_generateItemsMarkup: function() {
    var b = this;
    var d = "";
    var c = b._getBranch(b.options.currentPath);
    a.each(c.apps, function(f, g) {
        d += b._generateAppHtml(g)
    });
    for (var e in c.folders) {
        d += b._listFolders(b.options.currentPath + '/' + e)
    }
    return d
},

_listFolders: function(y) {
    var b = this;
    var d = "";
    d += '<div id="app-directory-path"><div><ul>'
    d += b._generateBreadcrumbMarkup(y.substring(5).split("/"));
    d += '</ul></div></div>'
    var x = b._getBranch(y);
    d += '<div id="myapps-container">'
    a.each(x.apps, function(f, g) { d += b._generateAppHtml(g) });
    for (var f in x.folders) { d += b._listFolders(y + '/' + f) }
    d += '</div>'
    return d
},
  5. Save the updated file and copy it to all StoreFront servers in the deployment.

No reboot is necessary. The change will take effect the next time that the StoreFront is refreshed from a client.


Setting Up Kerberos NFS on NetApp Data OnTap 8.3 Cluster Mode

I have just been through the headaches of getting this set up and working, so I thought I would share a few notes and tips that I have come across on my way.

I am not saying that this is a complete setup guide, or that it contains every step needed to make the solution work. It is probably far from it. However, I do hope that it points someone else in the right direction.

It is worthwhile gaining an understanding of Kerberos and how it actually works. There are a couple of guides on Kerberos on the web. I found this guide helped explain the process for me. There are plenty of others though.

There is a recent NetApp TR that covers this setup, and if you read it very carefully, then it does contain all of the information that you should need to get this working. The problem with the TR is that it is very detailed and covers a wide range of setups. My advice is to print the document, and read it at least twice highlighting all of the parts that you believe are relevant to your setup. TR-4073 can be found here:

If you are coming at this having previously set up Kerberos on a DoT 8.2 or older system then you will notice that a lot of the Kerberos commands have moved, and I think nearly everything now resides within the nfs Kerberos context from the command line.

My Setup

  • Windows 2012 R2 domain controllers, running in Windows 2008 domain functional level
  • NetApp DataOnTap 8.3 Cluster Mode
  • Ubuntu 12.04 and Ubuntu 14.04 clients, which are already bound to the AD domain and can be logged on to using domain credentials
  • All devices on the same subnet, with no firewalls in place

The guide here, which uses AES 128 for the encryption mode, requires DoT 8.3. Support for AES 128 and AES 256 encryption was added in this version. If you are using an older version then you will need to use either DES or 3DES encryption, which will require modification of your domain controller and is not covered at all below.

I have not managed to get AES256 to work. Although all of the items in the key exchange supported it, the NetApp never managed to see the supplied Kerberos tickets as valid. As I was aiming for any improvement over DES, I was happy to settle for AES 128 and did not continue to spend time investigating the issues with AES256. If anyone happens to get it to work and would like to send me a pointer on what I have missed then it would be much appreciated.

So, on to the details:

  1. Setting Up the Domain Controller

No changes had to be made to the Windows DC. This is only because we were using AES encryption which Windows DCs have enabled by default. In this case the DC is also the authoritative DNS server for the domain with both forward and reverse lookup zones configured.

  2. Define a Kerberos Realm on the SVM

In 8.3, this can be completed in the nfs Kerberos realm context at the command line. Quite a bit of repetition in the definition of the server IP address here.

cluster::> nfs kerberos realm create -realm TEST.DOMAIN.CO.UK -vserver svm-nas -kdc-vendor Microsoft -kdc-ip <dc-ip> -adserver-name <dc-name> -adserver-ip <dc-ip> -adminserver-ip <dc-ip> -passwordserver-ip <dc-ip>

Verify that the realm is created

cluster::> nfs kerberos realm show

Kerberos                 Active Directory KDC       KDC
Vserver Realm                    Server           Vendor     IP Address
-------- ------------------------ ---------------- ---------- -----------------
  3. Bind the SVM interface to the Kerberos realm

Now we need to bind this SVM interface to the Kerberos realm. This will create an object in Active Directory for NFS. This object will contain the Service Principal Names for the SVM.

cluster::*> nfs kerberos interface enable -vserver svm-nas -lif svm-nas-data -spn nfs/

Once the command is run, open up Active Directory Users and Computers, look in the Computers container and check that a new computer object has been created. There should be an object with the name NFS-SVM-NAS.

You can also verify that the object has been created with the correct SPNs by querying the domain for the SPNs that are listed against an object. Run the following command from an elevated command prompt:

Setspn.exe -L NFS-SVM-NAS

The command should return output similar to this.

C:\>setspn -L NFS-SVM-NAS
Registered ServicePrincipalNames for CN=NFS-SVM-NAS,CN=Computers,DC=test,DC=domain,DC=co,DC=uk:
  4. Restrict the accepted Encryption Types to just use AES on the SVM

If you are not making any changes to the Windows Domain Controller, then DES and 3DES encryption will not be supported by the domain controller. For tidiness I prefer to disable these options on the SVM so that nothing can even try to use them. Any clients that do would get an Access Denied error when trying to mount.

cluster::> nfs server modify -vserver * -permitted-enc-types aes-128,aes-256

This command will modify all SVMs on the cluster; alternatively, you can specify a single SVM to modify.

  5. Setting up Kerberos – Unix Name Mapping

This setup will attempt to authenticate the machine using the machine SPN. This means that there needs to be a name-mapping to accept that connection and turn it into a username that is valid for authentication purposes for a volume. By the time that the name mapping kicks in, the authentication process has been completed. The name-mapping pattern uses regular expressions, which are always fun!

The name mapping rule should be as specific as you can possibly make it. This could be just your realm, or it could be part of the FQDN plus the realm.

In my case, I have multiple FQDN’s for clients, so the match I set up was based on matching the realm only.

cluster::*> vserver name-mapping create -vserver svm-nas -direction krb-unix -position 1 -pattern (.+)@TEST.DOMAIN.CO.UK -replacement nfs

The name mapping is applied per SVM. To see all of the mappings run:

cluster::*> vserver name-mapping show
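Since the krb-unix pattern is an ordinary regular expression, it is worth sanity-checking what it matches before creating the rule. The SPN below is a made-up example client principal; grep -E understands a comparable regex syntax to the pattern field:

```shell
# The pattern from the name-mapping rule above
PATTERN='(.+)@TEST.DOMAIN.CO.UK'

# A hypothetical machine credential as it would arrive from a Kerberos mount
SPN='nfs/client01.test.domain.co.uk@TEST.DOMAIN.CO.UK'

if printf '%s' "$SPN" | grep -Eq "$PATTERN"; then
    MATCHED=yes
    echo "pattern matches - this SPN would be mapped to the nfs user"
else
    MATCHED=no
fi
```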
  6. Setting up the NFS User account

A user needs to be created which corresponds with the name mapping rule that you have defined in the previous step. If no user is defined, then the mapping will work but access will still be denied. To create a user:

cluster::>vserver services name-service unix-user create -vserver svm-nas -user nfs -id 500 -primary-gid 0
  7. Verify that Forward and Reverse DNS Lookups are working

This is important to get right. Kerberos requires that all clients can successfully perform forward and reverse lookups of the IP address. Check that, using your DNS server, you can perform an nslookup of the registered name of the SVM that you specified in step 3. Ping is not sufficient, as it can cache results and may not actually query the DNS server.

All clients will also need to have fully resolvable DNS entries. Verify that everything is being registered correctly and can be resolved. If there are any errors then they will need to be corrected before continuing as mounts will fail.
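A scriptable sanity check is sketched below. Note that getent consults nsswitch (including /etc/hosts), so for the authoritative answer still use nslookup against the DNS server as described above; localhost is used here only so the sketch runs anywhere. Substitute the registered name of the SVM LIF:

```shell
# NAME should be the registered DNS name of the SVM LIF; localhost is a stand-in
NAME=localhost

# Forward lookup: name to address
IP=$(getent hosts "$NAME" | awk '{print $1; exit}')
echo "forward: $NAME -> $IP"

# Reverse lookup: address back to a name
if getent hosts "$IP" > /dev/null; then
    REVERSE=ok
    echo "reverse lookup OK"
else
    REVERSE=failed
fi
```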

  8. Check the configuration of the accepted and default ticket types in the Kerberos configuration on the client.

The clients need to know that they can use the AES 128 encryption method, and also that this method takes a higher priority than other suites, such as ArcFour or DES. Check the entries that are listed in the /etc/krb5.conf file. The settings that I found to work for me are included below. An important note is that with DoT 8.3 there is no longer a requirement to enable the Allow Weak Encryption option; AES is considered a strong encryption method.

    default_realm = TEST.DOMAIN.CO.UK
    ticket_lifetime = 7d
    default_tgs_enctypes = aes128-cts-hmac-sha1-96 arcfour-hmac-md5 aes256-cts-hmac-sha1-96 des-cbc-crc des-cbc-md5 des3-hmac-sha1
    default_tkt_enctypes = aes128-cts-hmac-sha1-96 arcfour-hmac-md5 aes256-cts-hmac-sha1-96 des-cbc-crc des-cbc-md5 des3-hmac-sha1
    permitted_enctypes = aes128-cts-hmac-sha1-96 arcfour-hmac-md5 aes256-cts-hmac-sha1-96 des-cbc-crc des-cbc-md5 des3-hmac-sha1
    dns_lookup_realm = true
    dns_lookup_kdc = true
    dns_fallback = true
    allow_weak_crypto = false

You will notice that aes128-cts-hmac-sha1-96 has been brought to the front of the list. I did originally have the order as AES256/AES128/ArcFour; however, this did not work. Dropping AES256 down the list enabled everything to work. I did not drop the AES256 entry entirely, as other services are using Kerberos and are successfully using this encryption method.

After making changes to this file, you will need to restart the gssd service using the command

sudo service gssd restart
  9. Done!

At this point, with a heap of luck, you should be able to issue a mount command with the sec=krb5 option specified and have it work successfully.

If it hasn’t worked, then see the troubleshooting information below.


Troubleshooting

One of the biggest things that annoys me with articles such as this is when you get to the end, they say it should work, and it doesn't. You are left with a configuration that you have no idea whether it is right, and no idea how to fix. So here are a few places to look for information to solve any problems that you may hit.

This section is not exhaustive. There are probably many other tools that you could use to check out what is happening, but this is what I used to get me to the process above.

If it is not working, then there is plenty of information that you can obtain and filter through in order to determine the problem. Once you have the information, I often found that the problem could be identified reasonably easily.

When I hit an error, I tended to gather all of these logs and then look through all of them.

  a) Netapp Filer SecD Trace

The secd module on the filer is responsible for the authentication and the name lookup. This information is useful when the filer is rejecting the credentials or when the SPN cannot be mapped to a valid user.

You first have to turn on the logging, then run your command, then turn it off.

cluster::> set diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
cluster::*> secd trace set -trace-all yes -node clusternode1

Run your mount command here

cluster::*> secd trace set -trace-all no -node clusternode1
cluster::*> event log show -source secd

If this logged an error, then the NetApp was involved in the process. These messages tended to be fairly clear and useful.

  b) Run mount with verbose mode turned on

On your Ubuntu machine, you can run the mount command in verbose mode to see what is happening.

sudo mount svm-nas:/nfs_volume /mnt/nfs_volume -o sec=krb5 -vvvv
  c) Run the RPC GSSD daemon in the foreground with verbose logging.

This is the client side daemon responsible for handling Kerberos requests. Getting the verbose output from this can show you what is being requested and whether it is valid or not. You will have to stop the gssd service first, and remember to restart the service when you are finished. You will have to run this in another terminal session as it is a blocking foreground process.

sudo service gssd stop
sudo rpc.gssd -vvvvf

Use Ctrl+C to break when finished.

sudo service gssd start
  d) Capture a tcp dump from the client side.

This allows you to look at the process from a network perspective and see what is actually being transmitted. It was through a network trace that I was able to see that the ordering of my encryption types was wrong.

sudo tcpdump -i eth0 -w /home/username/krb5tcpdump.trc

Again, this is a blocking foreground process so will need to be run in another terminal session. When you are finished the trace can be opened up in Wireshark. Specify a filter in Wireshark of the following to see only requests for your client

kerberos && ip.addr == x.x.x.x

Substitute the IP address for the address of your client.

When looking at the Kerberos packets, it is important to drill down and check that the sname fields, etype and any encryption settings are what you expect them to be. Encryption types in requests are listed in the order in which they will be tried. If the first one succeeds against the AD, but is not accepted by the Netapp, then you will get access denied.

  e) Testing Name Mapping on the Netapp Cluster

A number of the errors that I was getting were related to problems with name resolution on the Netapp. These were shown clearly by using the secd trace in section a). You can test name mapping directly from the Netapp, without going through the whole process of mounting.

Use the following command substituting in the SPN of the client that you want to test.

cluster::> set diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
cluster::*> secd name-mapping show -node clusternode1 -vserver svm-nas -direction krb-unix -name <client SPN>

<client SPN> maps to nfs


I doubt this post is exhaustive in covering this setup, but hopefully it is a pointer in the right direction and includes some useful information on troubleshooting.

If you have any suggestions on items that could be added to the troubleshooting, or information that you think is missing from the guide, please let me know and I can update.

Reference Materials

TR-4073 Secure Unified Authentication for NFS –

TR-4067 Clustered Data ONTAP NFS Best Practice and Implementation Guide –

Requirements for configuring Kerberos with NFS –

rpc.gssd(8) – Linux man page –

krb5.conf –

Encryption Type Selection in Kerberos Exchanges –

Kerberos NFSv4 How To –


Citrix Director: Cannot Initiate Remote Assistance Session

This is more of a walkthrough of the process that I went through in troubleshooting an issue. This particular issue I think would be rare, as there are only a few situations where total closure from the Internet is actually required or implemented, but the process itself provides a potentially useful guide to logging and investigating an issue end to end, from Director to delivery machine.

The error was that when you try to initiate a Remote Assistance session from Citrix Director, you get the following error after about 30 seconds (screenshot: Citrix Shadowing Error 1). It coincides with the following event in the Application log of the Director server (screenshot: Citrix Shadowing Error 2), and with this one on the machine that you are attempting to shadow (screenshot: Citrix Shadowing Error 3).


Start by looking in the IIS logs on the Citrix Director server. These are in C:\inetpub\logs\LogFiles\W3SVC1. Open the most recent log file and search up from the end of the file for the following term:

ShadowSession
The line returned should look something like this:

2015-02-26 09:39:25 POST /Director/service.svc/web/ShadowSession - 443 username Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:35.0)+Gecko/20100101+Firefox/35.0 500 0 0 37343

The values we care about are the four space-separated values at the end of the line. They correspond to the following four headings:

sc-status sc-substatus sc-win32-status time-taken

The sc-status of 500 means that an internal server error occurred. The last value is the time taken in milliseconds. If it is above 30000 then the response was not received quickly enough and a timeout occurred. This is the WCF timeout, which defaults to 30 seconds. In this case the total time taken was 37343, indicating that the request timed out.
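This check can be scripted. Below is a minimal, hypothetical sketch (the helper name is mine, not part of Director or IIS) that pulls the four trailing fields out of a W3C log line and flags a likely WCF timeout:

```powershell
# Hypothetical helper: pull sc-status and time-taken from the end of an
# IIS W3C log line and flag a request that ran past the 30 second WCF timeout.
function Test-IisTimeout {
    param([Parameter(Mandatory=$true)][String]$LogLine)

    $Fields    = $LogLine.Trim() -split '\s+'
    $Status    = [int]$Fields[-4]   # sc-status
    $TimeTaken = [int]$Fields[-1]   # time-taken, in milliseconds

    [PSCustomObject]@{
        ScStatus  = $Status
        TimeTaken = $TimeTaken
        IsTimeout = ($TimeTaken -gt 30000)
    }
}

# The example line from above: ScStatus 500, IsTimeout True
Test-IisTimeout '2015-02-26 09:39:25 POST /Director/service.svc/web/ShadowSession - 443 username Mozilla/5.0+(Windows+NT+6.1) 500 0 0 37343'
```

You could feed every line of the log through this to find all slow requests, but for a one-off investigation the manual search is usually enough.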

The next step is to enable Citrix Director Logging. This will log details of all of the various calls that are made to the Desktop Delivery Controllers from the Director server. The Director server does not communicate with the Delivery machines in any way, all requests are processed by the DDC.

To enable Citrix Director Logging on the Director server:

  1. Create a folder called C:\Logs
  2. Assign Modify permissions to the INET_USR account
  3. Open IIS
  4. Browse to Sites -> Default Web Site -> Director
  5. Select Application Settings
  6. Set the following 2 properties:
    1. FileName C:\logs\Director.log
    2. LogToFile 1
  7. Restart IIS

Now that logging is enabled, you can retry the attempt to shadow the session through the Director web interface. Make a note of the rough time that you click on the Shadow button, it will help in verifying that you are looking at the right record in the log files. Once you have replicated the error, you can open the log file that should have been generated.

In the open file, starting from the bottom, search for: ENTRY: ShadowSession. You should be taken to a row that looks similar to this.

02/26/2015 12:03:01.2926 : [t:9, s:5xj2ur0kvvzhahemlqraow30] ENTRY: ShadowSession service called

The first entry inside the square brackets represents a thread number. All actions happen on a thread. In this case the thread number is 9. This information is useful in tracking the various log items as all related entries will have occurred on the same thread number. About 20 or so lines further down the log file you should see the PowerShell equivalent command that the DDC will have executed in order to start the Shadowing request. It should look similar to this:

02/26/2015 12:03:01.6833 : [t:9, s:5xj2ur0kvvzhahemlqraow30] PowerShell equivalent: New-BrokerMachineCommand -Category DirectorPlugin -Synchronous -MachineUid 78 -CommandName GetRAConnectionString -CommandData (New-Object System.Text.ASCIIEncoding).GetBytes('<GetRAConnectionStringPayload xmlns="" xmlns:i=""><SessionId>fc267736-3565-4642-95e1-d7a85d789ce9</SessionId></GetRAConnectionStringPayload>') | foreach {(New-Object System.Text.ASCIIEncoding).GetString($_.CommandResponseData)}

We are now interested in the next log line on this thread, which should tell you whether or not this command was successful. Chances are, if you are reading this, it was not! In the issue described here, the following error was logged:

02/26/2015 12:03:34.6052 : [t:9, s:5xj2ur0kvvzhahemlqraow30] TimeoutException caught: The request channel timed out while waiting for a reply after 00:00:30. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout.
02/26/2015 12:03:34.6052 : [t:9, s:5xj2ur0kvvzhahemlqraow30] Connector has faulted. Disposing.

At this point, we know it failed, and we know it timed out. This ties up with the timeout value we observed in the first log file. We also have the PowerShell equivalent command which is being run from the DDC. The next step is to verify that the problem is not with the Director server.
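Since every related entry carries the same [t:n, ...] marker, it can help to filter the Director log down to a single thread before reading it. A small sketch, assuming only the log line format shown above (the helper name is mine):

```powershell
# Hypothetical helper: pull the thread number from a Director log line of the
# form "date time : [t:9, s:sessionid] message" so related entries can be grouped.
function Get-DirectorLogThread {
    param([Parameter(Mandatory=$true)][String]$LogLine)
    if ($LogLine -match '\[t:(\d+),') { [int]$Matches[1] } else { $null }
}

# Example: show only the entries for thread 9
# Get-Content C:\Logs\Director.log | Where-Object { (Get-DirectorLogThread $_) -eq 9 }
```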

Log on to your DDC, open up an elevated PowerShell prompt. We are going to import the Citrix Snap-Ins, and then run the command above to get an idea of how long it is taking.

Add-PSSnapIn *Citrix*

Then copy in the PowerShell command, wrapping it in Measure-Command. It will look like this:

Measure-Command { New-BrokerMachineCommand -Category DirectorPlugin -Synchronous -MachineUid 78 -CommandName GetRAConnectionString -CommandData (New-Object System.Text.ASCIIEncoding).GetBytes('<GetRAConnectionStringPayload xmlns="" xmlns:i=""><SessionId>fc267736-3565-4642-95e1-d7a85d789ce9</SessionId></GetRAConnectionStringPayload>') | foreach {(New-Object System.Text.ASCIIEncoding).GetString($_.CommandResponseData)}}

P.S. Don’t forget the additional trailing } that is needed.

After running the command you should have been told how long the command is taking to be run from the DDC. In my case it was always returning 32 seconds. Anything over 30 will be a timeout. If it is a timeout then you can quite safely say that the Director server is not the issue, as all it is doing is reporting the failure of another component. If you find that the system is not timing out, and the request is working, then you will need to investigate communications between the Director Server and the delivery controllers.
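The over/under-30-seconds check can itself be scripted. The sketch below uses Start-Sleep purely as a stand-in for the New-BrokerMachineCommand call, to show the pattern:

```powershell
# Sketch: time a script block and flag a probable WCF timeout (>= 30 s).
# Start-Sleep stands in here for the New-BrokerMachineCommand call above.
$Elapsed = Measure-Command { Start-Sleep -Milliseconds 200 }

if ($Elapsed.TotalSeconds -ge 30) {
    Write-Warning "Took $([int]$Elapsed.TotalSeconds)s - over the 30s WCF timeout"
} else {
    Write-Host "Completed in $([int]$Elapsed.TotalMilliseconds)ms - no timeout"
}
```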

Next up is determining what is going on between the DDC and the Delivery Machine. I guessed there must be some form of communication breakdown from the DDC. I opened a copy of WireShark portable on the DDC. Started a capture, and re-ran the PS command from above. Again I had the 32 second timeout.

To make more sense of the results, I applied a filter limiting the capture to communications to and from the delivery machine: (ip.dst == <delivery machine IP>) || (ip.src == <delivery machine IP>)

What was returned was a number of successful HTTP requests: a batch of requests at the start of the transmission, about 30 seconds of nothing, and then a handful of exchanges at the end. No failures or retransmissions. I removed the filter and scanned through the remaining entries within this 30 second period, and again nothing strange popped out. (Thankfully this was a development system, so the amount of traffic generated was negligible.)

Although at this point I could not be certain that the DDC was not the culprit, I felt the problem actually had to be with the delivery machine. There were no failures being logged on the DDC, no dropped packets or retransmissions, nothing out of the ordinary.

I must add that at this point I did enable logging on the DDC, but I quickly turned it off again. The volume of information in the logs is just overwhelming, and I could not find a way to track requests through the logs. Logging back off, I moved on to the delivery machines.

I started on the delivery machines again with the WireShark trace. I wanted to confirm what I had seen on the DDC matched what was happening on the delivery machines. I started a trace, and again ran the PowerShell script above. I could see the same exchange of HTTP communications, again with the 30 second break in the communications.

Removing the filter though, I was able to see on this machine a couple of requests, each with a number of retries. After the 30 seconds were up, these retries stopped. To prove this, I re-ran the command with the capture enabled another 3 times. The same couple of IP addresses were contacted every time for 30 seconds, before the failure message appeared in Director.

Each of these requests was an attempt to contact the online Windows Certificate revocation list. The DNS resolved successfully, but attempts to connect were being dropped by the firewall protecting the network. As I mentioned earlier, this is a closed network, with no Internet access for the clients that the network contains.

Each time that a request to shadow was received, the attempt to get the certificate revocation list would be made. This process took about 30 seconds, and the remaining 2 seconds is lost in negotiations and connections between the various servers.

The solution in our particular case was to use Group Policy to tell the clients that they could not use Internet communications, as well as the firewall which enforced that. There seems to be an inherent assumption in Windows 7 that it will be able to contact the Internet unless you tell the client explicitly that it cannot.

The setting is:

Computer Configuration\Administrative Templates\System\Internet Communication Management\Restrict Internet communication

There are a number of other settings in the next folder, but only this one seemed to stop the CRL check that Remote Assistance was performing.

Hopefully this is somewhat useful in tracing errors of your own, even if this is not the root cause of your issue.


Installing Nimble Connection Manager Toolkit Silently

If you want to install the Nimble Connection Manager for Windows silently, you will need to specify a couple of options at the command line:

Setup-NimbleNWT-x64.exe /S /v/qb- INSTALLDIR=\""C:\Program Files\Nimble Storage\"" NLOGSDIR=\""C:\Program Files\Nimble Storage\Logs\"" /norestart

The important section is the NLOGSDIR. If this option is not specified then you will get a MSIEXEC Error 1606: Could not access network location 0. I chose to specify the INSTALLDIR as well so that I knew exactly where everything was going.


PowerShell: Running processes independently of a PS Session on Remote Machines

PowerShell remoting is a great way of utilising commands and processing power of remote systems all from one console. It is also good at pulling information from remote systems and collating this together. There are plenty of examples of using PSSessions, and the Invoke-Command functions to manipulate remote machines, bring down remote modules to work with locally, etc.

One of the shortcomings that I have come across, is the apparent inability to create a long running job on a remote session.

For example, I have a function that performs some processing, which can take anywhere from 20 minutes to 6 hours, depending on the amount of information that is supplied. This job is self-contained and reports by email, so once it is started there is no interaction.

I attempted to create a PSSession, and use Invoke-Command. This started the remote job successfully, however, when I closed my local instance of the shell window, the remote process also stopped.

Using Invoke-Command to start a process on the remote machine, something like the snippet below, exhibited the same result.

$Script = {Start-Process -FilePath C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -ArgumentList "-Command Get-Service"}
Invoke-Command -ComputerName remotepc -ScriptBlock $Script

I tried a number of variations of this, exhausting all of the options relating to both sessions and invoked commands, but nothing I found actually achieved my goal.
Looking outside of these commands, I found that the WMI Classes expose the Win32_Process class, which include a Process.Create method. PowerShell can interact with WMI well, so after some quick testing, I found this method created a new process on the remote machine which did not terminate when my local client disconnected.
I was able to wrap this up into a nice little function that can be re-used. It exposes the computer name, credentials and command options. The example included shows how you can start a new instance of PowerShell on the remote machine which can then run a number of commands. This could be changed to run any number of commands, or, if the script gets too long you could just get PowerShell to run a pre-created script file.

# ----------------------------------------------------------------------------------------------------------
# PURPOSE:    Starts a process on a remote computer that is not bound to the local PowerShell Session
# VERSION     DATE         USER                DETAILS
# 1           17/04/2015   Craig Tolley        First version
# ----------------------------------------------------------------------------------------------------------

<#
.SYNOPSIS
    Starts a process on the remote computer that is not tied to the PowerShell session that called this command.
    Unlike Invoke-Command, the session that creates the process does not need to be maintained.
    Any processes should be designed such that they will end themselves, else they will continue running in the background until the targeted machine is restarted.
.EXAMPLE
    Start-RemoteProcess -ComputerName remotepc -Command notepad.exe
    Starts Notepad on the remote computer called remotepc using the current session credentials
.EXAMPLE
    Start-RemoteProcess -ComputerName remotepc -Command "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -Command ""Get-Process | Out-File C:\Processes.txt"" " -Credential DOMAIN\Username
    Starts PowerShell on the remote PC, running the Get-Process command which will write output to C:\Processes.txt using the supplied credentials
#>
function Start-RemoteProcess {
    param(
        [Parameter(Mandatory=$true, Position=0)]
        [String]$ComputerName,

        [Parameter(Mandatory=$true, Position=1)]
        [String]$Command,

        [Parameter(Position=2)]
        [System.Management.Automation.CredentialAttribute()]$Credential = [System.Management.Automation.PSCredential]::Empty
    )

    #Test that we can connect to the remote machine
    Write-Host "Testing Connection to $ComputerName"
    If ((Test-Connection $ComputerName -Quiet -Count 1) -eq $false) {
        Write-Error "Failed to ping the remote computer. Please check that the remote machine is available"
        return
    }

    #Create a parameter collection, including the credentials parameter only if it was supplied
    $ProcessParameters = @{}
    $ProcessParameters.Add("ComputerName", $ComputerName)
    $ProcessParameters.Add("Class", "Win32_Process")
    $ProcessParameters.Add("Name", "Create")
    $ProcessParameters.Add("ArgumentList", $Command)
    if ($Credential -ne [System.Management.Automation.PSCredential]::Empty) { $ProcessParameters.Add("Credential", $Credential) }

    #Start the actual remote process
    Write-Host "Starting the remote process."
    Write-Host "Command: $Command" -ForegroundColor Gray

    $RemoteProcess = Invoke-WmiMethod @ProcessParameters

    if ($RemoteProcess.ReturnValue -eq 0)
        { Write-Host "Successfully launched command on $ComputerName with a process id of $($RemoteProcess.ProcessId)" }
    else
        { Write-Error "Failed to launch command on $ComputerName. The Return Value is $($RemoteProcess.ReturnValue)" }
}

One caveat of this approach is the expansion of variables. Every variable will be expanded before it is passed to the WMI command. For straight values (strings, integers, dates) that is all fine. However, any objects need to be created as part of the script in the remote session. Remember that the new PowerShell session is just that: new. Everything that you want to use must be defined.

This code can be used to run any process. Generally you will want to ensure that you specify the full path to any executables. Remember that any paths are relative to the remote server, so be careful when you specify them.

Should you use this code to run PowerShell commands or scripts then you will need to keep a check on any punctuation that you use when specifying the command. Quotes will need to be doubled to escape them for example. This requires testing.
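As an illustration of the quote doubling (the paths below are examples only): inside a double-quoted PowerShell string a literal quote is written as two quotes, so a remote command that itself contains quotes is built like this:

```powershell
# Inside a double-quoted string, "" produces a single literal quote character.
# The paths below are illustrative only.
$Inner = "Get-Process | Out-File C:\Processes.txt"
$RemoteCommand = "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -Command ""$Inner"""

# $RemoteCommand now ends: -Command "Get-Process | Out-File C:\Processes.txt"
$RemoteCommand
```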

Also be aware, that this code will start a process, but there is nothing to stop it. Any process should either be self-terminating, or you will need to have another method of terminating the process. If you start a PowerShell session, they will generally terminate once the commands specified have completed.

2 people found this post useful.
Posted in Powershell | Leave a comment

PowerShell: Using AlphaFS to list files and folder longer than 260 characters and checking access

PowerShell is great. However, it has a couple of limitations – either by design or inheritance that are annoying to say the least. One commonly documented failing, which is inherited from the .NET framework is its inability to access files that have a total path length over 260 characters. Another limitation is the linear nature in which commands are executed.

The first issue here is a major issue, particularly when working with network file systems, roaming profiles or any area where longer path lengths exist. Having Mac or Linux users on your network means that path lengths over 260 characters are more likely, as both of these systems support long path names.

There is a very good library available, AlphaFS, which can help overcome the 260 character limit. It implements most of the .NET Framework functions for accessing files and folders, without the path length limitation. It’s a great addition to any project that accesses files and folders.

I have been working on a project to migrate users who are still using roaming profiles to using folder redirection. Some scripting has been required to automate the process and minimise user interaction. This is being done using PowerShell. One of the components of the script involves finding how many files and folders existed, how big they are, and whether or not we had access to read them.

PowerShell could do this.

Get-ChildItem $path -Recurse -Force

can list all the files and the sizes (Length property). Piping that list to a

Get-Content -Tail 1 -ErrorAction SilentlyContinue -ErrorVariable ReadErrors | Out-Null

will give you a variable, $ReadErrors, listing all files that produced errors. (Note that -ErrorVariable takes the variable name without the $.) All good.

This command is susceptible to the path limit though. It is also slow: each item is processed in order, one at a time. Whilst getting just the end of a file is quick, the whole command still takes time. Running against a 200MB user profile, it took over 2 minutes to list all files with sizes into a variable and give me a list of files with access denied. With over 2TB of user profiles to migrate, that was too long.

With this method out of the window, I looked at using some C# code that I could import. The .NET Framework offers a host of solutions to processing this sort of data. I ended up with the function below. It uses the AlphaFS library to get details of the files and directories, which removes the path length limitation. Also, as I was using the .NET Framework, I could use File.Open(). This just opens the file without reading it; it still throws an access denied error if the file cannot be read, just more quickly. The whole process could then be combined into a Parallel.ForEach loop, so directories and files are recursed concurrently. The result was a scan of a 200MB profile in around 10 seconds – a much more acceptable time.

The code could be used in a C# project, or in the format below it can be included in a PowerShell script. You will need to download the AlphaFS library and put it in an accessible location so that it can be included in your script.

# Start of File Details Definition
$RecursiveTypeDef = @"
using System;
using System.Collections;
using System.Collections.Generic;
using System.Data;
using System.Threading.Tasks;
using System.Diagnostics;
using System.Linq;

public class FileDetails
{
    public List<FileInfo> GetRecursiveFileFolderList(string RootDirectory)
    {
        m_FileFolderList = new List<FileInfo>();
        m_GetFileDetails(RootDirectory);
        return m_FileFolderList;
    }

    private List<FileInfo> m_FileFolderList = new List<FileInfo>();

    private void m_GetFileDetails(string DirectoryName)
    {
        List<string> AllFiles = new List<string>();
        List<string> AllFolders = new List<string>();

        FileInfo FI = new FileInfo();
        FI.FileName = DirectoryName;
        FI.Type = Type.Directory;
        FI.FileSize = 0;
        FI.ReadSuccess = true;
        try {
            AllFiles = Alphaleonis.Win32.Filesystem.Directory.GetFiles(DirectoryName).ToList();
        } catch {
            FI.ReadSuccess = false;
        }
        try {
            AllFolders = Alphaleonis.Win32.Filesystem.Directory.GetDirectories(DirectoryName).ToList();
        } catch {
            FI.ReadSuccess = false;
        }
        lock (m_FileFolderList) {
            m_FileFolderList.Add(FI);
        }

        Parallel.ForEach(AllFiles, File =>
        {
            FileInfo FileFI = new FileInfo();
            FileFI.FileName = File;
            FileFI.Type = Type.File;
            try {
                FileFI.FileSize = Alphaleonis.Win32.Filesystem.File.GetSize(File);
                FileFI.ReadSuccess = true;
            } catch {
                FileFI.ReadSuccess = false;
            }
            lock (m_FileFolderList) {
                m_FileFolderList.Add(FileFI);
            }
        });

        Parallel.ForEach(AllFolders, Folder => { m_GetFileDetails(Folder); });
    }

    public struct FileInfo
    {
        public long FileSize;
        public string FileName;
        public Type Type;
        public bool ReadSuccess;
    }

    public enum Type
    {
        File,
        Directory
    }
}
"@
#Update the following lines to point to your AlphaFS.dll file.
Add-Type -Path $PSScriptRoot\AlphaFS.dll
Add-Type -TypeDefinition $RecursiveTypeDef -ReferencedAssemblies "$PSScriptRoot\AlphaFS.dll", System.Data, System.Core

# End of File Details Definition

# Use of the function: 
$FileInfo = New-Object FileDetails
$Info = $FileInfo.GetRecursiveFileFolderList("C:\Windows")
$Info | Format-Table -Autosize -Wrap

This will output a full file and directory list of the C:\Windows directory. The property ReadSuccess is true if the file could be opened for reading.
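From there, normal PowerShell pipelines can summarise the results. The sketch below uses a hand-built stand-in for $Info, since a real scan needs AlphaFS present; the property names match the FileInfo struct above:

```powershell
# Stand-in for the output of GetRecursiveFileFolderList(); the property
# names match the FileInfo struct (FileName, Type, FileSize, ReadSuccess).
$Info = @(
    [PSCustomObject]@{ FileName = 'C:\a.txt'; Type = 'File';      FileSize = 100; ReadSuccess = $true  }
    [PSCustomObject]@{ FileName = 'C:\b.txt'; Type = 'File';      FileSize = 250; ReadSuccess = $false }
    [PSCustomObject]@{ FileName = 'C:\dir';   Type = 'Directory'; FileSize = 0;   ReadSuccess = $true  }
)

# Total size of all files, and the items we could not open for reading
$TotalBytes = ($Info | Where-Object { $_.Type -eq 'File' } | Measure-Object FileSize -Sum).Sum
$Denied     = @($Info | Where-Object { -not $_.ReadSuccess })

Write-Host "Total file size: $TotalBytes bytes; unreadable items: $($Denied.Count)"
```

For the profile-migration use case, $TotalBytes and $Denied give you the size to migrate and the files that will fail, before you start copying anything.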

Plenty of scope to modify this to meet your needs if they are something different, but an example of how you can bring in the power of the .NET Framework into PowerShell to help really boost some of your scripts.


‘You Have Been Logged On With a Temporary Profile’ when all profiles have been redirected to a specific location

This is a very strange issue, which I think will only affect a handful of people, and only those who have the right mix of configurations as described below.

Users logging on to a Windows 7 machine received the following popup (screenshot: Temporary Profile):

This message implied that there would be some informative details in the Event Log. Unfortunately, in this situation, nothing. No errors, no warnings, no information.

On this particular machine we were using the following GPO setting to force users to a specific roaming profile location. The machines all sit inside a controlled network, so access to the normal profile location was not allowed.

Computer Configuration –> Administrative Templates –> System –> User Profiles –> Set roaming profile path for all users logging onto this computer

In the ProfileList key in the registry you can see the location that has been configured for the Central Profile (i.e. the server copy of the roaming profile). The value can be found at HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\<SID>. Checking out the key for the specific user showed the following (screenshot: UserProfileRegKey).

The GPO was only configured with \\server\profiles$\%username% though. The addition of the domain component into the path was unexpected.

After clearing all the profiles from the local machine and rebooting, thinking that something must be corrupt, the issue recurred. Running a ProcMon against the system at boot time and tracking the change to this key showed the User Profile Service creating the CentralProfile value and populating it with the wrong value from the start.

This machine is quite heavily managed, and this involves running a couple of PowerShell scripts as scheduled tasks at startup. We had configured the tasks to run as local only, as they did not require any access to network resources. They were configured as below:

(Screenshot: User Profile - Scheduled Task)

For some reason, even though this task was set to run locally, it was influencing the location of the roaming profile. Most strangely, it wasn't just influencing the profile path for the account configured in the scheduled task; it was influencing every user account that logged on to the machine.

The fix for us was fortunately very simple. The job that the task was doing could quite easily be achieved by using the local SYSTEM account. After changing the task credentials, I did have to clear out all of the profiles from the system to remove the incorrect values, but since this change, the accounts have all loaded the correct profiles from the correct locations.
