Maximum Number of PowerShell Parameter Sets in Function

I have been working on a module which includes a function that has many options for how it can be executed. Through the flexibility of parameter sets, I have been able to define all of the available options in detail and use the built-in validation to minimise the amount of variable checking that I need to do in the main code block of the function.

However, I appear to have hit a limitation on the number of distinct parameter sets that you can define. When I added my 33rd parameter set, the sets stopped being evaluated properly, and running Get-Help function-name -ShowWindow showed some duplicate sets and only ever 32 combinations.

When I only have 32 parameter sets, everything works as it should; any more seems to break the functionality. This is using PowerShell 3.0.

I have not been able to find any documentation on the web to either confirm or deny this limitation.


Citrix StoreFront ‘Group View’ – An Alternative to Folder View

We have a StoreFront set up in which user subscriptions are disabled, so every application is automatically subscribed for each user. Our applications are organised into folders in the XenDesktop site to make them easier to find; the folders were added after we discovered that everything was just chucked together in one big group in the StoreFront, which made finding applications difficult for users. This gave us a StoreFront that looked something like this when a user logged in:

StoreFront Folder View

Feedback from our users was that, although this layout was better, it would be even better if all applications were grouped but available from the home page. So, with a little JavaScript tweaking, we ended up with this:

StoreFront 'Group View'

Each group is shown as a level, and subfolders are shown as further nested levels.

Below are the details of how we did it. I must stress that this is not a supported Citrix change, but certainly worked for us.

All changes are made on the StoreFront server.

  1. Take a backup of the file C:\inetpub\wwwroot\Citrix\StoreName\scripts\Default.htm.script.min.js
  2. Open the file, run it through a JavaScript beautifier to make it more readable, and paste the results back into the file.
  3. Search for the following function. In my file it was on line 7366

_generateItemsMarkup: function() {
    var b = this;
    var d = "";
    var c = b._getBranch(b.options.currentPath);
    for (var e in c.folders) {
        d += b._generateFolderHtml(e, c.folders[e])
    }
    a.each(c.apps, function(f, g) {
        d += b._generateAppHtml(g)
    });
    return d
},
  4. Replace it with this:

_generateItemsMarkup: function() {
    var b = this;
    var d = "";
    var c = b._getBranch(b.options.currentPath);
    a.each(c.apps, function(f, g) {
        d += b._generateAppHtml(g)
    });
    for (var e in c.folders) {
        d += b._listFolders(b.options.currentPath + '/' + e)
    }
    return d
},

_listFolders: function(y) {
    var b = this;
    var d = "";
    d += '<div id="app-directory-path"><div><ul>';
    d += b._generateBreadcrumbMarkup(y.substring(5).split("/"));
    d += '</ul></div></div>';
    var x = b._getBranch(y);
    d += '<div id="myapps-container">';
    a.each(x.apps, function(f, g) { d += b._generateAppHtml(g) });
    for (var f in x.folders) { d += b._listFolders(y + '/' + f) }
    d += '</div>';
    return d
},
  5. Save the updated file and copy it to all StoreFront servers in the deployment.

No reboot necessary. The change will take effect the next time that the StoreFront is refreshed from a client.


Setting Up Kerberos NFS on NetApp Data OnTap 8.3 Cluster Mode

I have just been through the headaches of getting this set up and working, so I thought I would share a few notes and tips that I have come across on my way.

I am not saying that this is a complete set-up guide, or that it contains every step needed to make the solution work. It is probably far from it. However, I do hope that it points someone else in the right direction.

It is worthwhile gaining an understanding of Kerberos and how it actually works. There are several guides to Kerberos on the web; I found one that helped explain the process for me, but there are plenty of others.

There is a recent NetApp TR that covers this setup, and if you read it very carefully, it does contain all of the information that you should need to get this working. The problem with the TR is that it is very detailed and covers a wide range of setups. My advice is to print the document and read it at least twice, highlighting all of the parts that you believe are relevant to your setup. The document is TR-4073.

If you are coming at this having previously set up Kerberos on a DoT 8.2 or older system then you will notice that a lot of the Kerberos commands have moved, and I think nearly everything now resides within the nfs Kerberos context from the command line.

My Setup

  • Windows 2012 R2 domain controllers, running in Windows 2008 domain functional level
  • NetApp DataOnTap 8.3 Cluster Mode
  • Ubuntu 12.04 and Ubuntu 14.04 clients, which are already bound to the AD domain and can be logged on to using domain credentials
  • All devices on the same subnet, with no firewalls in place

The guide here, which uses AES 128 as the encryption mode, requires DoT 8.3; support for AES 128 and AES 256 encryption was added in this version. If you are using an older version then you will need to use either DES or 3DES encryption, which will require modification of your domain controller and is not covered at all below.

I have not managed to get AES256 to work. Although all of the items in the key exchange supported it, the NetApp never managed to see the supplied Kerberos tickets as valid. As I was aiming for any improvement over DES, I was happy to settle for AES 128 and did not continue to spend time investigating the issues with AES256. If anyone happens to get it to work and would like to send me a pointer on what I have missed then it would be much appreciated.

So, on to the details:

  1. Setting Up the Domain Controller

No changes had to be made to the Windows DC. This is only because we were using AES encryption which Windows DCs have enabled by default. In this case the DC is also the authoritative DNS server for the domain with both forward and reverse lookup zones configured.

  2. Define a Kerberos Realm on the SVM

In 8.3, this can be completed in the nfs kerberos realm context at the command line. There is quite a bit of repetition in the definition of the domain controller IP address here; the values below are placeholders, so substitute your own.

cluster::> nfs kerberos realm create -realm TEST.DOMAIN.CO.UK -vserver svm-nas -kdc-vendor Microsoft -kdc-ip <dc_ip> -adserver-name <dc_name> -adserver-ip <dc_ip> -adminserver-ip <dc_ip> -passwordserver-ip <dc_ip>

Verify that the realm is created

cluster::> nfs kerberos realm show

Kerberos                 Active Directory KDC       KDC
Vserver Realm                    Server           Vendor     IP Address
-------- ------------------------ ---------------- ---------- -----------------
  3. Bind the SVM interface to the Kerberos realm

Now we need to bind this SVM interface to the Kerberos realm. This will create an object in Active Directory for NFS. This object will contain the Service Principal Names for the SVM.

cluster::*> nfs kerberos interface enable -vserver svm-nas -lif svm-nas-data -spn nfs/svm-nas.test.domain.co.uk@TEST.DOMAIN.CO.UK

Once the command is run, open up Active Directory Users and Computers, look in the Computers container and check that a new computer object has been created. There should be an object with the name NFS-SVM-NAS.

You can also verify that the object has been created with the correct SPNs by querying the domain for the SPNs that are listed against an object. Run the following command from an elevated command prompt:

Setspn.exe -L NFS-SVM-NAS

The command should return output similar to this.

C:\>setspn -L NFS-SVM-NAS
Registered ServicePrincipalNames for CN=NFS-SVM-NAS,CN=Computers,DC=test,DC=domain,DC=co,DC=uk:
  4. Restrict the accepted Encryption Types to just use AES on the SVM

If you are not making any changes to the Windows Domain Controller, then DES and 3DES encryption will not be supported by the domain controller. For tidiness I prefer to disable these options on the SVM so that nothing can even try to use them. Any clients that do would get an Access Denied error when trying to mount.

cluster::> nfs server modify -vserver * -permitted-enc-types aes-128,aes-256

This command will modify all SVMs on the cluster; alternatively, you could specify just the SVM that you wanted to modify.

  5. Setting up Kerberos – Unix Name Mapping

This setup will attempt to authenticate the machine using the machine SPN. This means that there needs to be a name-mapping to accept that connection and turn it into a username that is valid for authentication purposes for a volume. By the time that the name mapping kicks in, the authentication process has been completed. The name-mapping pattern uses regular expressions, which are always fun!

The name mapping rule should be as specific as you can possibly make it. This could be just your realm, or it could be part of the FQDN plus the realm.

In my case, I have multiple FQDNs for clients, so the match I set up was based on matching the realm only.

cluster::*> vserver name-mapping create -vserver svm-nas -direction krb-unix -position 1 -pattern (.+)@TEST\.DOMAIN\.CO\.UK -replacement nfs

The name mapping is applied per SVM. To see all of the mappings run:

cluster::*> vserver name-mapping show
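The regular expression side of the mapping can be sanity-checked offline before touching the filer. This is just a sketch using sed with a made-up client principal; it mimics what the krb-unix rule above should do:

```shell
# Made-up client principal; the sed expression mirrors the mapping rule:
# anything ending @TEST.DOMAIN.CO.UK is replaced with 'nfs'.
principal='nfs/client01.test.domain.co.uk@TEST.DOMAIN.CO.UK'
printf '%s\n' "$principal" | sed -E 's/^(.+)@TEST\.DOMAIN\.CO\.UK$/nfs/'
# prints: nfs
```

If the output is not the replacement user, the pattern will not match on the filer either.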
  6. Setting up the NFS User account

A user needs to be created which corresponds with the name mapping rule that you have defined in the previous step. If no user is defined, then the mapping will work but access will still be denied. To create a user:

cluster::> vserver services name-service unix-user create -vserver svm-nas -user nfs -id 500 -primary-gid 0
  7. Verify that Forward and Reverse DNS Lookups are working

This is important to get right. Kerberos requires that all clients can successfully forward and reverse look up the IP address. Check that, using your DNS server, you can perform an nslookup of the registered name of the SVM that you specified in step 3. Ping is not sufficient, as it can cache results and may not actually query the DNS server.

All clients will also need to have fully resolvable DNS entries. Verify that everything is being registered correctly and can be resolved. If there are any errors then they will need to be corrected before continuing as mounts will fail.

  8. Check the configuration of the accepted and default ticket types in the Kerberos configuration on the client.

The clients need to know that they can use the AES128 encryption method, and also that this method takes a higher priority than other suites, such as ArcFour or DES. Check the entries that are listed in the /etc/krb5.conf file. The settings that I found to work for me have been included below. An important note is that with DoT 8.3, there is no longer a requirement to enable the Allow Weak Encryption option; AES is considered a strong encryption method.

    default_realm = TEST.DOMAIN.CO.UK
    ticket_lifetime = 7d
    default_tgs_enctypes = aes128-cts-hmac-sha1-96 arcfour-hmac-md5 aes256-cts-hmac-sha1-96 des-cbc-crc des-cbc-md5 des3-hmac-sha1
    default_tkt_enctypes = aes128-cts-hmac-sha1-96 arcfour-hmac-md5 aes256-cts-hmac-sha1-96 des-cbc-crc des-cbc-md5 des3-hmac-sha1
    permitted_enctypes = aes128-cts-hmac-sha1-96 arcfour-hmac-md5 aes256-cts-hmac-sha1-96 des-cbc-crc des-cbc-md5 des3-hmac-sha1
    dns_lookup_realm = true
    dns_lookup_kdc = true
    dns_fallback = true
    allow_weak_crypto = false

You will notice that AES128-CTS-HMAC-SHA1-96 has been brought to the front of the list. I originally had the order as AES256/AES128/ArcFour; however, this did not work. Dropping AES256 down the list enabled everything to work. I did not drop AES256 entirely, as other services are using Kerberos and are successfully using this encryption method.
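To confirm the resulting order without eyeballing the whole file, you can pull the first entry from the list. A rough sketch with the line copied out (check /etc/krb5.conf directly in practice):

```shell
# Print the first enctype for default_tgs_enctypes; with the ordering
# above it should be aes128-cts-hmac-sha1-96.
line='default_tgs_enctypes = aes128-cts-hmac-sha1-96 arcfour-hmac-md5 aes256-cts-hmac-sha1-96'
printf '%s\n' "$line" | awk -F'= ' '{split($2, a, " "); print a[1]}'
# prints: aes128-cts-hmac-sha1-96
```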

After making changes to this file, you will need to restart the gssd service using the command

sudo service gssd restart
  9. Done!

At this point, with a heap of luck, you should be able to issue a mount command with the sec=krb5 option specified and have it work successfully.

If it hasn’t worked, then see the troubleshooting information below.


Troubleshooting

One of the biggest things that annoys me with articles such as this is when you get to the end, they say it should work, and it doesn't. You are left with a configuration that you have no idea whether it is right, and no idea how to fix. So here are a few places to look for information to solve any problems that you may hit.

This section is not exhaustive. There are probably many other tools that you could use to check out what is happening, but this is what I used to get me to the process above.

If it is not working, then there is plenty of information that you can obtain and filter through in order to determine the problem. Once I had the information, I found the problem could usually be identified reasonably easily.

When I hit an error, I tended to run all of these logs and then look through all of them.

  a) NetApp Filer SecD Trace

The secd module on the filer is responsible for authentication and name lookup. This information is useful when the filer is rejecting the credentials, or when the SPN cannot be mapped to a valid user.

You first have to turn on the logging, then run your command, then turn it off.

cluster::> set diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
cluster::*> secd trace set -trace-all yes -node clusternode1

Run your mount command here

cluster::*> secd trace set -trace-all no -node clusternode1
cluster::*> event log show -source secd

If this logged an error, then the NetApp was involved in the process. These messages tended to be fairly clear and useful.

  b) Run mount with verbose mode turned on

On your Ubuntu machine, you can run the mount command in verbose mode to see what is happening.

sudo mount svm-nas:/nfs_volume /mnt/nfs_volume -o sec=krb5 -vvvv
  c) Run the RPC GSSD daemon in the foreground with verbose logging.

This is the client side daemon responsible for handling Kerberos requests. Getting the verbose output from this can show you what is being requested and whether it is valid or not. You will have to stop the gssd service first, and remember to restart the service when you are finished. You will have to run this in another terminal session as it is a blocking foreground process.

sudo service gssd stop
sudo rpc.gssd -vvvvf

Use Ctrl+C to break when finished.

sudo service gssd start
  d) Capture a tcpdump from the client side.

This allows you to look at the process from a network perspective and see what is actually being transmitted. It was through a network trace that I was able to see that the ordering of my encryption types was wrong.

sudo tcpdump -i eth0 -w /home/username/krb5tcpdump.trc

Again, this is a blocking foreground process, so it will need to be run in another terminal session. When you are finished, the trace can be opened up in Wireshark. Specify the following filter in Wireshark to see only requests for your client:

kerberos && ip.addr == <client_ip>

Substitute the IP address for the address of your client.

When looking at the Kerberos packets, it is important to drill down and check that the sname field, etype and any encryption settings are what you expect them to be. Encryption types in requests are listed in the order that they will be tried. If the first one succeeds against AD, but is not accepted by the NetApp, then you will get access denied.

  e) Testing Name Mapping on the NetApp Cluster

A number of the errors that I was getting were related to problems with name resolution on the NetApp. These were shown clearly by using the secd trace in section a). You can test name mapping directly on the NetApp without going through the whole process of mounting.

Use the following command substituting in the SPN of the client that you want to test.

cluster::> set diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
cluster::*> secd name-mapping show -node clusternode1 -vserver svm-nas -direction krb-unix -name nfs/client.test.domain.co.uk@TEST.DOMAIN.CO.UK

'nfs/client.test.domain.co.uk@TEST.DOMAIN.CO.UK' maps to 'nfs'


I doubt this post is exhaustive in covering this setup, but hopefully it is a pointer in the right direction and includes some useful information on troubleshooting.

If you have any suggestions on items that could be added to the troubleshooting, or information that you think is missing from the guide, please let me know and I can update.

Reference Materials

  • TR-4073 – Secure Unified Authentication for NFS
  • TR-4067 – Clustered Data ONTAP NFS Best Practice and Implementation Guide
  • Requirements for configuring Kerberos with NFS
  • rpc.gssd(8) – Linux man page
  • krb5.conf
  • Encryption Type Selection in Kerberos Exchanges
  • Kerberos NFSv4 How To


Citrix Director: Cannot Initiate Remote Assistance Session

This is more of a walkthrough of the process that I went through in troubleshooting an issue. This particular issue I think would be rare, as there are only a few situations where total closure from the Internet is actually required or implemented, but the process itself provides a potentially useful guide to logging and investigating an issue end to end, from Director to delivery machine.

The error was that when you try and initiate a Remote Assistance Session from Citrix Director you get the following error after about 30 seconds:
Citrix Shadowing Error 1
This coincides with the following event in the Application log of the Director server:
Citrix Shadowing Error 2
And on the machine that you are attempting to shadow:
Citrix Shadowing Error 3


Start by looking in the IIS logs on the Citrix Director server. These are in C:\inetpub\logs\LogFiles\W3SVC1. Open the most recent log file and search up from the end of the file for the string ShadowSession.
The line returned should look something like this:

2015-02-26 09:39:25 POST /Director/service.svc/web/ShadowSession - 443 username Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:35.0)+Gecko/20100101+Firefox/35.0 500 0 0 37343

The values that we care about are the four space-separated values at the end of the line, which correspond to the following headings:

sc-status sc-substatus sc-win32-status time-taken

The sc-status of 500 means that an internal server error occurred. The last value is the time taken in milliseconds. If the value is above 30000, then the response was not received quickly enough and a timeout occurred. This is the WCF timeout, which defaults to 30 seconds. In this case, the total time taken was 37343 ms, which indicates that the request timed out.
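This check is easy to script when you have a lot of log lines to sift through. A rough sketch (the field positions assume the IIS log format shown above, where the user agent is a single token):

```shell
# Pull sc-status (4th from last) and time-taken (last) from the log line
# and flag a likely WCF timeout when over 30000 ms.
line='2015-02-26 09:39:25 POST /Director/service.svc/web/ShadowSession - 443 username Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:35.0)+Gecko/20100101+Firefox/35.0 500 0 0 37343'
printf '%s\n' "$line" | awk '{
    status = $(NF-3); taken = $NF + 0
    printf "sc-status=%s time-taken=%dms%s\n", status, taken, (taken > 30000 ? " (timeout)" : "")
}'
# prints: sc-status=500 time-taken=37343ms (timeout)
```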

The next step is to enable Citrix Director Logging. This will log details of all of the various calls that are made to the Desktop Delivery Controllers from the Director server. The Director server does not communicate with the Delivery machines in any way, all requests are processed by the DDC.

To enable Citrix Director Logging on the Director server:

  1. Create a folder called C:\Logs
  2. Assign Modify permissions to the INET_USR account
  3. Open IIS
  4. Browse to Sites -> Default Web Site -> Director
  5. Select Application Settings
  6. Set the following 2 properties:
    1. FileName C:\logs\Director.log
    2. LogToFile 1
  7. Restart IIS

Now that logging is enabled, you can retry the attempt to shadow the session through the Director web interface. Make a note of the rough time at which you click the Shadow button; it will help in verifying that you are looking at the right record in the log files. Once you have replicated the error, you can open the log file that should have been generated.

In the open file, starting from the bottom, search for: ENTRY: ShadowSession. You should be taken to a row that looks similar to this.

02/26/2015 12:03:01.2926 : [t:9, s:5xj2ur0kvvzhahemlqraow30] ENTRY: ShadowSession service called

The first entry inside the square brackets represents a thread number. All actions happen on a thread. In this case the thread number is 9. This information is useful in tracking the various log items as all related entries will have occurred on the same thread number. About 20 or so lines further down the log file you should see the PowerShell equivalent command that the DDC will have executed in order to start the Shadowing request. It should look similar to this:
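Because the thread tag is constant for a request, a simple filter on the literal tag pulls out one request's trail. A sketch with shortened, fictional sample entries:

```shell
# Filtering on the literal tag [t:9, keeps only the entries belonging
# to the request being traced.
log='12:03:01.2926 : [t:9, s:abc] ENTRY: ShadowSession service called
12:03:01.6833 : [t:9, s:abc] PowerShell equivalent: New-BrokerMachineCommand ...
12:03:02.0001 : [t:4, s:def] Unrelated request'
printf '%s\n' "$log" | grep -F '[t:9,'
# prints the two [t:9, lines only
```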

02/26/2015 12:03:01.6833 : [t:9, s:5xj2ur0kvvzhahemlqraow30] PowerShell equivalent: New-BrokerMachineCommand -Category DirectorPlugin -Synchronous -MachineUid 78 -CommandName GetRAConnectionString -CommandData (New-Object System.Text.ASCIIEncoding).GetBytes('<GetRAConnectionStringPayload xmlns="" xmlns:i=""><SessionId>fc267736-3565-4642-95e1-d7a85d789ce9</SessionId></GetRAConnectionStringPayload>') | foreach {(New-Object System.Text.ASCIIEncoding).GetString($_.CommandResponseData)}

At this point we are interested in the next log line on this thread, which should tell you whether or not the command was successful. Chances are, if you are reading this, it was not! In the issue described here, the following error was logged:

02/26/2015 12:03:34.6052 : [t:9, s:5xj2ur0kvvzhahemlqraow30] TimeoutException caught: The request channel timed out while waiting for a reply after 00:00:30. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout.
02/26/2015 12:03:34.6052 : [t:9, s:5xj2ur0kvvzhahemlqraow30] Connector has faulted. Disposing.

At this point, we know it failed, and we know it timed out. This ties up with the timeout value we observed in the first log file. We also have the PowerShell equivalent command which is being run from the DDC. The next step is to verify that the problem is not with the Director server.

Log on to your DDC and open an elevated PowerShell prompt. We are going to import the Citrix snap-ins and then run the command above to get an idea of how long it is taking.

Add-PSSnapIn *Citrix*

Then copy in the PowerShell command, wrapping the statement in a Measure-Command block. It will look like this:

Measure-Command { New-BrokerMachineCommand -Category DirectorPlugin -Synchronous -MachineUid 78 -CommandName GetRAConnectionString -CommandData (New-Object System.Text.ASCIIEncoding).GetBytes('<GetRAConnectionStringPayload xmlns="" xmlns:i=""><SessionId>fc267736-3565-4642-95e1-d7a85d789ce9</SessionId></GetRAConnectionStringPayload>') | foreach {(New-Object System.Text.ASCIIEncoding).GetString($_.CommandResponseData)}}

P.S. Don't forget the additional trailing } that is needed.

After running the command you will be told how long it took to run from the DDC. In my case it always returned around 32 seconds, and anything over 30 will be a timeout. If it is a timeout, then you can quite safely say that the Director server is not the issue, as all it is doing is reporting the failure of another component. If you find that the request is not timing out and is working, then you will need to investigate communications between the Director server and the Delivery Controllers.

Next up is determining what is going on between the DDC and the delivery machine. I guessed there must be some form of communication breakdown from the DDC, so I opened a copy of portable Wireshark on the DDC, started a capture, and re-ran the PowerShell command from above. Again I had the 32-second timeout.

To make more sense of the results, I applied a filter limiting the capture to communications to and from the delivery machine: (ip.dst == <delivery_ip>) || (ip.src == <delivery_ip>), substituting the delivery machine's address.

What was returned was a number of successful HTTP requests. There were a number of requests at the start of the transmission, about 30 seconds of nothing, and then a handful of exchanges at the end. No failures or retransmissions. I removed the filter and scanned through the remaining entries within this 30-second period, and again nothing strange popped out. (Thankfully this was a development system, so the amount of traffic generated was negligible.)

Although at this point I could not be certain that the DDC was not the culprit, I felt the problem actually had to be with the delivery machine. There were no failures being logged on the DDC, no dropped packets or retransmissions, nothing out of the ordinary.

I must add that at this point I did enable logging on the DDC, but I quickly turned it off again. The volume of information in the logs is just overwhelming, and I could not find a way to track requests through them. With logging back off, I moved on to the delivery machines.

I started on the delivery machines, again with a Wireshark trace. I wanted to confirm that what I had seen on the DDC matched what was happening on the delivery machines. I started a trace and again ran the PowerShell command above. I could see the same exchange of HTTP communications, again with the 30-second break in the communications.

Removing the filter, though, I was able to see on this machine a couple of requests which had a number of retries. After the 30 seconds was up, these retries stopped. To prove this, I retried the command with the capture enabled another three times. The same couple of IP addresses were contacted every time for 30 seconds before the failure message appeared in Director.

Each of these requests was an attempt to contact the online Windows certificate revocation list (CRL). The DNS resolved successfully, but attempts to connect were being dropped by the firewall protecting the network. As I mentioned earlier, this is a closed network, with no Internet access for the clients it contains.

Each time that a request to shadow was received, the attempt to get the certificate revocation list would be made. This process took about 30 seconds, and the remaining 2 seconds is lost in negotiations and connections between the various servers.

The solution in our particular case was to use Group Policy to tell the clients that they could not use Internet communications, in addition to the firewall that enforced it. There seems to be an inherent assumption in Windows 7 that it will be able to contact the Internet unless you explicitly tell the client that it cannot.

The setting is

Computer Configuration\Administrative Templates\System\Internet Communication Management\Restrict Internet Communication

There are a number of other settings in the next folder, but only this one seemed to stop the CRL check that Remote Assistance was performing.

Hopefully this is somewhat useful in the tracing of errors that you may have though, even if this is not the root cause of your issue.


Installing Nimble Connection Manager Toolkit Silently

If you want to install the Nimble Connection Manager for Windows silently, you will need to specify a couple of options at the command line:

Setup-NimbleNWT-x64.exe /S /v/qb- INSTALLDIR=\""C:\Program Files\Nimble Storage\"" NLOGSDIR=\""C:\Program Files\Nimble Storage\Logs\"" /norestart

The important section is the NLOGSDIR. If this option is not specified then you will get a MSIEXEC Error 1606: Could not access network location 0. I chose to specify the INSTALLDIR as well so that I knew exactly where everything was going.


PowerShell: Running processes independently of a PS Session on Remote Machines

PowerShell remoting is a great way of utilising the commands and processing power of remote systems, all from one console. It is also good at pulling information from remote systems and collating it together. There are plenty of examples of using PSSessions and the Invoke-Command function to manipulate remote machines, bring remote modules down to work with locally, and so on.

One of the shortcomings that I have come across is the apparent inability to create a long-running job in a remote session.

For example, I have a function that performs some processing, which can take anywhere from 20 minutes to 6 hours, depending on the amount of information supplied. The job is self-contained and reports its results by email, so once it is started there is no further interaction.

I attempted to create a PSSession, and use Invoke-Command. This started the remote job successfully, however, when I closed my local instance of the shell window, the remote process also stopped.

Using Invoke-Command to start a process on the remote machine, something like the snippet below, exhibited the same result.

$Script = {Start-Process -FilePath C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -ArgumentList "-Command Get-Service"}
Invoke-Command -ComputerName remotepc -ScriptBlock $Script

I tried a number of variations of this, exhausting all of the options relating to both sessions and invoked commands, but nothing actually achieved my goal.

Looking outside of these commands, I found that WMI exposes the Win32_Process class, which includes a Create method. PowerShell interacts with WMI well, and after some quick testing I found that this method creates a new process on the remote machine which does not terminate when the local client disconnects.

I was able to wrap this up into a nice little function that can be re-used. It exposes the computer name, credentials and command options. The example included shows how you can start a new instance of PowerShell on the remote machine, which can then run a number of commands. This could be changed to run any number of commands, or, if the script gets too long, you could just get PowerShell to run a pre-created script file.

# ----------------------------------------------------------------------------------------------------------
# PURPOSE:    Starts a process on a remote computer that is not bound to the local PowerShell Session
# VERSION     DATE         USER                DETAILS
# 1           17/04/2015   Craig Tolley        First version
# ----------------------------------------------------------------------------------------------------------

<#
.Synopsis
    Starts a process on the remote computer that is not tied to the PowerShell session that called this command.
    Unlike Invoke-Command, the session that creates the process does not need to be maintained.
    Any processes should be designed such that they will end themselves, else they will continue running in the background until the targeted machine is restarted.
.EXAMPLE
    Start-RemoteProcess -ComputerName remotepc -Command notepad.exe
    Starts Notepad on the remote computer called remotepc using the current session credentials
.EXAMPLE
    Start-RemoteProcess -ComputerName remotepc -Command "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -Command ""Get-Process | Out-File C:\Processes.txt"" " -Credential DOMAIN\Username
    Starts PowerShell on the remote PC, running the Get-Process command which will write output to C:\Processes.txt using the supplied credentials
#>
function Start-RemoteProcess {
    param(
        [Parameter(Mandatory=$true, Position=0)]
        [String]$ComputerName,

        [Parameter(Mandatory=$true, Position=1)]
        [String]$Command,

        [Parameter(Position=2)]
        [System.Management.Automation.CredentialAttribute()]$Credential = [System.Management.Automation.PSCredential]::Empty
    )

    #Test that we can connect to the remote machine
    Write-Host "Testing connection to $ComputerName"
    If ((Test-Connection $ComputerName -Quiet -Count 1) -eq $false) {
        Write-Error "Failed to ping the remote computer. Please check that the remote machine is available"
        return
    }

    #Create a parameter collection to include the credentials parameter
    $ProcessParameters = @{}
    $ProcessParameters.Add("ComputerName", $ComputerName)
    $ProcessParameters.Add("Class", "Win32_Process")
    $ProcessParameters.Add("Name", "Create")
    $ProcessParameters.Add("ArgumentList", $Command)
    if ($Credential -ne [System.Management.Automation.PSCredential]::Empty) { $ProcessParameters.Add("Credential", $Credential) }

    #Start the actual remote process
    Write-Host "Starting the remote process."
    Write-Host "Command: $Command" -ForegroundColor Gray

    $RemoteProcess = Invoke-WmiMethod @ProcessParameters

    if ($RemoteProcess.ReturnValue -eq 0) {
        Write-Host "Successfully launched command on $ComputerName with a process id of $($RemoteProcess.ProcessId)"
    }
    else {
        Write-Error "Failed to launch command on $ComputerName. The return value is $($RemoteProcess.ReturnValue)"
    }
}

One caveat of this approach is the expansion of variables. Every variable will be expanded before the command string is passed to the WMI call. For straight values – strings, integers, dates – that is all fine. However, any objects need to be created as part of the script in the remote session. Remember that the new PowerShell session is just that: new. Everything that you want to use must be defined within it.
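To make the expansion behaviour concrete, here is a sketch (the path and command below are hypothetical, not from the original post):

```powershell
# $ReportPath is expanded locally, BEFORE the command string is sent to the remote machine
$ReportPath = "C:\Temp\Processes.txt"   # hypothetical path
$Cmd = "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -Command ""Get-Date | Out-File $ReportPath"""
Start-RemoteProcess -ComputerName remotepc -Command $Cmd
# The remote session receives the literal string C:\Temp\Processes.txt; any object
# (a PSCredential or DataTable, for example) cannot cross this boundary and must be
# recreated inside the remote command itself.
```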

This code can be used to run any process. Generally you will want to ensure that you specify the full path to any executables. Remember that any paths are relative to the remote server, so be careful when you specify them.

Should you use this code to run PowerShell commands or scripts, keep a close eye on the punctuation in the command string: embedded quotes need to be doubled to escape them, for example. Test the command string carefully.

Also be aware that this code will start a process, but there is nothing to stop it. Any process should either be self-terminating, or you will need another method of terminating it. PowerShell sessions will generally terminate once the specified commands have completed.

Posted in Powershell | Leave a comment

PowerShell: Using AlphaFS to list files and folder longer than 260 characters and checking access

PowerShell is great. However, it has a couple of limitations – either by design or inheritance that are annoying to say the least. One commonly documented failing, which is inherited from the .NET framework is its inability to access files that have a total path length over 260 characters. Another limitation is the linear nature in which commands are executed.

The path-length issue is a major one, particularly when working with network file systems, roaming profiles, or any area where longer paths exist. Having Mac or Linux users on your network makes paths over 260 characters more likely, as both of those systems support longer path names.

There is a very good library available, AlphaFS, which can help overcome the 260 character limit. It implements most of the .NET Framework functions for accessing files and folders, without the path length limitation. It’s a great addition to any project that accesses files and folders.

I have been working on a project to migrate users who are still using roaming profiles to using folder redirection. Some scripting has been required to automate the process and minimise user interaction. This is being done using PowerShell. One of the components of the script involves finding how many files and folders existed, how big they are, and whether or not we had access to read them.

PowerShell could do this.

Get-ChildItem $path -Recurse -Force

can list all the files and the sizes (Length property). Piping that list to a

Get-Content -Tail 1 -ErrorAction SilentlyContinue -ErrorVariable ReadErrors | Out-Null

will give you a variable, $ReadErrors, listing every file that could not be read. All good.
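Putting those two commands together looks something like the sketch below (assuming $path holds the root folder to scan; note that -ErrorVariable takes the variable name without the $ sigil):

```powershell
# List every file, attempting to read the last line of each to test access
Get-ChildItem $path -Recurse -Force -File |
    Get-Content -Tail 1 -ErrorAction SilentlyContinue -ErrorVariable ReadErrors |
    Out-Null

# Each error's TargetObject is the file that could not be read
$ReadErrors | ForEach-Object { $_.TargetObject }
```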

This command is susceptible to the path limit though. It is also slow: each item is processed in order, one at a time. Whilst getting just the end of a file is quick, the whole command still takes time. Run against a 200 MB user profile, it took over 2 minutes to list all files with sizes into a variable and give me a list of files with access denied. With over 2 TB of user profiles to migrate, that was too long.

With this method out of the window, I looked at using some C# code that I could import. The .NET Framework offers a host of solutions for processing this sort of data, and I ended up with the function below. It uses the AlphaFS library to get the details of files and directories, which removes the path length limitation. As I was using the .NET Framework directly, I could also use File.Open(), which opens a file without reading it: it still throws an access denied error if the file cannot be read, just more quickly. The whole process could then be wrapped in a parallel ForEach loop, so directories and files are recursed concurrently. The result was a scan of a 200 MB profile in around 10 seconds – a much more acceptable time.

The code could be used in a C# project, or in the format below it can be included in a PowerShell script. You will need to download the AlphaFS library and put it in an accessible location so that it can be included in your script.

# Start of File Details Definition
$RecursiveTypeDef = @"
using System;
using System.Collections;
using System.Collections.Generic;
using System.Data;
using System.Threading.Tasks;
using System.Diagnostics;
using System.Linq;

public class FileDetails
{
    public List<FileInfo> GetRecursiveFileFolderList(string RootDirectory)
    {
        m_FileFolderList = new List<FileInfo>();
        m_GetFileDetails(RootDirectory);
        return m_FileFolderList;
    }

    private List<FileInfo> m_FileFolderList = new List<FileInfo>();

    private void m_GetFileDetails(string DirectoryName)
    {
        List<string> AllFiles = new List<string>();
        List<string> AllFolders = new List<string>();

        // Record the directory itself, noting whether its contents could be listed
        FileInfo FI = new FileInfo();
        FI.FileName = DirectoryName;
        FI.Type = Type.Directory;
        FI.FileSize = 0;
        FI.ReadSuccess = true;
        try {
            AllFiles = Alphaleonis.Win32.Filesystem.Directory.GetFiles(DirectoryName).ToList();
        } catch {
            FI.ReadSuccess = false;
        }
        try {
            AllFolders = Alphaleonis.Win32.Filesystem.Directory.GetDirectories(DirectoryName).ToList();
        } catch {
            FI.ReadSuccess = false;
        }
        lock (m_FileFolderList) {
            m_FileFolderList.Add(FI);
        }

        // Record every file in this directory, in parallel
        Parallel.ForEach(AllFiles, File =>
        {
            FileInfo FileFI = new FileInfo();
            FileFI.FileName = File;
            FileFI.Type = Type.File;
            try {
                FileFI.FileSize = Alphaleonis.Win32.Filesystem.File.GetSize(File);
                FileFI.ReadSuccess = true;
            } catch {
                FileFI.ReadSuccess = false;
            }
            lock (m_FileFolderList) {
                m_FileFolderList.Add(FileFI);
            }
        });

        // Recurse into the subdirectories, also in parallel
        Parallel.ForEach(AllFolders, Folder => { m_GetFileDetails(Folder); });
    }

    public struct FileInfo
    {
        public long FileSize;
        public string FileName;
        public Type Type;
        public bool ReadSuccess;
    }

    public enum Type
    {
        File,
        Directory
    }
}
"@
#Update the following lines to point to your AlphaFS.dll file.
Add-Type -Path $PSScriptRoot\AlphaFS.dll
Add-Type -TypeDefinition $RecursiveTypeDef -ReferencedAssemblies "$PSScriptRoot\AlphaFS.dll", "System.Core", "System.Data"

# End of File Details Definition

# Use of the function: 
$FileInfo = New-Object FileDetails
$Info = $FileInfo.GetRecursiveFileFolderList("C:\Windows")
$Info | Format-Table -Autosize -Wrap

This will output a full file and directory list of the C:\Windows directory. The property ReadSuccess is true if the file could be opened for reading.

Plenty of scope to modify this to meet your needs if they are something different, but an example of how you can bring in the power of the .NET Framework into PowerShell to help really boost some of your scripts.
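As a small example of that, the output can be summarised directly – this sketch assumes the $Info variable from the usage example above, and uses the Type enum nested inside the FileDetails class:

```powershell
# Total size of all files, plus everything that could not be read
$Files = $Info | Where-Object { $_.Type -eq [FileDetails+Type]::File }
$TotalMB = [Math]::Round((($Files | Measure-Object -Property FileSize -Sum).Sum) / 1MB, 1)
$Denied = $Info | Where-Object { -not $_.ReadSuccess }
Write-Host "$($Files.Count) files, $TotalMB MB in total; $($Denied.Count) items could not be read"
```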

Posted in C#, Powershell, Programming | 2 Comments

‘You Have Been Logged On With a Temporary Profile’ when all profiles have been redirected to a specific location

This is a very strange issue, which I think will only affect a handful of people, and only those who have the right mix of configurations as described below.

Users logging on to a Windows 7 machine received the following popup:

Temporary Profile

This message implied that there would be some informative details in the Event Log. Unfortunately, in this situation, there was nothing: no errors, no warnings, no information.

On this particular machine we were using the following GPO setting to force users to a specific roaming profile location. The machines all sit inside a controlled network, so access to the normal profile location was not allowed.

Computer Configuration –> Administrative Templates –> System –> User Profiles –> Set roaming profile path for all users logging onto this computer

In the ProfileList key in the registry you can see the location that has been configured for the Central Profile (i.e. the server copy of the roaming profile). The value for a given user can be found at HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\<SID>. Checking the key for the affected user showed the following.

UserProfileRegKey

The GPO was only configured with \\server\profiles$\%username% though. The addition of the Domain component into the path was unexpected.
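To audit what the User Profile Service has recorded for every account on a machine, the ProfileList key can be dumped with a few lines of PowerShell (a sketch; run in an elevated session):

```powershell
# List each profile SID alongside the central and local profile paths recorded for it
Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList' |
    ForEach-Object {
        $Props = Get-ItemProperty $_.PSPath
        [PSCustomObject]@{
            SID            = $_.PSChildName
            CentralProfile = $Props.CentralProfile
            LocalProfile   = $Props.ProfileImagePath
        }
    } | Format-Table -AutoSize
```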

After clearing all the profiles from the local machine and rebooting, thinking that something must be corrupt, the issue recurred. Running ProcMon against the system at boot time and tracking the change to this key showed the User Profile Service creating the CentralProfile value and populating it with the wrong value from the start.

This machine is quite heavily managed, and this involves running a couple of PowerShell scripts as scheduled tasks at startup. We had configured the tasks to run as local only, as they did not require any access to network resources. They were configured as below:

User Profile - Scheduled Task

For some reason, even though this task was set to run locally, it was influencing the location of the roaming profile. Most strangely, it was not just influencing the profile path for the account configured in the scheduled task; it was influencing every user account that logged on to the machine.

The fix for us was fortunately very simple. The job that the task was doing could quite easily be achieved by using the local SYSTEM account. After changing the task credentials, I did have to clear out all of the profiles from the system to remove the incorrect values, but since this change, the accounts have all loaded the correct profiles from the correct locations.

Posted in Windows 7 | Leave a comment

Highlighting a search term in a DataGridView

I’m building a search form into an application that has a database back end. I managed to configure a nice little search which takes some user input, and then modifies the results shown in a DataGridView control. However, not being satisfied with just showing a subset of results, I wanted to be able to highlight the values that had matched so that it was clearer for the end user to see why the records still in the view were there.

This, it turns out, is not as simple as I had hoped. However, I now have it working. The form has a text box with a sub that validates the input and runs the actual search. The part we are interested in here, though, is that we handle the CellPainting event on the DataGridView control and customise the painting of each cell to meet our needs.

To give you an idea of what the highlighted form looks like:

Highlighting in a DataGridView


This is the code that does the work. It is designed to not be case sensitive, and to pick up multiple occurrences of a string in the cell. It is well commented so that you can see what is going on:

''' <summary>
''' Highlight the currently entered search filter in the results to show how it was matched
''' </summary>
''' <param name="sender"></param>
''' <param name="e"></param>
''' <remarks></remarks>
Private Sub dgv_Results_CellPainting(sender As Object, e As DataGridViewCellPaintingEventArgs) Handles dgv_Results.CellPainting

    'If there is no search string, this is a header cell, or there is nothing in this cell, then get out. 
    If txt_SearchFilter.Text = String.Empty Then Return
    If (e.Value Is Nothing) Then Return
    If e.RowIndex < 0 Or e.ColumnIndex < 0 Then Return

    e.Handled = True
    e.PaintBackground(e.CellBounds, True)

    'Get the value of the text in the cell, and the search term. Work with everything in lowercase for more accurate highlighting
    Dim str_SearchTerm As String = txt_SearchFilter.Text.Trim.ToLower
    Dim str_CellText As String = DirectCast(e.FormattedValue, String).ToLower

    'Create a list of the character ranges that need to be highlighted. We need to know the start index and the length
    Dim HLRanges As New List(Of CharacterRange)
    Dim SearchIndex As Integer = str_CellText.IndexOf(str_SearchTerm)
    Do Until SearchIndex = -1
        HLRanges.Add(New CharacterRange(SearchIndex, str_SearchTerm.Length))
        SearchIndex = str_CellText.IndexOf(str_SearchTerm, SearchIndex + str_SearchTerm.Length)
    Loop

    'We also work with the original cell text which has not been converted to lowercase, else the measured sizes are incorrect
    str_CellText = DirectCast(e.FormattedValue, String)

    'Choose your colours. A different colour is used on the currently selected rows
    Dim HLColour As SolidBrush
    If ((e.State And DataGridViewElementStates.Selected) <> DataGridViewElementStates.None) Then
        HLColour = New SolidBrush(Color.DarkGoldenrod)
    Else
        HLColour = New SolidBrush(Color.BurlyWood)
    End If

    'Loop through all of the found instances and draw the highlight box
    For Each HLRange In HLRanges

        'Create the rectangle. It should start just underneath the top of the cell, and go to just above the bottom
        Dim HLRectangle As New Rectangle()
        HLRectangle.Y = e.CellBounds.Y + 2
        HLRectangle.Height = e.CellBounds.Height - 5

        'Determine the size of the text before the area to highlight, and the size of the text to highlight. 
        'We need to know the size of the text before so that we know where to start the highlight rectangle
        Dim TextBeforeHL As String = str_CellText.Substring(0, HLRange.First)
        Dim TextToHL As String = str_CellText.Substring(HLRange.First, HLRange.Length)
        Dim SizeOfTextBeforeHL As Size = TextRenderer.MeasureText(e.Graphics, TextBeforeHL, e.CellStyle.Font, e.CellBounds.Size)
        Dim SizeOfTextToHL As Size = TextRenderer.MeasureText(e.Graphics, TextToHL, e.CellStyle.Font, e.CellBounds.Size)

        'Set the position and width of the rectangle, a little wider to make the highlight clearer
        If SizeOfTextBeforeHL.Width > 5 Then
            HLRectangle.X = e.CellBounds.X + SizeOfTextBeforeHL.Width - 6
            HLRectangle.Width = SizeOfTextToHL.Width - 6
        Else
            HLRectangle.X = e.CellBounds.X + 2
            HLRectangle.Width = SizeOfTextToHL.Width - 6
        End If

        'Paint the highlight area
        e.Graphics.FillRectangle(HLColour, HLRectangle)

    Next

    'Paint the rest of the cell as usual
    e.PaintContent(e.CellBounds)

End Sub

Posted in | Leave a comment

How to Create a Specific Customized Logon Page for Each VPN vServer based on FQDN without breaking Email Based Discovery

Citrix have published a guide on creating a customised logon page for each virtual server, based on the FQDN received. The article works and, true to its aim, the sites respond on their relative FQDNs and return the correctly customised login page for each of the vServers.

Once this has been completed though, the vServer that has been configured with a Responder on the NetScaler will no longer be able to use email-based discovery or automatic configuration using the external store name. The error we were getting in Receiver was this:

“Your account cannot be added using this server address. Make sure you entered it correctly. You may need to enter your email address instead.”

The same error was displayed if using the email address or the FQDN of the vServer.

Disabling the Responder rule that was created following the KB allowed the configuration to work. Based on this, I fully removed the Responder and instead started looking for other ways to accomplish the customisation.

These are the steps that I took to set up the rewrite rule instead. I am running NetScaler 10.5.

Using the GUI:

1. Check that rewrite is enabled in System –> Settings –> Configure Basic Features.

2. Go to AppExpert –> Rewrite –> Actions. Create a new Action. Enter a name and set the type to Replace. In the ‘Expression to choose target location’ field, enter ‘HTTP.REQ.URL’. In the ‘expression to replace with’ field, enter the full web address of the newly created custom logon page. In this example I have entered “”. It should look similar to the image below. Click Create when you are done.
Citrix_NetScaler_Rewrite1_Action
3. Go to AppExpert –> Rewrite –> Policy. Create a new Policy. Enter a name and set the Action to the name of the action created in step 2. The Undefined-Result-Action should be set to ‘Global-undefined-result-action’. In the expression enter the following, substituting in your FQDN: ‘HTTP.REQ.HOSTNAME.CONTAINS(“”) && HTTP.REQ.URL.CONTAINS(“index.html”)’
Citrix_NetScaler_Rewrite2_Policy

4. Finally, we need to bind this policy to the Global HTTP Request receiver. Go to AppExpert –> Rewrite –> Policy. Select the policy that you just created, and then click Policy Manager at the top. Accept the default settings for the Bind Point (shown below for completeness). Click Continue. Select Add Binding, then choose the policy that you created in step 3. The other details can be left as default; click Bind, then click Done in the Policy Manager.
5. Test, and hopefully all will work.

Using the CLI:
1. enable feature rewrite
2. add rewrite action REWRITE_ACT replace "HTTP.REQ.URL" "\"\""
3. add rewrite policy REWRITE_POL "HTTP.REQ.HOSTNAME.CONTAINS(\"\") && HTTP.REQ.URL.CONTAINS(\"index.html\")" REWRITE_ACT
4. bind rewrite global REWRITE_POL 1 END -type REQ_DEFAULT
5. Test

Following this, both the custom page redirection and email-based discovery work as they should.

Posted in Citrix | Leave a comment