Website Redesign

Today the new Avian Waves website went online!  I'm now using DotNetNuke as the CMS.  The blog runs on SunBlogNuke, and the forums system is still YAF.Net.  If you had previously created a forums account, that account no longer exists because the old and new authentication systems were not compatible, so you will need to create a new one.  If you register with the same username and email address, your previous settings should still be available.  Enjoy!

OpsMgr 2012: Recalculate Health on all Agents

It’s easy to do with PowerShell.

Get-SCOMAgent | foreach { $_.HostComputer.RecalculateMonitoringState() }
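If you have a lot of agents, the same one-liner can be expanded with progress reporting and basic error handling.  A sketch, assuming the OperationsManager module is loaded and you're connected to a management group (the DisplayName property is from memory, so verify it with Get-Member):

```powershell
# Recalculate health on every agent, with progress and error handling
$agents = Get-SCOMAgent
$i = 0
foreach ($agent in $agents) {
    $i++
    Write-Progress -Activity "Recalculating monitoring state" `
        -Status $agent.DisplayName -PercentComplete (100 * $i / $agents.Count)
    try {
        $agent.HostComputer.RecalculateMonitoringState()
    }
    catch {
        Write-Warning "Failed on $($agent.DisplayName): $_"
    }
}
```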


FortiNet Fortigate Shenanigans

At work we standardized on Fortigate firewalls a while back because they are feature-packed, easy to use, and reliable units at a very affordable price.  Compared to Juniper and Cisco, it was night and day.

Recently, we purchased some new Fortigate 80C units for our internal firewall replacement, and since these were fresh installs I decided it was time to dive into FortiOS 5.0.  As I was mapping out the virtual IPs to our back end servers, I ran into a strange issue.  The unit was telling me it hit its virtual IP limit at 50.  In previous OS versions, that limit was 500.  Yes, you read that right: upgrading to the new version of the OS gives you a ten-fold decrease in the number of virtual IPs you can map!

I couldn’t believe that was true – it had to be a soft limit and I was missing something.  So I called support.  They didn’t think it was true either, until they saw in the documentation that the limit did, indeed, decrease from 500 to 50 in the latest version.  They suspected it might be a bug, so I had them escalate it to a senior engineer.  Here is their official response.


Unfortunately, 50 VIPs is the maximum limit for the size of unit that you have. Due to the change in OS and the features
that are now provided in the device, the limits have been set so that the device is not overloaded and eventually causing it to
go into conserve mode. This has been confirmed by a senior engineer and unfortunately there are no work arounds to this issue.


Fortinet TAC Americas

What a load of BS!  A handful of new features necessitated reducing the maximum VIP count by an order of magnitude even if you aren’t using the new features?  Shenanigans!

The truth is that they are trying to force users to upgrade to their higher end (read: more expensive) models, since they market the 80C more as a branch office unit, even though, spec-wise, it is more than capable of being a front end firewall for internet servers.  I don’t blame the engineers.  They made a fine product.  The problem is that some suit up the chain ran the numbers, saw that people were buying the 80C instead of units that cost two to three times as much from Juniper and Cisco, and wanted a slice of that delicious pie.  I think it might backfire, though.

This sort of corporate behavior pisses me off so much that unless this changes in the future, I can’t ever recommend Fortigate again.  Who knows when or if they’ll arbitrarily change some other limit and you’ll get screwed by an OS upgrade?

I didn’t ask, but I wonder what happens if somebody already had, say, 150 VIPs configured and they perform an upgrade?  Does it just truncate the last 100 and call it a day?

Meanwhile, I downgraded to FortiOS 4.0 MR3, which should work just fine for the planned lifetime of this equipment.  Maybe SonicWall is in my future…


PowerShell / WMI: Free Disk Space from a Cluster Shared Volume (CSV) in a Windows Failover Cluster

There’s a great set of PowerShell cmdlets for Failover Clusters, but what if you just want some information about your Cluster Shared Volumes on a remote computer without installing those cmdlets?  There’s an easy way with WMI.

Get-WmiObject -Impersonation Impersonate -Authentication PacketPrivacy -ComputerName "SERVERNAME" -Namespace "root\MSCluster" -class "MSCluster_DiskPartition" | where {$_.VolumeLabel -eq "VOLUMENAME"} | select -first 1 | select -Expand FreeSpace

In the above snippet, change SERVERNAME to one of the cluster nodes and VOLUMENAME to the volume label of the CSV you want to examine.  Of course, you don’t have to select a single volume if you want information from all the cluster volumes.  I did it this way because I only wanted to look at the CSV and not the quorum drive.  The above returns a single integer representing the free space, for use later on in my script.

The impersonation and authentication settings are required for remote access but not local access.
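To see every clustered disk partition at once rather than a single value, the same WMI class can be queried without the filters.  A sketch, assuming the TotalSize and Path properties exist on MSCluster_DiskPartition in your OS version (verify with Get-Member):

```powershell
# List every clustered disk partition with its label, path, and space figures
Get-WmiObject -Impersonation Impersonate -Authentication PacketPrivacy `
    -ComputerName "SERVERNAME" -Namespace "root\MSCluster" `
    -Class "MSCluster_DiskPartition" |
    Select-Object VolumeLabel, Path, FreeSpace, TotalSize |
    Format-Table -AutoSize
```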

Adapt the above to suit your needs. :-)


PowerShell: Quickly Finding the Oldest and Newest Files in a Folder

I whipped up this script to quickly find the oldest and newest files in a folder with PowerShell, because we have some archive folders with millions of files, which can crash Windows Explorer.  Other scripts I’ve seen online sort the entire collection and then filter it, but that’s inefficient because sorting millions of file records is slow.  Instead, I use ForEach-Object to track the oldest and newest dates as I walk the directory listing in whatever order it arrives.  It saves a lot of time and memory.  Enjoy!

$olddate = [DateTime]::MaxValue
$newdate = [DateTime]::MinValue
$oldfn = ""
$newfn = ""
$path = "."
Get-ChildItem $path | ForEach-Object {
    if ($_.LastWriteTime -lt $olddate -and -not $_.PSIsContainer) {
        $oldfn = $_.Name
        $olddate = $_.LastWriteTime
    }
    if ($_.LastWriteTime -gt $newdate -and -not $_.PSIsContainer) {
        $newfn = $_.Name
        $newdate = $_.LastWriteTime
    }
}
$output = ""
if ($oldfn -ne "") { $output += "`nOldest: " + $olddate + " -- " + $oldfn }
if ($newfn -ne "") { $output += "`nNewest: " + $newdate + " -- " + $newfn }
if ($output -eq "") { $output += "`nFolder is empty." }
$output + "`n"





Using a Reverse Proxy to Automatically Force External Lync Meeting Guests to Use Silverlight Client

Microsoft, in their infinite wisdom, designed Lync in such a way that if members of two organizations deploy Lync and try to schedule meetings with each other, Lync will use federation to negotiate authentication between the two domains.  This is great if you have a federated relationship with every partner you want to hold meetings with.  But what if you want to do ad hoc meetings with unauthenticated guests?  Microsoft gives you two choices.  One is to allow automatic discovery of federated partners, where the Lync servers negotiate with each other based on published DNS records and other settings; the other is to log into the meeting using the Silverlight client.

There’s just one problem.

If you have the Lync desktop client on your PC and you try to visit an external meeting link, such as https://lync.contoso.com/meet/username/EJHFSN, and you are not a part of the Contoso organization and do not have federation set up (or do not allow automatic discovery of federated partners), it will fail with a useless numeric error code that means absolutely nothing.  Since the desktop client does not allow you to log on anonymously, it will never fall back to guest logon, even if the meeting organizer has enabled it for the meeting.

TechNet to the rescue!  All you have to do is append “sl=1” to the end of the query string of the URL, so that you visit https://lync.contoso.com/meet/username/EJHFSN?sl=1 and then it will force the Silverlight client, which will allow you to log on anonymously.  In this scenario, Lync meetings then behave basically like WebEx or GotoMeeting, where external participants need a browser plugin to connect to the meeting.  Perfect.  That’s exactly what I want.

Again, one problem.  Imagine trying to get your entire staff to always remember to append that to the meeting link when they set up external meetings.  Despite best efforts, it’s just not going to happen.  Your CFO has better things to do and she will forget, because that is human nature.  And, really, this is Microsoft’s shortsightedness here.  You can read my comment at the TechNet article linked above.

Thanks for the "?sl=1" trick. That did the trick for me. But explaining this to my users is going to be a pain. Imagine me in the CFO's office after months of extolling the virtues of Lync and how we even got rid of our WebEx subscription because, heck, Lync does meetings too! But suddenly, a meeting participant is also using Lync at his company but we have no federated relationship with each other, so when we click on each other's meeting links it just fails with a terrible numerical error. "I thought this thing could replace WebEx," the CFO bellows, scowling at me in disdain. "Oh, it can," I reply, "just make sure you modify every meeting invitation so that the URL has ?sl=1 at the end of it!" Yea, that will go over well.

Thankfully, there is a workaround.  And due to the way Lync is designed, it’s really not difficult to set up.

When you set up your Lync websites, it creates an internal and external site.  The external site by default uses the non-standard ports 8080 and 4443.  The Lync best practice is to use a Reverse Proxy or firewall port forwarding rules to send traffic destined for the normal web ports to the Lync alternate ports.  Your internal users, on the other hand, use ports 80 and 443 as normal, directly communicating with the Lync server.

Reverse proxies can also be set up to modify URLs before the connection is sent to the backend.  This is known as URL Rewriting.  In this case, you want a URL rewrite rule that will modify connections to /meet/ such that ?sl=1 is always added to the end.  I found from trial and error that you get the best results by only modifying the /meet/ part of the above URL (assuming you are using Simple URLs like that).  So I set up my topology so that 8080 and 4443 were exposed directly to the outside so I have an option to bypass the reverse proxy once the connection is established.  This is all completely secure and transparent to the end user.  We’re not bypassing the firewall, just the reverse proxy’s URL rewriting when it is not needed.

So the final topology looks like this.  (The Lync Front End is either your Edge server or your single server depending on the size of your deployment.)

Lync Diagram

From outside my firewall, ports 80, 443, 8080, and 4443 are all open.  If you connect to 80 or 443, you are sent to the reverse proxy.  If you go to 8080 or 4443, you are sent directly to Lync.

To prepare Lync for this configuration, I first edited the topology so that the published ports match the internal ports (8080 and 4443), which allows us to bypass the reverse proxy when it is not needed.


Whenever you publish your topology, remember to rerun the Lync setup wizard.

The reverse proxy can be easily created using IIS.  In fact, you can set it up on your Lync edge server if you want.  It depends on your workload.  For the purposes of this post, we’ll assume you are setting it up on the same server.   Note: Lync will stop any non-Lync website in IIS whenever you publish your topology and rerun setup, so be prepared for this!

In order to configure the reverse proxy, you need to install the Application Request Routing and URL Rewrite extensions for IIS.  These both should already be installed if you are using your Lync server.

Enable the Application Request Routing.  This is done at the server level.  Click on your IIS server in the IIS manager, double click Application Request Routing Cache, then click on Server Proxy Settings.  Check Enable proxy and keep everything else at defaults.
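If you prefer to script that server-level change instead of clicking through the IIS manager, the same proxy flag can be flipped from an elevated command prompt.  A sketch, assuming a default IIS install path; verify the section name against your version of ARR:

```shell
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/proxy /enabled:"True" /commit:apphost
```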


Create a new website.  Give it a folder path that is not shared with any other site (i.e., don’t reuse C:\Inetpub\wwwroot).  The bindings should be whatever the external IP address is mapped to through your firewall.  Bind HTTP and HTTPS on the default ports.  Make sure you use a different internal IP address than your Lync internal website so there isn’t a collision.  You don’t want internal users going through the reverse proxy.

Go into the site’s URL Rewrite section and create a dummy rule.  We are going to overwrite this later, so it doesn’t matter what it is.  We just want to create a web.config that we can edit by hand.

Edit the web.config “rules” section for the reverse proxy site.  Now here is where the fun begins.  We are going to modify any request that goes to /meet/ so that it has sl=1 at the end.  I created a rule for both HTTP and HTTPS since I am using default Lync ports (non-standard web ports).  There is also a condition that if the query string already contains sl=, it will not modify it.  Underneath the /meet/ rewrites are the default rules that just pass the request through unmodified to the correct ports.  Obviously, URLs, RegEx, ports, and so on, will all need to be modified to match your environment.

<rule name="ReverseProxyInboundRule1" stopProcessing="true">
  <match url="^meet/(.*)" />
  <conditions>
    <add input="{QUERY_STRING}" pattern="(.*)sl=(.*)" negate="true" />
    <add input="{CACHE_URL}" pattern="^(https)://" />
  </conditions>
  <action type="Rewrite" url="{C:1}://lync.contoso.com:4443/{R:0}?sl=1&amp;{QUERY_STRING}" appendQueryString="false" logRewrittenUrl="true" />
</rule>
<rule name="ReverseProxyInboundRule2" stopProcessing="true">
  <match url="^meet/(.*)" />
  <conditions>
    <add input="{QUERY_STRING}" pattern="(.*)sl=(.*)" negate="true" />
    <add input="{CACHE_URL}" pattern="^(http)://" />
  </conditions>
  <action type="Rewrite" url="{C:1}://lync.contoso.com:8080/{R:0}?sl=1&amp;{QUERY_STRING}" appendQueryString="false" logRewrittenUrl="true" />
</rule>
<rule name="ReverseProxyInboundRule3" stopProcessing="true">
  <match url="(.*)" />
  <conditions>
    <add input="{CACHE_URL}" pattern="^(https)://" />
  </conditions>
  <action type="Rewrite" url="{C:1}://lync.contoso.com:4443/{R:1}" appendQueryString="true" logRewrittenUrl="true" />
</rule>
<rule name="ReverseProxyInboundRule4" stopProcessing="true">
  <match url="(.*)" />
  <conditions>
    <add input="{CACHE_URL}" pattern="^(http)://" />
  </conditions>
  <action type="Rewrite" url="{C:1}://lync.contoso.com:8080/{R:1}" appendQueryString="true" logRewrittenUrl="true" />
</rule>

If you attempt to connect to a meeting externally now, this is what happens.

  1. Browser initiates connection to https://lync.contoso.com/meet/username/EJHFSN.
  2. Reverse Proxy receives the request, adds sl=1 to the query string, and passes the request to the external Lync website at https://lync.contoso.com:4443/meet/username/EJHFSN?sl=1.
  3. The Lync server replies and tells the browser to load the Silverlight Lync client, which then connects directly to the Lync web services (bypassing the Reverse Proxy) at https://pool1.lync.contoso.com:4443/Reach/Client/WebPages/ReachClient.aspx.
  4. The external user can join as an anonymous guest, or log on using credentials from the meeting organizer’s domain, if they have them.  The desktop Lync client never launches!

Hopefully in the future Microsoft will fix the desktop client to allow it to log on anonymously to external meetings and also give us a checkbox in the Lync Server Control Panel that allows us to force all external connections to the Silverlight client (for legacy organizations that might connect to ours).


Elfen Lied: One Messed Up (But Awesome) Anime

I just finished watching the Elfen Lied anime, which is based on a fairly long-running manga.  The anime was only 13 episodes long, with one 30-minute OVA produced afterwards.  This anime has been out for a while, so I’m a bit late to the party in commenting on it, but it was such a shocking series that I felt it deserved its own post.

The anime opens with Lucy, a young girl with horns, pink hair, and strange powers, walking fully nude through a building that looks like a cross between a bomb shelter and an insane asylum.  As she walks down the hallways, she rips guards and scientists to shreds using powers that are explained later in the series.  Blood, bones, guts, and organs splatter everywhere.  This scene sets the tone for the rest of the anime.  I’ve read critics of the series decry this portrayal as over-the-top and gratuitous, or as simple sex-and-violence fan service.

If that’s all you watch before turning it off, I would agree.  But as you continue through the first episode, the mystery of what Lucy is, what that facility was, and how it all relates to the college-aged teens she meets who take her in, is absolutely engrossing.  And that scene is not the end of the extreme violence and nudity.

As the first few episodes progress, you begin to see the reason behind the over-the-top approach to this anime.  Nearly all of the characters are absolutely tortured.  Without giving away too much of the plot, you have one character who witnessed his sister and father brutally murdered before his eyes at a young age, another who was sexually molested by her stepfather while her mother accused her of lying and eventually abandoned her, one whose limbs are mutilated and who is left for dead, and several who are tortured repeatedly in gruesome and inhumane ways by authority figures.  The nudity and blood are simply a way to reinforce the emotional twistedness of the extreme horrors the characters have been through.

Nudity in most movies or series is usually a precursor to a love scene.  In this anime it’s more closely tied to something bad, usually quite violent, happening to a character.  It’s actually quite a brilliant mind screw and a good way to get you to empathize with the torture the character is experiencing.

So, yea, not for kids, this one.

My only complaint is that while the first four episodes and last four episodes were strong, the middle was very slow.  This is odd, since the anime is based on a rather long-running manga.  You would think they had plenty of material to work with, but it’s almost like the producers and writers lost their way in the middle.  The same fights and arguments were fought and argued again and again.  Nothing new really happened.  And the conversations sounded very forced.  I’m not sure why this happened.  Maybe it was a brief reprieve before the torture started again?  lol.  Also, without giving away the ending, I will just say that it had a satisfying conclusion, but ended with the typical number of loose ends that most anime series end up having.

The OVA was apparently never released in the USA, but it’s easy to find on the internet with subtitles.  It takes place in the middle of the series, is rather lighthearted by comparison, and answers a few questions that were not otherwise resolved.  Honestly, they should have cut the middle four episodes down to two and folded the plot of the OVA into the middle; the pacing would have been a lot better.

So, if you can tolerate watching something that will mess with your emotions and shock you with gruesome violence and gratuitous nudity, this is an anime I have to highly recommend.  The series is on Netflix.

On a 1-10 scale, I give this anime an 8.  Now I need to go find something to watch on Netflix that is more uplifting and less disturbing!


Finding Stale Recovery Points in DPM 2010 Using PowerShell

I recently had a failure on a disk I was using for my Microsoft Data Protection Manager 2010 (DPM) protection groups.  Unfortunately, it wasn't a recoverable error, so I had to remove the disk from the pool and reallocate the affected data sources.  DPM is supposed to error out data sources when it can't perform a backup, but I've found this is not always the case.  Only about 25% of the data sources on that disk ever errored out.  The rest showed the happy green "OK."  Looking through the protection groups, I noticed that any data source that was no longer protected still showed the last recovery point from when it last succeeded (which was days earlier).  When I tried to run a manual express full backup at that point, I got an error stating that the disk was missing and the backup could not be performed.  However, DPM still showed the green "OK" symbol next to the data source.

I have several hundred protected data sources and I couldn't go through them one by one, so I whipped up a PowerShell script to show me stale DPM data.  Basically, it enumerates all the data sources and compares the latest recovery point date with 24 hours ago.  If it's older than that, it outputs the protected resource so I can remove it and add it back to the protection group with a fresh volume.

The biggest gotcha I ran into is that a lot of properties returned by Get-DataSource are asynchronous, which is annoying when you are scripting.  Luckily, a TechNet blogger had a solution to that problem.  His script contains an error (a missing parenthesis), though.  I notified him so hopefully it will get fixed.  I have confirmed my script below works.

This script is not just useful for finding stale data due to a failed disk.  It could also be adapted to notify you when failed backups are close to surpassing your defined Recovery Point Objective (RPO).  DPM's internal notification system is very noisy, since it's not uncommon for a backup to fail and then recover on its own very quickly.  If you manage a large DPM deployment, you are probably used to hundreds or thousands of emails awaiting you after the weekend.  I'm not using it that way yet, but I think I just might.
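As a sketch of that adaptation: once the script below has populated $ds, something like this could email a summary of stale data sources.  The SMTP server and addresses are placeholders; Send-MailMessage is a standard cmdlet in PowerShell 2.0 and later.

```powershell
# Hypothetical RPO alert: collect stale data sources, then email a summary
$stale = @($ds | Where-Object { $_.LatestRecoveryPoint -lt (Get-Date).AddDays(-1) })
if ($stale.Count -gt 0) {
    $body = ($stale | ForEach-Object {
        "$($_.ProductionServerName) : $($_.LogicalPath) : $($_.LatestRecoveryPoint)"
    }) -join "`n"
    Send-MailMessage -SmtpServer "smtp.example.com" `
        -From "dpm@example.com" -To "admin@example.com" `
        -Subject "DPM: $($stale.Count) data sources past 24-hour RPO" -Body $body
}
```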

The output of my script looks like this:

$ds[133] System State and BMR : server1.avianwaves.com : Computer\System Protection

The $ds variable is the array that stores all the data sources used in the script.  The 133 is the index, so you can quickly query more information about the data source if you need to.  Just type "$ds[133]" at the PowerShell prompt to do so.  Immediately after the variable name is the Protection Group.  Following that is the server holding the protected resource.  Then the last part is the protected resource itself.
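Since $ds is just an array, you can dig further into any flagged entry right at the prompt.  For example, using only the properties the script itself relies on (use Get-Member to discover the rest):

```powershell
# Inspect one flagged data source in detail
$ds[133] | Get-Member    # list every property and method on the data source
$ds[133] | Format-List ProductionServerName, LogicalPath, LatestRecoveryPoint, State
```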

I hope this helps somebody out there!

# Refresh the datasource metadata. Code taken from: http://blogs.technet.com/b/dpm/archive/2010/09/11/why-good-scripts-may-start-to-fail-on-you-for-instance-with-timestamps-like-01-01-0001-00-00-00.aspx
Disconnect-DPMserver # clear object caches
$ds = @(Get-ProtectionGroup (&hostname) | foreach { Get-Datasource $_ })
$ds = $ds | ?{$_} # remove blanks
for ($i = 0; $i -lt $ds.count; $i++) { [void](Register-ObjectEvent $ds[$i] -EventName DataSourceChangedEvent -SourceIdentifier "TEV$i" -Action { $global:RXcount++ }) }
# touch properties to trigger events and wait for arrival
$ds | select latestrecoverypoint > $null # do not use [void] because it does not trigger the events
$begin = get-date
$m = Measure-Command { while (((Get-Date).Subtract($begin).Seconds -lt 30) -and ($RXcount -lt $ds.count)) { sleep -Milliseconds 100 } }
if ($RXcount -lt $ds.count) { write-host "WARNING: Fewer events arrived [$RXcount] than expected [$($ds.count)]" }
Unregister-Event *

# Look for stale data
$staleDate = (get-date).AddDays(-1) # 24 hours old is our limit
$count = 0
foreach ($dsi in $ds) {
    if ($dsi.LatestRecoveryPoint -lt $staleDate) {
        write-host "`$ds[$count] $($dsi.ProtectionGroup.FriendlyName) : $($dsi.ProductionServerName) : $($dsi.LogicalPath)"
    }
    $count++
}


Microsoft Data Protection Manager: Start a Consistency Check on all Inconsistent Replicas with PowerShell

I'm a big fan of MS DPM and have been using it for years.  The one place where it really suffers, though, is the management console.  Even in DPM 2010, there are very few things you can do as a batch, which is a common need for DPM administrators.

From time to time, your server may accumulate a large number of inconsistent replicas.  This is usually due to something outside of DPM's control.  For example, one DPM server I manage is a VM.  If the host server suspends the VM, then resumes it while the target server it was backing up is suddenly offline (such as during a reboot cycle from updates), you can get inconsistent replicas.  There are numerous other scenarios, but you get the drift.  When this happens, the management console forces you to right click on every single alert individually and select "Run a synchronization job with consistency check."  (You can also just wait for the next automatic consistency check interval, but that's not always ideal.)

Luckily, PowerShell provides a better way to kick off a manual consistency check.  Behold!  Here is my DPM PowerShell script that kicks off a consistency check on every inconsistent replica.  This is tested with DPM 2010, but should also work with DPM 2007.

$pg = Get-ProtectionGroup
foreach ($pgi in $pg) {
    $ds = Get-Datasource $pgi
    foreach ($dsi in $ds) {
        if ($dsi.State -eq 'Invalid') {
            Start-DatasourceConsistencyCheck $dsi | Out-Null
            $dsi.ProductionServerName + " :: " + $dsi.DisplayPath
        }
    }
}

