Zabbix: Adding a datastore low space discovery trigger

It’s been a LONG time to say the least since I posted last. Once every 7 years seems like a good rate. Anyway, life has taken a lot of turns, and I manage and tweak many different kinds of systems now. We are in the middle of migrating from Icinga/Nagios to Zabbix. It’s definitely been a bit of a learning curve. Today I had to figure out how to add alerts on my VMware datastores so I know when space is running out.

Start by going to Data collection -> Templates -> VMware. Click on “VMware” and then go to the Macros tab. Add the following two macros:
Macro: {$DS.CRIT}
Value: 5
Description: Datastore free % critical point
Macro: {$DS.WARN}
Value: 10
Description: Datastore free % warning point

Click on Update.

Next click on “Discovery rules” along the top, then find the entry for “Discover VMware datastores.” Click on Trigger prototypes for that entry. In the top right of the screen, you can now choose “Create trigger prototype.” I created two prototypes: one for a warning and one for a critical notification. You can create as many as you want.

Fill in the following information:
Name: {#DATASTORE}: Disk space is CRITICAL
Event name: {#DATASTORE}: Disk space is CRITICAL {ITEM.LASTVALUE1}
Operational data: {ITEM.LASTVALUE1}
Severity: High
Expression: last(/VMware/vmware.datastore.size[{$VMWARE.URL},{#DATASTORE},pfree])<{$DS.CRIT}

On the “Tags” tab, enter the following:
Name: component
Value: datastore
Name: datastore

To create the second one for the warning, clone the trigger prototype (using the Clone button at the bottom of the trigger prototype, as seen in the screenshots), then change the names from CRITICAL to WARNING, change {$DS.CRIT} to {$DS.WARN} in the expression, and click Update.
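For reference, the finished warning expression is just the critical one from above with the macro swapped:

```
last(/VMware/vmware.datastore.size[{$VMWARE.URL},{#DATASTORE},pfree])<{$DS.WARN}
```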

If you don’t want the “Warning” trigger to show up when you are critical, open the warning trigger prototype, then go to “Dependencies” along the top. Click on “Add prototype” and select your critical prototype, then update.

You should now be all set!

Filed under: Zabbix | Posted on June 5th, 2023 by CharlieMaurice | No Comments »

Querying Dell Monitor Information in SCCM

Mike Terrill has a most excellent blog post on how to get Dell BIOS/UEFI information into WMI/SCCM. It involves setting up Dell Monitor, and then using SCCM to import a custom hardware class. You can find all the info here:

If you would then like to create a collection membership query on that info, here is an example checking whether a machine is set to BIOS or UEFI mode. A “CurrentValue” of 1 is BIOS; 2 is UEFI. So in this example, we want to return all machines with UEFI enabled.

First is the WQL Query:

select SMS_R_System.ResourceID, SMS_R_System.ResourceType, SMS_R_System.Name, SMS_R_System.SMSUniqueIdentifier, SMS_R_System.ResourceDomainORWorkgroup, SMS_R_System.Client
from SMS_R_System
inner join SMS_G_System_DCIM_BIOSENUMERATION on SMS_G_System_DCIM_BIOSENUMERATION.ResourceId = SMS_R_System.ResourceId
where SMS_G_System_DCIM_BIOSENUMERATION.AttributeName = "Boot Mode" and SMS_G_System_DCIM_BIOSENUMERATION.CurrentValue = "2"
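If you want to spot-check a single machine before building the collection, you can query the same class locally. This is just a sketch: root\dcim\sysman is the namespace Dell’s monitoring tool normally registers its classes under, so verify it matches your install before relying on it:

```
Get-WmiObject -Namespace "root\dcim\sysman" -Class DCIM_BIOSEnumeration |
    Where-Object { $_.AttributeName -eq "Boot Mode" } |
    Select-Object AttributeName, CurrentValue
```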

Here is a screenshot if you want to build it:


Filed under: Uncategorized | Posted on May 26th, 2016 by CharlieMaurice | No Comments »

Fully automate your Build and Capture using MDT

I recently gave a few presentations on how to automate your build and capture process.  Here is an outline of how to complete this yourself, as well as a few gotchas to watch out for along the way.  This will mostly be presented for deploying Windows 7 x64, but it applies to any OS; just change the patches to the correct versions for that OS.

Credit for most of this goes to Mikael Nystrom (a.k.a. the Deployment Bunny).

Some considerations before we start.  Use any old desktop/laptop.  The only requirement is that it runs Windows with the Hyper-V role installed (client or server versions of Windows both work).  Currently, the scripts are all set up for a single host to do everything.  I’m working on making them a little more robust and on farming out the Hyper-V creation to a server or a different box.  When I am done, I will post it for everyone.

Things Needed:
Windows 10 ADK 1511:
MDT 2013 Update 2:
Windows WDK:
Image Factory Scripts:
Custom customsettings.ini and bootstrap.ini:!103876&authkey=!ADRjqAICrTX7IB8&ithint=file%2czip
c:\windows\system32\vmguest.iso from a Server 2012 or Windows 8 system with the Hyper-V role installed

Extra stuff to download if you are going to build/capture Windows 7/Server 2012/R2:
Clean up the installed updates (this applies to all OSes, but since we only install one update for Win10, it’s not as crucial):
Update for high cpu/memory usage on WU scan in Win7:

Bonus for Win10:

Now we will start the actual process.

Install the ADK choosing Imaging Tools and WinPE.
Install the WDK.
Install MDT.

Open the MDT console, and go to Deployment Shares.  Right click and choose “New Deployment Share.” Run through the wizard and accept all the defaults.  If you want to change the share location or share name, go ahead; everything here will reference the defaults, but it’s easy to adjust for something else.

By default, only admins have access to the deployment share.  You can choose your own security, but I suggest a regular user for the process.  Create a normal user on the machine.  Then right click on c:\deploymentshare (in Windows Explorer) and go to Properties. Give your user Full Access to the directory. On the Sharing tab, click Advanced Sharing, then Permissions, and give your user full access there too.  Click OK all the way out. A small note here: if you started from an earlier version of MDT, create a new share.  In earlier versions, the deployment share was open to the world, so if you used credentials in your sequences (and you probably did), anyone could view them.

Now extract the Image Factory Scripts to the deployment share. The stuff under DeploymentShareFolder gets put at the root of the deployment share (i.e., c:\deploymentshare\Extra).  I like to put the ImageFactoryV2Scripts in the root also so I know where they are, but you can put them anywhere.  Go to c:\deploymentshare\extra\KVP and read the readme.txt.  You need to copy devcon.exe out of the WDK (C:\Program Files (x86)\Windows Kits\10\Tools\x86).  After you copy it out, you can uninstall the WDK if you want to save some space.  Get the other files from vmguest.iso; they are in “support\x86\”.

Open up c:\deploymentshare\extra\deploy\scripts and edit the LoadKVPinPE.vbs file.  On line 9, remove the /r switch from this line:

oUtility.RunWithConsoleLogging "\KVP\devcon.exe /r install \KVP\wvmic.inf vmbus\{242ff919-07db-4180-9c2e-b86cb68c8c55}"

It should end up as:

oUtility.RunWithConsoleLogging "\KVP\devcon.exe install \KVP\wvmic.inf vmbus\{242ff919-07db-4180-9c2e-b86cb68c8c55}"

Save the file.


Open up c:\deploymentshare\control.  Copy in the bootstrap.ini and customsettings.ini files.  Edit them to fit your environment (username/password of the standard user you created, image name if you want to change it, share location, etc).

Now go back to the MDT console.  Right click on your deployment share and choose properties.  Go to the Windows PE tab, and next to “Extra directory to add” enter C:\DeploymentShare\Extra. Click OK.


  • Add your applications. This would be the WU cleanup script, Office 2016, etc.
  • Add your operating system(s).
  • Under Packages, create a new folder called Windows 7 x64 and add the cleanup tool hotfix and the high memory usage hotfixes under it.  Remember that when you import your packages, they get copied to the correct place in your deployment share, so don’t try to be too sneaky and put them there in advance.
  • Expand Advanced Configuration, then go to Selection Profiles. Create a new selection profile; we will name it Windows 7 x64. Choose the Windows 7 x64 package folder, then click Next until done.

Almost done!  Right click on your deployment share in MDT and choose “Update Deployment Share.”  Select “completely regenerate the boot images,” and go through the wizard.  It will take a while to build your WinPE environment and create an ISO.

When that is finished, go to Task Sequences, and create a new folder called REF. Inside that folder, create a new task sequence.  Give it a task sequence ID that is semi-meaningful, for example WIN7BUILD.  This value will be the VM name and the image name if you kept the defaults. The task sequence name should be meaningful to you. Click Next.  Choose Standard Client Task Sequence, then Next all the way through.  If you set the admin password in customsettings.ini, you can say so on the password step.

Open the task sequence:

  • Under PreInstall, go to the Apply Patches step. Choose Windows 7 x64.
  • Under StateRestore, go to the Windows Update (Pre-Application Installation) step. On the options tab, uncheck the disable box.
  • With the previous step still selected, go to Add -> General -> Restart Computer
  • Choose the Install Applications step right after the restart computer you just added.  Choose install a single application, then browse to your office install.
  • Select the Windows Update (Post-Application Installation) step.  On the options tab, uncheck the disable box.
  • With the previous step still selected, go to Add -> General -> Restart Computer.
  • Right click on Windows Update (Post-Application Installation) and choose copy. Right click on the previously added restart computer and choose paste.
  • Add another reboot after the last windows update step.
  • Click on the Apply Local GPO Package step.  Click on Add -> New Group. Select the newly created group and rename it Cleanup Before Sysprep.
  • Go to Add-> General -> Restart Computer
  • Select the Restart Computer we just added, then Click Add -> General -> Install Application
  • Select the Install Application step, choose single application and then select the cleanup windows application.
  • Go to Add -> General -> Restart Computer


We are done with the setup! Go into c:\deploymentshare\imagefactoryv2scripts (or wherever you put those files) and edit the XML file to fit your setup.  The network switch name comes directly from the Hyper-V switch name.

Open up a PowerShell prompt, cd to c:\deploymentshare\imagefactoryv2scripts, then run Import-Module .\ImageFactoryV2.psm1.  Next, run New-AutoGenRefImages.  This starts the build and capture process!  It will take a LONG time with Win7, as there are a ton of updates.  The last time I ran it, it took 16 hours to finish Win7 x64 with Office 2016.  After it finishes, you can run Remove-AutoGenRefImages to delete the VM and the VM files.
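Put together, the PowerShell session looks like this (module and function names as shipped in the Image Factory scripts; the path assumes the default share location used throughout this post):

```
Set-Location C:\DeploymentShare\ImageFactoryV2Scripts
Import-Module .\ImageFactoryV2.psm1
New-AutoGenRefImages      # kicks off the build and capture
# ...many hours later, once the capture is done:
Remove-AutoGenRefImages   # deletes the VM and its files
```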

To take this to the next level, create a scheduled task that runs these commands.  If you wanted, you could also have them auto-import the image you capture into SCCM.

Filed under: Uncategorized | Posted on March 28th, 2016 by CharlieMaurice | No Comments »

Applications missing in Software Center

Recently we began seeing a problem where applications were deployed to a machine, but they were not displaying in Software Center.  Firing up the ConfigMgr Toolkit, I saw the applications as applicable, but they weren’t being evaluated. If I created a new deployment with the exact same settings, it would work just fine.  After lots and lots of digging and searching, I finally stumbled upon a solution here:

If you want to see if this is your problem, run the PowerShell script on an affected client. If it returns a non-zero value, this is your problem.  By creating a CI, you are able to find all clients with the issue. The script does require at least PowerShell 3.0 (but if you are deploying something, you really should be pushing WMF 4.0, and here is how to install .NET Framework 4.5 before you push WMF 4.0).  After creating a baseline with auto-remediate, I was able to fix all the computers in our environment.

Filed under: Microsoft, SCCM | Posted on January 27th, 2016 by CharlieMaurice | No Comments »

Cleaning up the SXS folder during Build and Capture TS

MVP Mikael Nystrom posted a great article on how to clean up the SXS folder during your Build and Capture Task Sequence.  This works for Win 7 SP1, Win 8/8.1, Server 2008 R2 and Server 2012/R2.

Find the article here:  Nice to Know – Get rid of all junk before Sysprep and Capture when creating a reference image in MDT 

Filed under: Uncategorized | Posted on June 12th, 2014 by CharlieMaurice | No Comments »

SCCM PXE Boot Fails 0xc0000001

I was banging my head against a wall for a bit with some new machines (Dell OptiPlex 3020s) we got in that wouldn’t PXE boot.  They would start, then error out with 0xc0000001, “A required device isn’t connected or can’t be accessed”.  I knew I had the right drivers installed in my PXE boot media, and had no idea why it wouldn’t work.  After a lot of searching, I found out why.  Most people change a registry key to make PXE booting faster.  This key controls the packet size for TFTP transfers.  I never had a problem with Intel NICs, but these were the first computers with Realtek NICs, and it was a problem.  After adjusting the value of the registry key back down, I was able to get them going again.  Here is the registry key:
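The screenshot with the key didn’t survive, but the setting most people reference for the WDS/SCCM PXE TFTP block size is the following; verify the path against your own distribution point before changing it:

```
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSTFTP
    RamDiskTFTPBlockSize (REG_DWORD)
```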


Most people say to set that to 16384 (decimal) for the fastest PXE boot time (which I had done).  That’s why I was having problems.  I had to lower it to 4096 in order for it to work for me.

Hopefully if someone else runs across the 0xc0000001 error, this will help them.  There isn’t much info out there on it.

Filed under: Microsoft, SCCM | Posted on June 5th, 2014 by CharlieMaurice | 3 Comments »

Gleanings from Johan at TechEd 2014

I attended a few of the OSD deployment sessions with Johan Arwidmark at both the main TechEd conference, and also at TechEd Day 5 put on by HASMUG.   Here are some of the things I thought were good to remember.  I apologize for the text wall, but it’s all good info.


SMSTS.log has a 1 MB size limit and 1 rollover file by default.  The problem is that there is usually more like 4 or 5 MB of data written to this log during a deployment.  To change this behavior, create a text file called SMSTS.ini with the following suggested values:
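The actual values were in a screenshot that didn’t make it into this post, but a commonly cited SMSTS.ini that hits a 10 MB log size looks like this; double-check the key names against current ConfigMgr documentation before using it:

```
[Logging]
LOGMAXSIZE=10485760
LOGMAXHISTORY=1
DEBUGLOGGING=1
```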


That will give us a max log size of 10 MB instead of 1 MB.  You can change the logging levels or add other values if you need them.  Next you will have to inject the file into your boot image.  Jump down to the section called “Injecting the SMSTS.ini file into the Boot Image” on this page:


Next thing: installing DaRT from the MDOP is well worth it.  It allows you to remotely see what is happening in your TS while you are deploying, without sitting at the machine.  Unfortunately, this only applies while running the boot image, but it’s still a nice tool to have in your pocket.  The bonus is that when you turn on monitoring in the MDT console, you can now see what is happening in your OSD TSs.  Johan has some instructions on his website:


For OSD troubleshooting, finding the right log is sometimes a pain.  Here is a refresher on the logs in the order they are used:
1. x:\windows\temp\smstslog
2. c:\_smstasksequence\logs
3. c:\windows\ccm\logs


CMTrace is actually included in the MDT boot image.  It’s located at x:\sms\bin\x64.


In your boot image properties, make sure you enable it for PXE (it isn’t enabled by default, and a lot of times you just forget).


This one surprised me, but offline servicing of your install images isn’t recommended.  Things like frameworks can’t be injected, but sometimes try to anyway, and that breaks your image.  It’s better to just keep a build and capture and rerun it as needed.


Inside the ztisccm.wsf file, you will find this line: wscript.sleep 30000.  It was put in for one specific hardware model about 6 years ago (confirmed by the person who actually wrote the script).  It is safe to lower the value to around 5000.


How many times have you been trying to fix a problem during your TS and missed the 15-minute reboot-on-failure window?  Here is how to fix it: at the beginning of your TS, add a new Task Sequence Variable.  Set the variable to SMSTSErrorDialogTimeout and the value to 86400.  This gives you 24 hours to come back and figure out what is going on before it reboots.


There was a bunch more, but those were the top ones for me.  I wouldn’t hesitate to watch his sessions.  Also, make sure you check out his website:

Filed under: Microsoft, SCCM | Posted on May 21st, 2014 by CharlieMaurice | No Comments »

TechEd 2014 Week in Review

Most of the sessions I went to were decent.  I think there were only 1 or 2 time slots where I couldn’t find anything at all, and I used that time to do hands-on labs.  Here are the highlights of my week.

The first session I was extremely impressed with was the one on SQL for the non-DBA.  Given that nearly every Microsoft product uses SQL, you would think that most of us techies would understand it better.  But call it good design: SQL is usually pretty rock solid, and something you rarely think about.  This class gave a really good overview of the basics of SQL, and some basic considerations for a better-performing instance.  If I had to choose the top piece of info, it would be learning about Ola Hallengren’s super popular SQL maintenance scripts ( ).  I highly recommend watching this session when it’s available.

The next good session I went to was on Group Policy ( ).  A couple of tidbits: disabling an unused section (either computer or user policy) does nearly nothing for speed.  The second thing was something I had noticed, but never looked into.  On Windows 8, by default, login scripts are delayed and don’t run until 5 minutes after you log in, so that your computer has more time to get started before they run.  This amount of time can be changed; I just didn’t know the setting was there.

The SCCM community tools session was fantastic also ( ).  They basically talked about all the things available from community members that will make your job easier.  Learning about the PowerShell App Deployment Toolkit ( ) was a game changer.  I had been experimenting with a similar tool from Coretech, but this tool was even better.  It allows you to force a program to close before the application deployment starts.  This is really good for things like updating Java, which requires your browser to be closed.  I highly recommend checking it out.

My top mentions in the community tools session were Ola Hallengren’s SQL maintenance scripts (same as above), the automated documentation tool ( ), and the Configuration Manager Support Center (from Microsoft).  It is well worth watching this session.

The Johan Arwidmark sessions are always filled with incredible knowledge, and this year was no different. I went to a couple of his during TechEd, and then also to the one he had on TechEd Day 5, put on by the Houston Microsoft User Group (HASMUG).  Unfortunately for everyone who wasn’t there, the one on Day 5 was the best.  They tried to record it, but I don’t think it worked.  It kind of rolled up all the best tricks from his sessions all week long in one place.  I will probably be doing a separate blog post later on just some of the things he talked about.

My overall thoughts on the conference are that it definitely was no MMS, but for me personally, it was still worthwhile.  I met a ton of new people, and really, talking to those people is sometimes better than sessions.  You can find out what works and what doesn’t in the real world, and what to be careful about.  The biggest bummer was there weren’t a ton of MMS-style deep dives, which were always popular, even though they were promised to us.  Fortunately for me, I do more than just SCCM.  For those people who do SCCM for their entire job, I feel bad, as they didn’t have many choices.  Hopefully that is fixed for next year.  I talked to someone who helped pick which sessions were approved, and he claimed that all the SCCM ones submitted had poorly written abstracts, and that is why they weren’t chosen.  That may very well be true, but then it would have been nice for them to go back and engage with those people and ask for better work instead of just dismissing them.  Instead, they made sure that past MMS attendees were going to write off TechEd for next year (if they hadn’t already) and become even more bitter.  If they truly wanted to integrate, then some more attention could have been paid.  Some people feel they weren’t chosen to speak because there wasn’t enough drinking of the Kool-Aid in them.  I’m sure, as always, the reality is somewhere in the middle.

The second issue was location.  The convention center was very small, and some sessions were full so you couldn’t see them live.  It also was like 30 minutes from the airport, in a portion of town with nothing else to do.  Not a very good way to highlight a location.  Luckily tons of vendors stepped up and had after conference parties for something to do.  Thanks to them for all the good times.

One of the best parts of MMS/TechEd is always catching up with all your friends from previous years (and of course meeting new people too).  This year was no different.  Unfortunately, only 2 people from that group (besides me) returned because of the combining of events.  We still managed to have a good time, and helped a few other people have good times too.  So thank you to all those people; without you, MMS/TechEd wouldn’t be the same!

Filed under: Uncategorized | Posted on May 19th, 2014 by CharlieMaurice | No Comments »

“Windows could not configure one or more system components” error during unattended phase OSD


Ran into a new error today, and it took some time to figure out the solution.  We had a new Dell come in with an SSD (unknown at the time; nobody mentioned it to me) that was getting an error during the unattended phase of Windows Setup.  The full error was:

Windows could not configure one or more system components.  To install Windows, restart the computer and then restart the installation.

Since I had just installed the newest Dell driver cab into our SCCM environment, I assumed it had something to do with that.  But after finally disabling all drivers, it still happened.  I decided to pull the HDD to look at the log files, and that’s when I found out it was an SSD. It was the first SSD in our environment, so we hadn’t run into the issue before.  I attached the drive to another computer, and pulled the log files from c:\windows\Panther.  Inside setupact.log, I found this error:

Error                 CBS    Startup: Failed to process advanced operation queue, startupPhase: 0.  Primitives are still pending. [HRESULT = 0x80004005 - E_FAIL]

Doing some searching, I found a Dell webpage ( ) which mentioned that the Kernel-Mode Driver Framework was out of date in our base image, and an update needed to be injected to fix it.

So I downloaded the hotfix, extracted it, then injected it into our WIM.  Here are the commands to do it (this is for the x64 version; if you need x86, replace the x64 in the filename with x86):

#Make the directory to mount our WIM to:
mkdir C:\Mount

#Mount the image located at index 1 (use straight quotes, and substitute your WIM path)
dism.exe /Mount-Wim /WimFile:"WIM file location" /Index:1 /MountDir:C:\Mount

#Inject the hotfix into the image; substitute your actual path to the cab
dism.exe /Image:C:\Mount /Add-Package /PackagePath:C:\Temp\kmdf-1.11-Win-6.1-x64\

#Finally, unmount the image and save the changes
dism.exe /Unmount-Wim /MountDir:C:\Mount /Commit
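If you are not sure which index holds the image you want to mount, DISM can list what’s in the WIM first (standard DISM syntax, nothing specific to this hotfix):

```
dism.exe /Get-WimInfo /WimFile:"WIM file location"
```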

Refresh the installation image in SCCM, and try it again.  That fixed the issues for us.

Filed under: Microsoft, SCCM | Posted on April 23rd, 2014 by CharlieMaurice | No Comments »

Restart a service in SCOM

Recently I’ve been playing with SCOM.  I have a service that keeps dying on my terminal servers, and I wanted it to restart automatically.  It took a lot of searching to come up with the solution, but it works perfectly!  Now it restarts the service and sends me an email so I know it happened.  Here is the blog post on how to do it:

Filed under: Microsoft, SCOM | Posted on January 13th, 2014 by CharlieMaurice | No Comments »


Copyright © 2023 Charlie's Tech Ramblings. All rights reserved.
