[{"content":"I recently presented at PSDayUK 2023 in London. Below you will see the session recording.\nThe presentation covered:\nWhat is a pipeline? What is CI/CD? Why should I care? Adopting a pipeline as part of your PowerShell development process will liberate you. It will free you from the burdens of repetitively testing and manually deploying your code with each change. You will learn to embrace an automated mindset of automating your code delivery processes so you can focus on fixing bugs and adding new functionality / features. ","date":"2023-03-15T00:00:00Z","image":"https://adamcook.io/p/introduction-to-cicd-pipelines/images/cover_hu_a1a3d4284da65c17.jpg","permalink":"https://adamcook.io/p/introduction-to-cicd-pipelines/","title":"Introduction to CI/CD Pipelines"},{"content":" Do you have servers in your environment that require special or manual patching? Do you have non-redundant application servers that must have a co-ordinated patching routine? Wish you used ConfigMgr for these but haven\u0026rsquo;t figured out a way? Do you have spaghetti or unfinished code that attempts to orchestrate complicated patching cycles? By \u0026ldquo;special patching,\u0026rdquo; I mean the customer or application owners are terrified of the word \u0026ldquo;patching\u0026rdquo;, and they demand it be performed manually on a handful of finicky application servers out of fear of downtime. (yeah, because hitting \u0026lsquo;check for updates\u0026rsquo; manually reduces risk \u0026#x1f612; )\nThis leaves you feeling frustrated in having to perform tedious manual labour. Login, point, click, sit, wait etc.. You\u0026rsquo;re also annoyed the business has invested in a behemoth like ConfigMgr, and aren\u0026rsquo;t using it to its full potential just for these handful of special \u0026ldquo;snowflake\u0026rdquo; servers.\nThis is where I tell you I wrote a PowerShell module that automates this! Which is funny because you apparently shouldn\u0026rsquo;t automate it! 
You must do it manually! \u0026#x1f606; but hang on, let me explain..\nQuick shout out to Cody Mathis who helped me soundboard a fair bit and provided some support on this \u0026#x1f44d;\nPSCMSnowflakePatching PSCMSnowflakePatching is a PowerShell module which contains a few functions, but the primary interest is in Invoke-CMSnowflakePatching.\nIn this post I want to show you why I think this function will help you patch these snowflake systems without needing to sacrifice the use of ConfigMgr. It can streamline your manual labour or your existing automation process, or hopefully inspire you to automate a complex patching routine with it.\nUsing Invoke-CMSnowflakePatching, you can either:\na) give it one or more hostnames via the -ComputerName parameter b) give it a ConfigMgr device collection ID via the -CollectionId parameter c) use the -ChooseCollection parameter and select a device collection from your environment For each host, it will remotely start the software update installation for all updates deployed to it.\nBy default it doesn\u0026rsquo;t reboot or make any retry attempts, but there are parameters for this if you need them:\n-AllowReboot switch will reboot the system(s) if any updates require a restart -Attempts parameter lets you indicate the maximum number of attempts the function will make at installing updates if a previous attempt failed All hosts are processed in parallel, and you will get a live progress update from each host as it finishes patching, with a breakdown of which updates were installed, success or failure.\nIf you use either the -ComputerName or -CollectionId parameters, an output object is returned at the end with the result of patching for each host. 
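A minimal sketch of the invocation styles described above (the hostnames and the collection ID are made up for illustration; the parameters are those documented for the module):

```powershell
# Patch specific hosts by name (hostnames are hypothetical)
$result = Invoke-CMSnowflakePatching -ComputerName 'APP01', 'APP02'

# Patch every member of a ConfigMgr device collection, allowing reboots
# and up to 3 install attempts per host (collection ID is hypothetical)
$result = Invoke-CMSnowflakePatching -CollectionId 'P0100012' -AllowReboot -Attempts 3

# Or pick a device collection interactively
Invoke-CMSnowflakePatching -ChooseCollection
```

These commands need a ConfigMgr environment and remoting access to the target hosts, so treat them as a sketch rather than something to paste verbatim.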
This is great because if you want to orchestrate a complex patching routine with tools such as System Center Orchestrator or Azure Automation, you can absolutely do this with the feedback from the function.\nIn the above screenshot, you can see we\u0026rsquo;re calling Invoke-CMSnowflakePatching and giving it a ConfigMgr device collection ID. We\u0026rsquo;re capturing the output of the command by assigning it to the $result variable.\nIt\u0026rsquo;s worth calling out the two time columns seen on all lines: the first column is the time when that line was written, and the second column is the elapsed time since the start of execution.\nBeyond the first few lines revolving around startup, you can see essential information as the jobs finish patching each host. We can see the list of updates that successfully installed, and a yellow text warning indicating one of the updates put the system in a pending reboot state.\nIf you were running this ad-hoc and interactively, I suspect you might find this realtime information helpful. This output likely can\u0026rsquo;t easily be captured by most automation tools as it\u0026rsquo;s just Write-Host. However, this is why an output object is returned instead (see below) - this output can be captured and used however you please.\nLooking at the above screenshot, this is where more automation possibilities become available for you.\nThe function returned a summary of the patch jobs for each host as output objects, and we captured this in the $result variable from the last screenshot. 
Here we can see valuable information that might feed as input to other automation.\nHere is another example, if I ran the following command:\nPS C:\\\u0026gt; $result = Invoke-CMSnowflakePatching -ComputerName \u0026#39;VEEAM\u0026#39; -AllowReboot -Attempts 3 This time we\u0026rsquo;re just targeting the one server, permitting the server to reboot if an update returned a hard/soft pending reboot, and allowing a maximum of 3 retries if there were any installation failures.\nAt the end of the process, you will receive an output object similar to the below in the $result variable:\nPS C:\\\u0026gt; $result ComputerName : VEEAM Result : Failure Updates : {@{Name=7-Zip 22.01 (MSI-x64); ArticleID=PMPC-2022-07-18; EvaluationState=Error; ErrorCode=2147944003}, @{Name=2022-08 Security Update for Microsoft server operating system version 21H2 for x64-based Systems (KB5012170); ArticleID=5012170; EvaluationState=InstallComplete; ErrorCode=0}, @{Name=2022-08 Cumulative Update for .NET Framework 3.5 and 4.8 for Microsoft server operating system version 21H2 for x64 (KB5015733); ArticleID=5015733; EvaluationState=InstallComplete; ErrorCode=0}, @{Name=Microsoft Edge 105.0.1343.33 (x64); ArticleID=PMPC-2022-09-09; EvaluationState=InstallComplete; ErrorCode=0}} IsPendingReboot : False NumberOfReboots : 1 NumberOfAttempts : 3 RunspaceId : af37488e-dad9-4d56-b72a-5aa642e589e4 From the output above you can see the overall result from patching my Veeam server: the list of updates that were installed, whether there is a pending reboot, how many times the server was rebooted during patching, and how many times it retried.\nWe can see NumberOfAttempts is 3, and that 7-Zip 22.01 (MSI-x64) finished in a state of Error - it failed to install despite 3 attempts.\nIt looks like all of the Windows updates installed, and likely one of them returned a pending reboot, so the server rebooted. As the 7-Zip update failed in the previous attempt, it was retried. 
It retried for a maximum of 3 attempts before returning.\nHere is the full content of the Updates property from the output object in list view:\nPS C:\\\u0026gt; $result.Updates | fl Name : 7-Zip 22.01 (MSI-x64) - (Republished on 2022-09-15 at 17:52) ArticleID : PMPC-2022-07-18 EvaluationState : Error ErrorCode : 2147944003 Name : 2022-08 Security Update for Microsoft server operating system version 21H2 for x64-based Systems (KB5012170) ArticleID : KB5012170 EvaluationState : InstallComplete ErrorCode : 0 Name : 2022-08 Cumulative Update for .NET Framework 3.5 and 4.8 for Microsoft server operating system version 21H2 for x64 (KB5015733) ArticleID : KB5015733 EvaluationState : InstallComplete ErrorCode : 0 Name : Microsoft Edge 105.0.1343.33 (x64) ArticleID : PMPC-2022-09-09 EvaluationState : InstallComplete ErrorCode : 0 With this output, you have a lot of opportunities. For example, you could feed this output to other scripts to:\nSend an email as a report with the result (see this example here) Invoke some other custom remedial actions on the server or application it hosts Send the data to a ticketing system using some web API, or raise an alert somewhere Another benefit here is that you are using what you\u0026rsquo;ve paid for - ConfigMgr! With this approach, there will be no need to abandon usual deployment, content delivery, and reporting capabilities. The function is installing the software updates deployed to the client by ConfigMgr.\nHere are a couple more example screenshots:\nInteractively use PSCMSnowflakePatching with the -ChooseCollection Parameter The -ChooseCollection is the only parameter that you can\u0026rsquo;t use in an automated fashion. 
This is because it produces a pop-up Out-GridView window prompting you to choose a ConfigMgr device collection.\nThis is just demonstrating that you can still use Invoke-CMSnowflakePatching to \u0026lsquo;manually\u0026rsquo; patch systems if you wish to, just without the hassle of login, point, click, etc.\nWhy a module? Why not just a script? This could just be my subconscious self playing games in my mind, but I feel the need to justify why this ended up being a module and not a single script file. An irrational pressure builds in my mind when I publish code online - \u0026ldquo;it must be perfect and justified!\u0026rdquo;\nJobs are used to process multiple hosts at once for patching, and I found I was re-using code to account for things like loop waiting, timeouts, and retries. You can\u0026rsquo;t easily use or pass functions to jobs, so instead, for better management and readability, I opted for a module, and each job leverages the module to reuse code.\nWhy not use a task sequence? I guess there could be an argument made to use a task sequence if you needed to orchestrate complex patching routines. I wouldn\u0026rsquo;t disagree with the idea; whatever floats your boat. For me personally, code offers more flexibility.\nFor instance, in a task sequence I don\u0026rsquo;t think it\u0026rsquo;s trivial to perform actions in a loop on a timer until it eventually succeeds. This sort of logic is littered within Invoke-CMSnowflakePatching, with thanks to Cody Mathis and New-LoopAction \u0026#x2764;\u0026#xfe0f;\nWhy? Andrew Porter pinged me asking if I would help make one. 
He found a few scripts online here and there which collectively sort of did what he wanted, but not quite.\nThese are the scripts which he found:\nOnly installs updates, doesn’t reboot or cycle through https://timmyit.com/2016/08/01/sccm-and-powershell-force-install-of-software-updates-thats-available-on-client-through-wmi/\nTriggers updates remotely and reboots but doesn’t cycle through again or notify results https://eskonr.com/2021/05/using-scripts-to-trigger-software-updates-remotely-from-the-sccm-console/\nInstall updates, reboot and loops to install further updates – not completely working https://docs.microsoft.com/en-us/answers/questions/642405/powershell-loop-install-of-available-software-upda.html\nInstalls updates on all servers in a collection, reboots and loops to install further updates https://www.itreliable.com/wp/powershell-script-to-install-software-updates-deployed-by-sccm-and-reboot-the-computer-remotely/\nMethod 4 seemed ideal as its code was more or less complete. I personally wasn\u0026rsquo;t a fan of the style of the code, so I decided to rewrite it.\n","date":"2022-09-18T10:22:56+01:00","image":"https://adamcook.io/p/patching-snowflakes-with-configmgr-and-powershell/images/cover_hu_33de911aa242954e.jpg","permalink":"https://adamcook.io/p/patching-snowflakes-with-configmgr-and-powershell/","title":"Patching Snowflakes with ConfigMgr and PowerShell"},{"content":"I recently set up a new lab at home and was installing Remote Desktop Gateway on Windows Server 2022.\nWhile setting it up, and also configuring RAS as a virtual router, I was very confused as to why I kept getting moaned at while attempting to RDP to a system using the gateway:\nI was absolutely confident everything was configured correctly:\nDNS, routing, and firewall were fine My RAP and CAP policies in RD Gateway Manager also had the correct things set: the user account I was connected with was in the correct groups, and so were the systems I was trying to connect to. 
I had password authentication enabled, and not smartcard I spent hours scouring \u0026ldquo;the Google\u0026rdquo; for ideas and discussions etc. All \u0026ldquo;answers\u0026rdquo; revolved around the simple misconfig of missing user/computer objects in groups of the RAP/CAP stuff.\nThe event viewer log for TerminalServices-Gateway was leading me up the garden path:\nThe user \u0026ldquo;CODAAMOK\\acc\u0026rdquo;, on client computer \u0026ldquo;192.168.0.50\u0026rdquo;, did not meet connection authorization policy requirements and was therefore not authorized to access the RD Gateway server. The authentication method used was: \u0026ldquo;NTLM\u0026rdquo; and connection protocol used: \u0026ldquo;HTTP\u0026rdquo;. The following error occurred: \u0026ldquo;23003\u0026rdquo;.\nLong story short, I noticed this snippet in the System event viewer log which definitely was not useless:\nNPS cannot log accounting information in the primary data store (C:\\Windows\\system32\\LogFiles\\IN2201.log). Due to this logging failure, NPS will discard all connection requests. Error information: 22.\nThis little nugget led me to the Network Policy Server snap-in (my RD Gateway is configured to use the local NPS service, which is the default). At this point I didn\u0026rsquo;t care why it couldn\u0026rsquo;t log, I just wanted to use the gateway. Under Accounting, select Change Log File Properties and you can untick the option to discard connection requests if logging fails:\nAfter making this change, I could use my new shiny RD Gateway!\nThis might not be the solution for you; perhaps your issue is simply DNS/routing/firewall, or maybe you haven\u0026rsquo;t correctly added your user account or the server/computer you\u0026rsquo;re trying to access to your RAP/CAP config. 
However, if you were like me, and had everything set up correctly except this oddity, then I hope this workaround is suitable for you.\n","date":"2022-01-13T12:57:21Z","image":"https://adamcook.io/p/remote-desktop-gateway-woes-and-nps-logging/images/cover_hu_9d197a3bc8ecf9b0.jpg","permalink":"https://adamcook.io/p/remote-desktop-gateway-woes-and-nps-logging/","title":"Remote Desktop Gateway Woes and NPS Logging"},{"content":"It has been a while since I\u0026rsquo;ve written a post. I\u0026rsquo;ve been pretty committed to a new job I started at the start of the year. I need to find more positive outlets and return to doing the things I enjoy in my down-time. I hope to do more tinkering on side projects and giving back to the community.\nThe cover image for this post is of our pup, Alfie! He\u0026rsquo;s a beagle and we will be bringing him home in a couple of weeks.\nIntroduction In this post I want to show you how PowerShell and C# stack up against each other with their cold start times in Azure Functions.\nA quick recap of Azure Functions if you\u0026rsquo;re unsure:\nServerless computing (\u0026ldquo;less servers\u0026rdquo; © Jeff Hollan) Scalable cloud resources where you can execute given code to perform a task on a schedule or trigger, e.g.: A website backend API that interacts with internal resources or third party APIs Trigger a script to run when a new VM is created in Azure Send an email or Teams notification with a given message fed via HTTP POST request If you want to learn with more examples, check out these resources: Build serverless APIs with Azure Functions | Azure Friday Introduction to Azure Functions Intro to Azure Functions - What they are and how to create and deploy them PowerShell \u0026amp; Azure Functions - Part 1 What is a cold start? A cold start is the extra startup time a function incurs before it can run your code. The painful fact here is that if your function is left idle for some period of time, e.g. 
15 or 20 minutes, the process of the function is stopped until it is called again. The latency the user experiences calling your function again after it has stopped is high. If the user calls your function again within the idle period, before it is stopped, the latency is low because the function is still running.\nYou can mitigate cold starts in Azure with Premium Plan features or an App Service plan.\nThe test For me to show you how PowerShell and C# perform against each other in Azure Function Apps, I have two HTTP-trigger functions: GraphAPI-Mail-PS and GraphAPI-Mail-CSharp.\nWhen I send a simple HTTP request to either one, they send an email to me, from myself, using Microsoft\u0026rsquo;s Graph API, e.g.:\n# Calls the C# Function App Invoke-RestMethod -Uri \u0026#34;https://graphapi-mail-csharp.azurewebsites.net/api/GraphAPI_Mail\u0026#34; # Calls the PowerShell Function App Invoke-RestMethod -Uri \u0026#34;https://graphapi-mail-ps.azurewebsites.net/api/GraphAPI-Mail\u0026#34; I intend to run the above a hundred times for each, recycling the Function App in between each run to force a cold start, and see what the numbers are. This is what I intend to use to carry out the test: Compare-PSCSharpGraphAPIMail.ps1.\nThe results Function App Average (s) Minimum (s) Maximum (s)\nGraphAPI-Mail-CSharp 1.564564157 0.418345 3.5649865\nGraphAPI-Mail-PS 18.533579388 13.5226983 47.0977745\nDiscussion If you\u0026rsquo;ve been on the fence about trying C# and you consider yourself handy with PowerShell, hopefully this post has given you some motivation to at least give C# some serious thought.\nI have to be honest though: writing code with C# is not as easy as PowerShell. I guess this applies to all new things, but I felt a fair bit of frustration while learning C# and trying to apply it. 
Thankfully I work with some great people that helped me, mainly Cody and a bit of Ben too!\nI immediately sensed that there is a lot of \u0026lsquo;scaffolding\u0026rsquo; involved in writing C# compared to PowerShell; however, doing this test has given me good insight and understanding. For something like an Azure Function App, C# fits in well because it can run what you want significantly quicker, so if you have anything that involves some user experience, it is for sure superior.\n","date":"2021-08-07T18:29:18+01:00","image":"https://adamcook.io/p/powershell-vs.-csharp-azure-functions-cold-starts/images/cover_hu_8115f2c6a943dd1e.jpg","permalink":"https://adamcook.io/p/powershell-vs.-csharp-azure-functions-cold-starts/","title":"PowerShell vs. CSharp Azure Functions Cold Starts"},{"content":"Shlink is an open-source, self-hosted, PHP-based URL shortener application. I wrote PSShlink, which will help you manage all of your short codes using PowerShell!\nShlink helped me tremendously when I moved my domain and blog CMS from cookadam.co.uk to adamcook.io. Some of my posts rank modestly in search results for some keywords and I felt unhappy about letting that go. I also did not want people\u0026rsquo;s bookmarks for posts using my old domain to result in an incomplete redirect to its new URL.\nGoogle also helps you secure your position in search results if you tell them your new domain. There were a couple of prerequisites to make this happen and one of them was to ensure all posts (discovered by my sitemap.xml) provided a 301 to a valid URL, using my new domain. This is where Shlink stepped in.\nAnyway, I wholeheartedly consumed Shlink, which enabled me to migrate my blog to a new domain, and I loved it. I moved from WordPress to using Hugo (see Deploying Hugo Websites in Azure for Pennies or Free on GitHub Pages). Thankfully I didn\u0026rsquo;t have too many posts to carry over. 
However there was enough to make me want to use a tool to convert WordPress post XML to Markdown: lonekorean/wordpress-export-to-markdown.\nThere was also enough to make my hands hurt with a lot of careful copying and pasting trying to create all the short links in Shlink for both domains cookadam.co.uk and www.cookadam.co.uk. I looked to see if there was a Shlink module in the PowerShell Gallery, and there wasn\u0026rsquo;t. Like many other PowerShell enthusiasts, I jumped on the opportunity and wrote PSShlink.\nInstalling PSShlink Install the module from the PowerShell Gallery:\nInstall-Module PSShlink -Scope CurrentUser Import-Module PSShlink Check out all of the available functions:\nGet-Command -Module PSShlink Authentication Each function has two parameters for authentication to your Shlink instance:\n-ShlinkServer: a string value of the Shlink server address e.g. https://example.com -ShlinkApiKey: a SecureString value for the Shlink server\u0026rsquo;s API key\nAfter using any function of PSShlink for the first time after importing the module - one which has both parameters -ShlinkServer and -ShlinkApiKey - it is not necessary to use the parameters again in subsequent calls to other functions of PSShlink. These values are held in memory for as long as the PowerShell session exists.\nSome functions do not require both -ShlinkServer and -ShlinkApiKey, e.g. Get-ShlinkServer. Therefore if the first function you use after importing PSShlink accepts only -ShlinkServer, you will not be asked again for this value by other functions of PSShlink. You will however be prompted for the API key. Again, subsequent uses of other functions will no longer require -ShlinkServer and -ShlinkApiKey. 
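To illustrate the session-caching behaviour described above, here is a brief sketch (the server address is made up, and the key is read interactively):

```powershell
# First call after Import-Module: supply both parameters once
$ApiKey = Read-Host -AsSecureString   # paste your Shlink API key
Get-ShlinkUrl -ShlinkServer 'https://example.com' -ShlinkApiKey $ApiKey

# Subsequent calls in the same session can omit both parameters,
# as the module holds the values in memory
Get-ShlinkUrl
```

This assumes the first call succeeded; a failed first call means you will be prompted again.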
If the first function you use after importing PSShlink requires -ShlinkServer and/or -ShlinkApiKey and you have not passed the parameter(s), you will be prompted for them.\nUsing PSShlink As previously mentioned, -ShlinkApiKey only accepts a SecureString value. For simplicity, we\u0026rsquo;ll create a SecureString value now for our API key.\n$ApiKey = \u0026#34;ba6c52ed-flk5-4e84-9fc7-9c7e34825da0\u0026#34; | ConvertTo-SecureString -AsPlainText -Force Using the newly created SecureString API key for our Shlink instance, let\u0026rsquo;s query all of our short codes:\nGet-ShlinkUrl -ShlinkServer \u0026#34;https://myshlinkserver.com\u0026#34; -ShlinkApiKey $ApiKey The great thing about PSShlink is that the Shlink server name and API key are held in memory for as long as the PowerShell session lives. They\u0026rsquo;re not accessible as variables to use outside of the \u0026ldquo;module\u0026rsquo;s scope\u0026rdquo; though.\nThis means you do not have to repeatedly supply the -ShlinkServer or -ShlinkApiKey parameters for subsequent function calls of PSShlink, so long as the first one was successful.\nFor example, let\u0026rsquo;s save QR code images for all of our short codes:\nGet-ShlinkUrl | Save-ShlinkUrlQrCode All files will be saved as png and 300x300 in size. All files by default are saved in your Downloads directory using the naming convention: ShlinkQRCode_\u0026lt;shortCode\u0026gt;_\u0026lt;domain\u0026gt;_\u0026lt;size\u0026gt;.\u0026lt;format\u0026gt;. e.g. ShlinkQRCode_asFk2_example-com_300.png. As promised, you can clearly see we did not supply the server address or API key and it still worked!\nGetting help All functions come with complete comment-based help, so it is possible to find examples for each function using Get-Help. 
For example, use the following to see detailed help including examples for Save-ShlinkUrlQrCode:\nGet-Help Save-ShlinkUrlQrCode Failing that, feel free to raise an issue on the GitHub repo!\n","date":"2020-12-21T00:00:00Z","image":"https://adamcook.io/p/using-powershell-to-manage-shlink-with-psshlink/images/cover_hu_33f2e9de0fa310b1.jpg","permalink":"https://adamcook.io/p/using-powershell-to-manage-shlink-with-psshlink/","title":"Using PowerShell to Manage Shlink With PSShlink"},{"content":"In this post I will share with you how to install Inedo\u0026rsquo;s ProGet to host your own NuGet feed (effectively your own PowerShell Gallery). This will let you share PowerShell modules and scripts with other systems and colleagues from an internal resource, using cmdlets from the PowerShellGet module e.g. Install-Module, Install-Script, Find-Module, Find-Script etc.\nWhy might you want to do this? Remember when PowerShell Gallery went down for a while in October? That\u0026rsquo;s a pretty good reason. Another reason might be if you like or need the use of the PowerShellGet cmdlets when interacting with the PowerShell Gallery, but do not want to store your code publicly.\nBy hosting your own, you can be in full control of its availability (ProGet Enterprise offers high-availability and load balancing features). Perhaps instead you could treat it as a secondary source in the event of PowerShell Gallery going down.\nWhat I wanted to do differently in this post, compared to others I\u0026rsquo;ve seen cover installing ProGet, is add an authentication layer for pulling packages. In other words, I needed my NuGet feed to be widely open in terms of network access, and because of that I wanted to protect it by requiring the -Credential parameter with cmdlets like Install-Module, Find-Module etc.\nHere is what I will be covering in this post:\nWhat is NuGet? 
ProGet Installing ProGet Create API key for publishing Create user for downloading modules or scripts Create the PowerShell feed Configure ProGet to use HTTPS Registering the feed on a system and publishing a test module Installing modules or scripts from the feed Basic authentication Endpoint dependency requirements Conclusion Additional resources What is NuGet? Here I\u0026rsquo;ll quickly break down why \u0026ldquo;NuGet\u0026rdquo; is a thing. It should give you an insight when trying to understand how or why it is relevant to PowerShell, especially in this context of trying to host our own PowerShell Gallery.\nNuGet is a package management protocol developed by Microsoft. It was intended for .NET packages on NuGet.org. .NET developers use NuGet to pull their project\u0026rsquo;s package dependencies from NuGet.org. This is very much like how PowerShell users use Install-Module from the PowerShell Gallery for their scripts, or dependencies for other PowerShell workflows.\nNuGet is the binary behind the scenes of the PowerShellGet module which makes commands like Install-Module, Install-Script, Find-Module, Find-Script etc. pull content from the PowerShell Gallery, or other NuGet feeds, e.g. your self-hosted one with products like ProGet.\nIt makes sense that Microsoft leveraged the existing NuGet protocol for PowerShell\u0026rsquo;s package management (scripts and modules). Doing so means they did not have to reinvent the wheel by producing and maintaining another package management system.\nI found the below 5-video YouTube series very insightful as it explains how and why it is used by .NET developers. Don\u0026rsquo;t worry if you don\u0026rsquo;t know .NET, it\u0026rsquo;s still useful. All 5 videos will take about 30 mins of your time. Here is the playlist link.\nProGet ProGet is a proprietary product of Inedo. 
You can request to view their source code if you want to.\nThey offer three support tiers of ProGet: free, basic and enterprise.\nIt\u0026rsquo;s worth mentioning that it can host a lot more than just NuGet feeds.\nI chose ProGet after fumbling around with NuGet.Server and expressing a little frustration about it in the WinAdmins Discord. Brett Miller suggested to me he uses ProGet, and well, here we are. After much playing and reading, I found the free tier met my needs nicely. It\u0026rsquo;s absolutely worth pointing out there are many other alternatives available.\nInstalling ProGet Let\u0026rsquo;s jump straight in and start installing ProGet. If at any point you get stuck on installing ProGet, they do offer some docs of their own.\nHead over to the Downloads page and click Download Inedo Hub. This is its web installer and it also installs a separate Inedo Hub application where you can launch, reinstall, reconfigure or upgrade ProGet. It\u0026rsquo;s reasonably lightweight, so it\u0026rsquo;s not offensive by any means. It\u0026rsquo;s actually really helpful. They do offer an offline installer but its installation instructions are different, so bear that in mind.\nLaunch the installer. Once the downloading and prerequisites scanning is complete: Choosing Specify instance\u0026hellip; gave me the \u0026ldquo;localhost\u0026rdquo; option. On this VM I have SQL Server installed. If you don\u0026rsquo;t have SQL Server installed locally and do not want to use a remote database, choose \u0026ldquo;Install Inedo Instance\u0026rdquo; and it installs SQL Express for you instead.\nOptionally change the Database Name if you want to.\nI chose IIS as the web server, instead of the Integrated Web Server. You can later switch from the integrated web server to IIS if you follow this doc. Although it seems they recommend IIS for load-balancing / HA config and also for configuring HTTPS. 
Later on in this doc we will be configuring the feed to use HTTPS with a web certificate, so I recommend you do this too.\n\u0026#x26a0;\u0026#xfe0f; Some API endpoints for ProGet use HTTP methods PUT and DELETE. If WebDAV is installed on IIS, it is recommended to disable it. See Disabling WebDAV in IIS.\nClick Install to get rolling. You\u0026rsquo;ll be prompted to provide a name and email address to sign up, even for a free license. You will then be emailed a license key.\nOnce install completes, I recommend either rebooting or just bouncing the ProGet Service\nClose and re-open the Inedo Hub and click Launch. You will be taken to the Web UI:\nAfter you\u0026rsquo;ve successfully installed ProGet and can see the web UI successfully load (default credentials are Admin/Admin, by the way), we need to lock it down a little bit. We will change the default credentials and remove the built-in Anonymous identity\u0026rsquo;s access to all feeds.\nClick the settings cog at the top right to access the settings and choose Users \u0026amp; Tasks under Security \u0026amp; Authentication Click the Admin user and change the password to be something complex Go to the Tasks tab and remove Anonymous from the task Administer and View \u0026amp; Download Packages \u0026#x26a0;\u0026#xfe0f; After removing the Anonymous identity from the Administer task, you will likely be prompted to log in using your new Admin credential.\nCreate API key for publishing Let\u0026rsquo;s create an API key. This key will be needed when we publish modules or scripts using either the Publish-Module or Publish-Script cmdlets.\nGo back to the settings via the cog at the top right, under Integration \u0026amp; Extensibility select API Keys and click Create API Key Check the box which reads Grant access to Feed API and click Save API Key. The key will display in clear text in the UI. Make note of this as we will need it later on. 
Create user for downloading modules or scripts Now we will create a user which will be used for basic authentication in the -Credential parameter of cmdlets like Find-Module, Find-Script, Install-Module and Install-Script etc.\nAgain, work your way back to settings. This time click Users \u0026amp; Tasks and click Create User Set a username and password and click Save Change to the Tasks tab and we will give the newly created user access to only View \u0026amp; Download Packages by clicking on Add Restriction Enter the user name created in step 2 in the Principles field and enter View \u0026amp; Download Packages in the Tasks field - don\u0026rsquo;t forget to click Save Privilege Create the PowerShell feed Good stuff, next we will create a new PowerShell feed which will contain our published modules and scripts.\nNavigate to Feeds from the top navigation bar and click Create New Feed Scroll down and choose PowerShell Modules as the feed type Give your feed a name and select the radio button which reads Private/Internal packages, click Create New Feed Configure ProGet to use HTTPS Now we will configure ProGet to listen on an additional alternative port with a web certificate. This is optional for you; perhaps you\u0026rsquo;re running this internally and you are OK with non-encrypted traffic. However in any case I still think it\u0026rsquo;s a good idea to use TLS.\nYou will need a web certificate which we will configure in IIS to use. As this is in my lab, things are like the wild west around here so I\u0026rsquo;m just going to yolo it with a LetsEncrypt certificate using the Posh-ACME module.\nGenerate or obtain web certificate. Here I\u0026rsquo;ll quickly demo how I do this with the Posh-ACME module. 
Install-Module "Posh-ACME"
Import-Module "Posh-ACME"

$Params = @{
    Domain     = "nuget.adamcook.io"
    Contact    = "myemail@address.com"
    DnsPlugin  = "Cloudflare"
    PluginArgs = @{
        CFToken = ConvertTo-SecureString "MySuperSecretCloudflareAPIKey" -AsPlainText -Force
    }
}
New-PACertificate @Params

Create a DNS record to point to your ProGet server's IP.

Load the certificate into the Local System's Personal or Web Hosting certificate store. If you used Posh-ACME, you will find the .pfx in %LOCALAPPDATA%\Posh-ACME. Simply right click, choose Install PFX and complete the wizard. The default password protecting the certificate is poshacme.

Open IIS and navigate through: Sites > ProGet > Bindings > Add
Change Type to https
By default the port is 443. If you have another site or service bound to this port on the host you're currently configuring, choose a different port. For this demo I chose port 8625.
In the drop down for SSL certificate you should see our certificate
Click OK!
Go to the Settings of ProGet and under System Configuration choose Advanced Settings
Scroll down to Web.BaseUrl and populate the field with your full root URL. For example, I have set it to https://nuget.adamcook.io:8625. This part is important if you are serving ProGet under a different path in the URL or port, e.g. I could have used https://adamcook.io:8625/nuget
Click Save Changes

At this point you will need to make sure DNS is correctly configured for whatever domain you used for your certificate. All should be well once you can browse to the domain and not receive any certificate errors. For example:

Registering the feed on a system and publishing a test module

At this point, you are good to go to start publishing scripts or modules to this feed.
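Before publishing anything, it can be worth sanity-checking that the HTTPS endpoint and the download user's credential both work. A minimal sketch; the hostname, port and feed name below are assumptions based on my lab setup, so substitute your own:

```powershell
# Hypothetical feed URL - substitute your own ProGet hostname, port and feed name
$FeedUrl = 'https://nuget.adamcook.io:8625/nuget/MyPrivateRepo'

# Prompt for the username/password of the download user created earlier
$Cred = Get-Credential

try {
    # A 200 response suggests TLS and basic authentication are both working
    $Response = Invoke-WebRequest -Uri $FeedUrl -Credential $Cred -UseBasicParsing
    'Feed reachable, HTTP {0}' -f $Response.StatusCode
}
catch {
    'Feed request failed: {0}' -f $_.Exception.Message
}
```

If the user or the task restriction is misconfigured, you would typically see a 401 here rather than a 200; a certificate or DNS problem will surface as a TLS/name resolution error instead.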
As an example, let's save a module from the PowerShell Gallery and publish it to our newly created NuGet feed.

For a laugh we'll demo with the Az module, which has 62 other dependent modules. As a result we will be downloading 63 modules from the PowerShell Gallery and publishing them all to our new self-hosted NuGet feed.

Register the NuGet feed on your system using the below. This can be from your desktop or even the server running ProGet itself. I'll be honest... I have no idea why we need to configure a package source before we can register the repository, but I couldn't get it to work otherwise. More info here on that!

# Set the URL to match your config from the section 'Configure ProGet to use HTTPS'
# i.e. https/http, domain and port
$URL = "https://nuget.adamcook.io:8625/nuget/MyPrivateRepo"

# Enter the username and password created from the section
# 'Create user for downloading modules or scripts'
$Cred = Get-Credential

$Params = @{
    Name         = "MyPrivateRepo"
    Location     = $URL
    ProviderName = "NuGet"
    Credential   = $Cred
    Trusted      = $true
    SkipValidate = $true
}
Register-PackageSource @Params

$Params = @{
    Name                      = "MyPrivateRepo"
    SourceLocation            = $URL
    PublishLocation           = $URL
    ScriptSourceLocation      = $URL
    ScriptPublishLocation     = $URL
    PackageManagementProvider = "NuGet"
    Credential                = $Cred
    InstallationPolicy        = "Trusted"
}
Register-PSRepository @Params

You will notice that if you now run Get-PackageSource and Get-PSRepository, you will see our new NuGet repository. It is important to note that this is not a system-wide config; it is per-user. In other words, if you register the package source and repository and you want another security context on the same system to leverage this registration, you will need to register them in that context.
Here\u0026rsquo;s a GitHub issue requesting this to change.\nIssue the following command to grab all of the modules and save them to C:\\temp\\Az locally Save-Module -Name \u0026#34;Az\u0026#34; -Path \u0026#34;C:\\temp\\Az\u0026#34; -Repository PSGallery \u0026#x26a0;\u0026#xfe0f; If you receive an error along the lines of:\nFailed to generate the compressed file for module \u0026#39;C:\\Program Files\\dotnet\\dotnet.exe failed to pack Run the below. Close and re-open the console, then try again. Credit.\nInvoke-WebRequest -Uri \u0026#34;https://dist.nuget.org/win-x86-commandline/latest/nuget.exe\u0026#34; -OutFile \u0026#34;$env:LOCALAPPDATA\\Microsoft\\Windows\\PowerShell\\PowerShellGet\\NuGet.exe\u0026#34; \u0026#x26a0;\u0026#xfe0f; Also note that we are now explicitly specifying to use the PSGallery registered repository using the -Repository parameter, because we now have multiple registered repositories (or at least you should, the PowerShell Gallery is configured by default on most systems).\nUsing the API key we gathered from the section Create API key for publishing, we can now publish all the modules which are in the C:\\temp\\Az directory: Get-ChildItem -Path \u0026#34;C:\\temp\\Az\u0026#34; | ForEach-Object { Publish-Module -Path $_.FullName -NuGetApiKey \u0026#34;MySuperSecretAPIKey\u0026#34; -Repository \u0026#34;MyPrivateRepo\u0026#34; } After a couple of minutes, all of the modules will finish publishing. The web UI of ProGet will reflect this and we can also see this with Find-Module too. Don\u0026rsquo;t forget, you will also need to use the -Credential parameter with Find-Module. Forgot what credential to use? 
It's the one we set from the section Create user for downloading modules or scripts.

Don't forget, you can still publish and install scripts to the NuGet feed just like you can with modules, using cmdlets like Publish-Script, Find-Script and Install-Script.

Installing modules or scripts from the feed

Now let's demonstrate installing a PowerShell module from our new NuGet feed. Again, you can do this from your desktop or on the ProGet server itself.

If you're trying this from a different system compared to the last section, register the package source and repository again as shown in the section Registering the feed on a system and publishing a test module. It is possible to omit the -ScriptPublishLocation and -PublishLocation parameters if you do not intend to publish modules or scripts from the system(s).

Install all of the Az* modules. We need to make sure we explicitly specify our new NuGet feed using the -Repository parameter, and the credential too using -Credential:

$Modules = Find-Module -Name "Az*" -Repository "MyPrivateRepo" -Credential $Cred
$Modules | Install-Module -Repository "MyPrivateRepo" -Credential $Cred

Forgot what credential to use? It's the one we set from the section Create user for downloading modules or scripts.

That's more or less it. You just pulled a bunch of PowerShell modules from your self-hosted NuGet feed, which is hosted using your own TLS certificate and protected with basic authentication via the -Credential parameter.

Basic authentication

I've mentioned this a few times throughout the post but not discussed it.

The authentication method used here with the -Credential parameter is HTTP basic authentication.
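If you ever want to see what a basic authentication header actually contains, you can build and decode one yourself in a couple of lines. The username and password below are made up for illustration:

```powershell
# What the client sends: "Authorization: Basic " + base64("username:password")
$Header = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes('psuser:Hunter2!'))
"Authorization: Basic $Header"

# Reversing it requires no key and no cracking - just one line
[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($Header))
```

This is exactly the decoding that packet capture tools do for you automatically.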
If you have configured your ProGet instance to use only HTTP and have protected the feeds using a built-in security identity as discussed in the section Create user for downloading modules or scripts, then your credentials will be sent across the wire merely encoded with base64... this is not encryption.

I've demonstrated this below; I configured the package source and repository locally on my machine (the repository name is insecure). I registered them using the HTTP endpoint of ProGet, not HTTPS. You can see the credentials in cleartext in packet tracing tools like Wireshark because they're generally helpful and do the decoding for you in the GUI.

Encrypting the traffic with TLS certificates is for sure the way to go, even if the traffic is only traversing internally. Your code, and therefore the packages on your NuGet feed, should never contain sensitive information, so in theory all you are protecting is the intellectual property of the code. However, let's be real, a lot of us at the very least sometimes store things like IP addresses in our code. With that in mind, if you had a script named "Get-SecretFromCyberArk" on your NuGet feed which contained even the most trivial information, like the IP addresses of your CyberArk infrastructure, that is still an easy win of information gathering for a malicious actor just mooching around your network. They would have picked up the username and password of the user account, logged in to ProGet and been able to view all your packages.

Using TLS does not provide total security, as you are still at the mercy of a simple username and password, where those credentials are stored in the ProGet database. However, there is no reason why you should not be using HTTPS.

Endpoint dependency requirements

In order for other users to start using your new NuGet feed, you need to satisfy some prerequisites.
I strongly recommend you:

1. For all systems you intend to use the repository from, have the latest available PackageManagement and PowerShellGet modules installed.
2. If not already, upgrade the systems using the NuGet feed to PowerShell 5.1 (Windows Management Framework 5.1), as it will make your life a lot easier.

After the last two items are resolved, you then need to register the package source and repository on all of the endpoints you want to leverage the repository from. We did this in the section Registering the feed on a system and publishing a test module. Don't forget that the security context in which you register the package source and repository is the only context in which the repositories are available. If you want to access the registered repository from another security context on the same system, you will have to re-register them in that context.

Conclusion

Some things which I didn't get to touch on are LDAP authentication integration and extensions within ProGet.

Authenticating using Active Directory would for sure be a better and more manageable solution in terms of handling the credential when using the -Credential parameter. ProGet even provides support for gMSA accounts. You can find out more here: Using LDAP/AD Integration. And even better, there is support for SAML authentication with AD and Azure AD.

I also wanted to highlight that it is possible to store your packages in cloud storage using ProGet's Cloud Package Stores and extension for Azure. This would certainly reduce your on-premises storage requirements and allow greater room for scalability and flexibility by using an Azure Storage Account.

I like to use a featured image in my blog which is relevant at the time in my life while I was writing the post. For this post I started around Halloween and got both lazy and busy. Here I am, 12 days before Christmas, using a picture of the pumpkins my fiancé and I carved on our front porch.
It\u0026rsquo;s purely because of COVID-19.. I do nothing with my spare time other than stay home. Nothing to take pictures of!\nI hope you found this information useful. If there\u0026rsquo;s any room for improvements, let me know in the comments below. If it helped you in any way, still let me know - that helps me keep going. Check out the additional resources below for further reading.\nAdditional resources What is NuGet and what does it do? Bootstrap the NuGet provider and NuGet.exe NuGet and IIS on Windows Server: The Ultimate Guide Hosting your own NuGet feeds ","date":"2020-12-13T00:00:00Z","image":"https://adamcook.io/p/hosting-and-protecting-your-own-nuget-feed-with-proget/images/cover_hu_74e821d841c1ce7b.jpg","permalink":"https://adamcook.io/p/hosting-and-protecting-your-own-nuget-feed-with-proget/","title":"Hosting and Protecting Your Own NuGet Feed with ProGet"},{"content":"Earlier in the week I presented at a WinAdmins virtual user group session. At the bottom of this post you can watch the recording.\nThe session covered the below topics:\nIntroduction to CI/CD pipeline, i.e. 
what it means, with practical explanations
A walkthrough of the YAML structure/syntax of GitHub Actions
A demonstration of how to use Actions by uploading content to Azure Blob storage
A demonstration of a self-hosted runner
A demonstration of how to deploy a PowerShell module to: the PowerShell Gallery, a new GitHub release, a shared folder within my home lab (which failed because I messed with the runner's service log-on credential) and a self-hosted NuGet feed (which also failed, thanks demo gods)
","date":"2020-10-17T00:00:00+01:00","image":"https://adamcook.io/p/getting-your-powershell-code-into-production-using-github-actions/images/Getting-Your-PowerShell-Code-Into-Production-Using-GitHub-Actions-00_hu_a87258adbd3294b3.jpg","permalink":"https://adamcook.io/p/getting-your-powershell-code-into-production-using-github-actions/","title":"Getting Your Powershell Code Into Production Using Github Actions"},{"content":"I recently migrated my WordPress blogging platform to generating static content with Hugo. I no longer pay for hosting. I exclusively use GitHub Pages. I am now blogging at no extra cost other than domain renewal!

In the process, not only did I learn about Hugo, but I also looked at three ways to deploy / host my Hugo-made website.

In this post I want to share with you what Hugo is, why I like it, and the three ways I learned to deploy a Hugo website - with Azure Static Web Apps (preview), Azure Blob storage and GitHub Pages.

For Azure Static Web Apps and Blob storage, I will be using Cloudflare. I am also assuming you will be using your own domain name. It is not a big deal if you do not want to; just ignore the details focused on defining custom domains and creating CNAME records.

There will also be some assumptions that you are somewhat familiar with using GitHub Actions. If you are looking for an introduction to using GitHub Actions, check out my WinAdmins virtual user group event session.

What is this Hugo thing?
Prerequisites
Azure Static Web Apps
Azure Blob storage
GitHub Pages
Closing comments

What is this Hugo thing?

Hugo is a static site generator.

continues to stare blankly at the screen

Yeah, I did too.

With static site generators (like Hugo), you forego caring about databases and any kind of code. Instead, your HTML pages are generated either each time a user visits your website or, in Hugo's case, each time you create or update content.

Instead of writing your pages and blog posts in HTML files or in a feature-rich WYSIWYG editor on some bloated content management system like WordPress, you write them in markdown (at least you do with Hugo). Then you invoke the process to generate the HTML files, using your new markdown content as the source. The generated HTML files land in a very particular directory (public). After that, all you need to do is get that directory on a web hosting platform as your web root.

When it comes to styling, themes are mostly plug and play, too. Fancy a new theme? No problem. Download one, drop it in the /themes directory and update config.toml a little.

With static websites, no runtime is needed to run your website. Not only does this open up your hosting opportunities, performance is another great benefit. You do not need a hosting package that sits on nginx or Apache, running PHP or whatever. For example, you can host on Azure Blob storage or GitHub Pages.

GitHub Pages is free. It also lets you use your own domain and offers free SSL certificates via LetsEncrypt. Azure Blob Storage isn't quite free, but it costs pennies. It is £0.0144 per GB (in UK South) of storage, and the first 5GB of bandwidth from this zone is free. You can do this on many more platforms too, such as Amazon S3, Netlify, Heroku, GitLab Pages, Google Cloud Storage and more.

I am all for ditching WordPress. Over the last few years I have grown more comfortable with Git and working on Azure and GitHub.
If you relate to that or are enticed by any of the benefits, I highly recommend you at least give it a go! I would be more than happy to help, too, just get hold of me.\nHere are some resources I used to learn Hugo:\nHugo - Static Site Generator | Tutorial YouTube series Hugo Quick Start official documentation Prerequisites Before we get started, I am going to assume some things:\nYou have an Azure subscription You have a GitHub account and repository The repository contains either only your static site content, or the whole Hugo directory with your latest generated content in the public directory You get by with Git enough to be able to do things like committing/pushing Azure Static Web Apps Let us start with deploying Hugo to an Azure Static Web App. Earlier this year Microsoft announced Azure Static Web Apps and it\u0026rsquo;s currently in preview. While it\u0026rsquo;s in preview, this resource is free!\nIt boasts \u0026ldquo;deep GitHub integration\u0026rdquo;, which is true. When you create the resource and associate a GitHub repository with it, it creates a GitHub Actions workflow YAML file in your repository. It also stores a secret in the repository. The secret is used by the workflow to authenticate to Azure. This workflow, when triggered by pushing to the master branch, ships everything in the public directory up to the Azure Static Web App for you.\nThe good thing about using Azure Static Web Apps is that you essentially get Azure Blob storage and Azure Functions bundled in to one resource. 
This enables you to leverage the speed and flexibility of static site generators, while still being able to implement some dynamic abilities into your website by rolling your own API via Azure Functions.

1. Log in to the Azure portal
2. Create a new Static Web App resource
3. Fill out the typical information, i.e. resource group, name and region
4. Sign in to GitHub
5. Choose your organisation / user, repository and branch
6. For the build details, ensure you choose Hugo and that you have the "App artifact location" set to public
7. Finalise the resource creation via Review + create and, if validation succeeds, click Create

Once the Static Web App resource is provisioned, you will notice it created the GitHub Actions workflow YAML file in your repository. We can see from looking in the workflow file that it is using the Azure/static-web-apps-deploy action. Here's the link to the docs for said action.

At this point, you will be able to see your static website live, available from an HTTPS endpoint.

To use your own domain, go to your DNS provider and set a CNAME record for www, or any subdomain you want, to your azurestaticapps.net URL as shown in your portal. After that, configure the static web app resource to point to your domain. Now your Hugo generated static website is deployed in Azure using Static Web Apps!

Azure Blob storage

Moving on to hosting your Hugo website on Azure Blob storage instead.

Now... Azure Blob storage does not do everything Azure Static Web Apps does. There is a reasonable amount of configuration to do yourself. For example, the Blob storage account is not configured to be a static website by default. There is also no bundled Azure Functions resource to integrate your own API. If you want the same CI/CD experience as with Azure Static Web Apps, you must also roll your own workflow with GitHub Actions (or similar). But that is OK! Because I will share with you my GitHub Actions workflow!

Why bother using it then?
I guess while Static Web Apps are in preview, even though Microsoft will give you 30 days notice, they could start charging you an unpredictable rate to use it. Whereas Blob storage is a mature, generally available service: its pricing model is known and it is very cost effective. Maybe you also have your own reasons to prefer Blob storage.

1. Log in to the Azure portal
2. Create a new Azure Storage account
3. Fill out the necessary information and click the usual Review + create and Create after validation
4. Within the "Static website" blade, enable the storage account to be a static website. Also define your index and error documents to be index.html and 404.html.
5. Within the "Containers" blade, select the $web container and from the top menu change its access level via the "Change access level" menu item. Set its access policy to Container.
6. Within the "Custom domain" blade, enter your custom domain and copy the domain which ends with z33.web.core.windows.net. This will be used to configure your CNAME record.
7. Before clicking Save for your new custom domain, you will need to create a CNAME record with your DNS provider. Head over to Cloudflare and configure a CNAME record pointing to the z33.web.core.windows.net domain gathered from the last step.
8. Wait a little bit (a couple of minutes?), then back in the "Custom domain" blade of the Azure portal, click Save and the domain should validate OK.

At this point, all there is left to do is upload some content to your new Blob storage $web container.

As a mess-around, create a file named index.html locally with some simple HTML in it, which we'll use for testing shortly.
Install Azure Storage Explorer and sign in with your Azure AD account with the subscription that holds your new storage account.

<html>
  <body>
    <h1>Hello world</h1>
  </body>
</html>

Upload index.html to the $web container and check it out! It's worth pointing out that the Azure Storage Visual Studio Code extension lets you modify files directly in your Blob container.

What we really want to do is get a similar CI/CD pipeline going like we had with Static Web Apps.

1. Within the "Shared access signature" blade of the storage account, generate a SAS connection string with the same permissions as the below screenshot. Set the start/expiration date to suit your needs. Make note of the Connection string at the bottom.
2. From the Settings of your GitHub repository, create a secret named AZURE_STORAGE_BLOB_SAS_CONNECTIONSTRING.
3. Create a file in your repository named azure-blob-storage.yml (or whatever you want, it really doesn't matter, though it must be .yml) in the .github\workflows directory (that does matter, it must live in there). Use the below to populate its contents.

name: Azure Storage Blob Container CI/CD
on:
  push:
    branches:
      - master
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: true
      - name: Get Hugo
        uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: '0.76.3'
          extended: true
      - name: Build
        run: hugo
      - name: Azure Blob Upload
        uses: bacongobbler/azure-blob-storage-upload@v1.1.1
        with:
          connection_string: ${{ secrets.AZURE_STORAGE_BLOB_SAS_CONNECTIONSTRING }}
          source_dir: public
          container_name: $web
          sync: true

4. Commit and push to your master branch. Watch the status of the GitHub action at https://github.com/YOUR_USERNAME/YOUR_REPO/actions.
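As an aside, if you ever need to push the contents of public up manually rather than via the workflow, the same upload can be sketched with the Az.Storage cmdlets. The resource group and storage account names below are placeholders:

```powershell
# Placeholder names - substitute your own resource group and storage account
$Account = Get-AzStorageAccount -ResourceGroupName 'rg-hugo' -Name 'hugostorage01'
$Context = $Account.Context

# Upload everything under .\public into the $web container,
# preserving relative paths as blob names
$Root = (Resolve-Path '.\public').Path
Get-ChildItem -Path $Root -File -Recurse | ForEach-Object {
    $BlobName = $_.FullName.Substring($Root.Length + 1) -replace '\\', '/'
    Set-AzStorageBlobContent -File $_.FullName -Container '$web' -Blob $BlobName -Context $Context -Force
}
```

Note the single quotes around '$web': the container really is named $web, so you don't want PowerShell expanding it as a variable.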
If it ran successfully without any errors, you will see your Hugo website live at your domain!

If you want to make any more changes to your website, that's OK! With the above workflow in your repo, every time you commit to master, the workflow will be triggered and it will upload everything that gets created in the public directory within the runner to your $web container.

GitHub Pages

Last but definitely not least is GitHub Pages. I think this is probably my favourite because it is the simplest to create and absolutely free.

It is worth pointing out some usage limits associated with GitHub Pages: a soft bandwidth limit of 100GB per month, and repositories should ideally not be larger than 1GB.

Hugo also has some excellent docs on deploying to GitHub Pages. They suggest a reasonable idea where we have two repositories for our static website, instead of one. One contains your Hugo sources, and another - which is added as a Git submodule - holds our Hugo generated HTML content. The former would be a repository named whatever you want, for example <username>.github.io-hugo, whereas the latter would be named <username>.github.io.

Instead of that, we are going to keep focused on using GitHub Actions. This will enable us to create a pipeline for Hugo to grab content from one repository and use it to publish to another. In other words, we will still have two repositories; however, we will not create a submodule for the public directory.

1. Create your repository named <username>.github.io
2. Go to the Settings of your repository
3. Enable GitHub Pages by choosing your branch (master) and folder (/)
4. Within the same section from the previous step, optionally enter your custom domain. If you do enter a custom domain, check out this documentation on Managing a custom domain for your GitHub Pages site.
5. Check the box to Enforce HTTPS, however note that this really does take ~24 hours to generate the certificate.
6. Go to your GitHub account's developer settings and create a Personal access token. Make note of the generated token for use in a couple of steps.
7. Create another repository that will contain our Hugo sources, named something like <username>.github.io-hugo
8. Go to the Settings of your newly created repository (with -hugo in the name) and create a new secret named something like <username>githubiohugo. Its value will be the personal access token we created moments ago.
9. Clone your <username>.github.io-hugo repository to your computer and copy all of your Hugo content into it, excluding the public directory

PS C:\git> git clone https://github.com/USERNAME/USERNAME.github.io-hugo.git

10. Create publish.yml in .github/workflows and put the below content inside it, adjusting the necessary values that read <CHANGEME>:

name: Publish site
on:
  push:
    branches:
      - master
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - name: Git checkout
        uses: actions/checkout@v2
      - name: Update theme
        run: git submodule update --init --recursive # Optional: if you have the theme added as a submodule, you can pull it and use the most updated version
      - name: Setup hugo
        uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: "0.75.1"
          extended: true
      - name: Build
        run: hugo
      - name: Deploy
        uses: peaceiris/actions-gh-pages@v3
        with:
          personal_token: ${{ secrets.<CHANGEME> }} # The name of the secret you created in step 8
          external_repository: <CHANGEME> # The name of the GitHub Pages repository, e.g. <username>/<username>.github.io
          publish_dir: ./public
          user_name: <CHANGEME> # The git username used for publishing commits to external_repository
          user_email: <CHANGEME> # The git email address used for publishing commits to external_repository
          publish_branch: master
          cname: <CHANGEME> # If you configured your own domain, you may want to populate this with it, e.g. mine is adamcook.io

Thank you to Ruddra for the inspiration for this YAML!

At this point, before we commit and push up to GitHub, issue hugo server within the C:\git\USERNAME.github.io-hugo directory and check out how things look locally by browsing to http://localhost:1313. If all looks good, commit and push. The GitHub Actions workflow will take care of the rest:

Invoking Hugo to generate content in public within the runner
Publishing said content to our GitHub Pages <username>.github.io repository

Closing comments

I hope you found this helpful!

If you have any questions or feedback, drop a comment below, ping me on Twitter or find me in the WinAdmins Discord (my handle is @acc)!
","date":"2020-10-05T21:04:10+01:00","image":"https://adamcook.io/p/deploying-hugo-websites-in-azure-for-pennies-or-free-on-github-pages/images/cover_hu_8d434815027076e5.jpg","permalink":"https://adamcook.io/p/deploying-hugo-websites-in-azure-for-pennies-or-free-on-github-pages/","title":"Deploying Hugo Websites in Azure for Pennies or Free on GitHub Pages"},{"content":"I recently took ownership of becoming the organiser for my local PowerShell user group - PowerShell Southampton.
With everything going on in the world re the pandemic, I realised I won't be arranging in-person meetups any time soon.

I got stuck in and learnt ways I could produce a virtual event: I looked at the tech available and weighed up the features, pros/cons etc. against how I wanted it to look and what I wanted the user experience to be for all involved (organiser, speaker and audience).

The research was short-lived. I quickly landed on using OBS Studio and pushing the stream to YouTube. OBS Studio seems like the de facto choice for screen recording and streaming. After some hours tinkering, reading and doing my own streams to Twitch (archived to my YouTube channel), I was more than happy with what it offered.

Here is some material I used to help me learn streaming: a bunch of posts by Chrissy LeMaire and a small list of resources stored on the PowerShellLive repository.

The first event is this Wednesday, and I got seriously carried away this weekend with styling some "starting soon" and "be right back" animations. Below is what I've ended up with, and I'll quickly walk through how to use it if you wanted to do something similar.

As you can see, in both scenes there is only a single browser source. This immediately makes my OBS setup super simple and easy to recreate if I ever need to.

There are just two resources needed to get rolling with this, and they're both in a Gist:

index.php
New-PSSouthamptonVirtualSession.ps1

The browser source in OBS points to http://127.0.0.1/pssouthampton/index.php?message=We'll be right back, sorry for the interruption!. Pass a different value to the message parameter in the query string of the URL to change the scrolling text.

Unfortunately, this does depend on XAMPP (or similar) for at least a local web service and PHP.
I was fixed on the idea of reading from a .json file within the HTML which contained properties such as the session title, speaker name, Twitter handle and Twitter image path. I tried many ways of doing this with JavaScript but no dice; in the end I gave up and accepted the dependency.

index.php hugely depends on the browser's size (or OBS's canvas size) being 1920x1080. You'll also notice that if you opened index.php in a browser, the positioning of some items might look different. I assume that's either because the display scaling of my 4K display is set to 175%, or the browser engine in OBS is different in some way.

Accompanying that is a PowerShell script, New-PSSouthamptonVirtualSession.ps1. This is the only thing I intend to use before each new event. I pass parameters for the session title, speaker's name and Twitter handle, and it does the rest:

Updates/creates properties.json with said properties, which index.php reads from
Downloads the speaker's Twitter profile image and specifies its filename
Starts XAMPP's Apache if not already running

This workflow makes life very easy moving forward: no need to fiddle with OBS sources and positioning them, or manipulating images in some editor.
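The properties.json piece of that script is straightforward to sketch. A minimal, hypothetical version - the parameter names, JSON property names and the XAMPP web root path are all assumptions for illustration, not the actual Gist contents:

```powershell
param (
    [string]$SessionTitle,
    [string]$SpeakerName,
    [string]$TwitterHandle
)

# Properties that index.php reads at render time
$Properties = [ordered]@{
    title   = $SessionTitle
    speaker = $SpeakerName
    twitter = $TwitterHandle
    image   = '{0}.jpg' -f $TwitterHandle.TrimStart('@')
}

# Hypothetical XAMPP web root path - adjust to your own install
$Properties | ConvertTo-Json |
    Set-Content -Path 'C:\xampp\htdocs\pssouthampton\properties.json' -Encoding UTF8
```

The appeal of this design is that the OBS scene never changes; only the data file behind it does.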
At most I'll likely only need to fiddle with the session title's text size and positioning in the CSS of index.php if it gets particularly lengthy.

Here's a quick demo of using New-PSSouthamptonVirtualSession.ps1 and how easily it changes index.php:
","date":"2020-07-12T00:00:00+01:00","image":"https://adamcook.io/p/starting-to-automate-animated-content-for-virtual-events/images/cover_hu_d88e6e1d1e983429.jpg","permalink":"https://adamcook.io/p/starting-to-automate-animated-content-for-virtual-events/","title":"Starting to Automate Animated Content for Virtual Events"},{"content":"In this post I'll demonstrate how you can dynamically create resources or set properties for resources in your Azure ARM templates.

For example, you might have a template which accepts an array. For each element in that array, you want to create resources or set properties for a resource.

The first objective will demonstrate how to create a dynamic number of properties associated with a resource. The second objective will show you how to dynamically create a number of resources. The examples will revolve around creating virtual networks and subnets.

ARM template functions

There's a bunch of ARM template functions available that let you do all kinds of things within your templates: logical operators and conditional actions, string manipulation, array conversions or iteration, arithmetic. All kinds of things.

Using functions within ARM templates makes the JSON files more than just declarative build documents. They enable you to get creative and implement some programmable logic into your templates.
This can help make your templates more versatile.\nFor this demo I\u0026rsquo;ll focus on the functions [length()], [concat()] and [copyIndex()].\nObjective 1 Take the scenario where your current template contains a single virtual network resource and you hardcode the virtual network\u0026rsquo;s address space properties and all the subnets too.\nMaybe you decide to create some logic in your scripts where you dynamically create x many subnets. To do this, we are going to dynamically add x many properties to the Microsoft.Network/VirtualNetworks resource.\nLet\u0026rsquo;s start with the below example using those hardcoded properties and subnets.\n... { \u0026#34;name\u0026#34;: \u0026#34;vnet01\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Network/VirtualNetworks\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2019-09-01\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;uksouth\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;addressSpace\u0026#34;: { \u0026#34;addressPrefixes\u0026#34;: [ \u0026#34;192.168.0.0/16\u0026#34; ] }, \u0026#34;subnets\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;subnet-1\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;addressPrefix\u0026#34;: \u0026#34;192.168.1.0/24\u0026#34; } }, { \u0026#34;name\u0026#34;: \u0026#34;subnet-2\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;addressPrefix\u0026#34;: \u0026#34;192.168.2.0/24\u0026#34; } }, { \u0026#34;name\u0026#34;: \u0026#34;subnet-3\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;addressPrefix\u0026#34;: \u0026#34;192.168.3.0/24\u0026#34; } } ] } } ... Our objective is to dynamically create as many subnets as there are elements in the parameter value named subnets, which is an array of just address prefixes, e.g. 192.168.1.0/24, 192.168.2.0/24 and 192.168.3.0/24.\nFor the properties object we\u0026rsquo;re going to heavily modify the subnets array and replace it with the copy element.
This element enables us to create a dynamic number of resources or properties. The copy element is effectively what allows us to create a loop within our template. It accepts an array of objects, each with three properties: name, count and input.\nname: the name of our loop. We use this name to reference an iterable. count: the number of times we want to iterate over our loop. input: this is where we specify values for our resource\u0026rsquo;s properties. An example of the copy element looks like the below.\n... { \u0026#34;name\u0026#34;: \u0026#34;vnet01\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Network/VirtualNetworks\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2019-09-01\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;uksouth\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;addressSpace\u0026#34;: { \u0026#34;addressPrefixes\u0026#34;: [ \u0026#34;192.168.0.0/16\u0026#34; ] }, \u0026#34;copy\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;subnets\u0026#34;, \u0026#34;count\u0026#34;: 3, \u0026#34;input\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;[concat(\u0026#39;subnet-\u0026#39;, copyIndex(\u0026#39;subnets\u0026#39;, 1))]\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;addressPrefix\u0026#34;: \u0026#34;[concat(\u0026#39;192.168.\u0026#39;, copyIndex(\u0026#39;subnets\u0026#39;, 1), \u0026#39;.0/24\u0026#39;)]\u0026#34; } } } ] } } ... The above creates 3 subnets with far fewer lines. You\u0026rsquo;ll notice two functions I haven\u0026rsquo;t explained yet: [concat()] and [copyIndex()]. The Microsoft docs do a good job on explaining these, but\u0026hellip;\n[concat()] allows you to concatenate arrays or strings. It accepts a minimum of 1 argument and any number of additional arguments. You can see this is used to define a unique name for each subnet. [copyIndex()] allows you to access the position/index of the current iterable in the loop.
We pass two parameters: the name of the loop \u0026ldquo;subnets\u0026rdquo; and an offset. Using the offset is what enables us to start creating subnet names from \u0026ldquo;subnet-1\u0026rdquo; rather than \u0026ldquo;subnet-0\u0026rdquo;. You\u0026rsquo;ll notice I use both [concat()] and [copyIndex()] for the addressPrefix property. This enables me to bump the index by 1 and correlate the subnet name with the third octet of its address prefix.\nThere\u0026rsquo;s definitely room for improvement here. The count object of the copy element is hardcoded at the value of 3. What we could do is leverage a template parameter containing an array of strings for all the subnets we want in our virtual network.\nThe below is what an ideal template could look like, creating subnets based on all the items in a given array:\n{ \u0026#34;$schema\u0026#34;: \u0026#34;https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#\u0026#34;, \u0026#34;contentVersion\u0026#34;: \u0026#34;1.0.0.0\u0026#34;, \u0026#34;parameters\u0026#34;: { \u0026#34;subnetAddressSpaces\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;array\u0026#34;, \u0026#34;defaultValue\u0026#34;: [ \u0026#34;192.168.1.0/24\u0026#34;, \u0026#34;192.168.2.0/24\u0026#34;, \u0026#34;192.168.3.0/24\u0026#34; ] } }, \u0026#34;variables\u0026#34;: {}, \u0026#34;resources\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;vnet01\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Network/VirtualNetworks\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2019-09-01\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;uksouth\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;addressSpace\u0026#34;: { \u0026#34;addressPrefixes\u0026#34;: [ \u0026#34;192.168.0.0/16\u0026#34; ] }, \u0026#34;copy\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;subnets\u0026#34;, \u0026#34;count\u0026#34;: \u0026#34;[length(parameters(\u0026#39;subnetAddressSpaces\u0026#39;))]\u0026#34;, \u0026#34;input\u0026#34;: {
\u0026#34;name\u0026#34;: \u0026#34;[concat(\u0026#39;subnet-\u0026#39;, copyIndex(\u0026#39;subnets\u0026#39;, 1))]\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;addressPrefix\u0026#34;: \u0026#34;[parameters(\u0026#39;subnetAddressSpaces\u0026#39;)[copyIndex(\u0026#39;subnets\u0026#39;)]]\u0026#34; } } } ] } } ] } In the above you can see I\u0026rsquo;ve included the parameter definition this time, defining the array of subnet address spaces by specifying defaultValue.\nWhat I\u0026rsquo;m doing differently here is using two different functions, [length()] and [parameters()].\n[length()] is what enables us to gather the number of elements within an array. In this case the value will be 3. This function can also be used to get the number of characters in a string or number of root-level properties in a given object. [parameters()] is hopefully obvious, but if not, it\u0026rsquo;s how we can access the value of a parameter by passing one of the template\u0026rsquo;s parameter names. With the above in mind, you can see for the count object we now have a means to loop once for each element in the subnetAddressSpaces array. You\u0026rsquo;ll also notice we\u0026rsquo;re no longer concatenating a string to create our subnets\u0026rsquo; address space. Instead we\u0026rsquo;re directly accessing the value in the array by referencing the index position returned by [copyIndex()].
In other words, we\u0026rsquo;re grabbing the current value of the iterable in the loop.\nHopefully now you get the sense that this whole combination of the copy element, with functions [length()] and [copyIndex()] is pretty much a for in-range loop interpreted by Azure within our json.\nObjective 2 As for the second objective (dynamically creating resources, instead of dynamically setting properties of a resource\u0026rsquo;s properties\u0026hellip; wait, what?!), there\u0026rsquo;s hardly any difference, except there\u0026rsquo;s no need to use the input object within the copy element.\nBelow I\u0026rsquo;ll share a complete working example of the template and a short PowerShell snippet to create the deployment.\n{ \u0026#34;$schema\u0026#34;: \u0026#34;https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#\u0026#34;, \u0026#34;contentVersion\u0026#34;: \u0026#34;1.0.0.0\u0026#34;, \u0026#34;parameters\u0026#34;: { \u0026#34;virtualNetworkAddressSpaces\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;array\u0026#34; } }, \u0026#34;variables\u0026#34;: {}, \u0026#34;resources\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;[concat(\u0026#39;vnet-\u0026#39;, copyIndex(\u0026#39;vnetloop\u0026#39;, 1))]\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Network/VirtualNetworks\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2019-09-01\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;uksouth\u0026#34;, \u0026#34;dependsOn\u0026#34;: [], \u0026#34;tags\u0026#34;: {}, \u0026#34;copy\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;vnetloop\u0026#34;, \u0026#34;count\u0026#34;: \u0026#34;[length(parameters(\u0026#39;virtualNetworkAddressSpaces\u0026#39;))]\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;addressSpace\u0026#34;: { \u0026#34;addressPrefixes\u0026#34;: [ \u0026#34;[parameters(\u0026#39;virtualNetworkAddressSpaces\u0026#39;)[copyIndex(\u0026#39;vnetloop\u0026#39;)]]\u0026#34; ] }, \u0026#34;subnets\u0026#34;: 
[ { \u0026#34;name\u0026#34;: \u0026#34;subnet0\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;addressPrefix\u0026#34;: \u0026#34;[parameters(\u0026#39;virtualNetworkAddressSpaces\u0026#39;)[copyIndex(\u0026#39;vnetloop\u0026#39;)]]\u0026#34; } } ] } } ] } $TemplateParameters = @{ \u0026#34;virtualNetworkAddressSpaces\u0026#34; = @( \u0026#34;192.168.1.0/24\u0026#34;, \u0026#34;192.168.2.0/24\u0026#34;, \u0026#34;192.168.3.0/24\u0026#34; ) } New-AzResourceGroupDeployment -Name \u0026#34;mytestdeployment\u0026#34; -ResourceGroupName \u0026#34;rg-mytest\u0026#34; -TemplateFile \u0026#34;.\\vnet-template-2.json\u0026#34; -TemplateParameterObject $TemplateParameters With the above we\u0026rsquo;re still making use of the [length()] function for the count property of the copy element. This still enables us to loop over the number of elements within the given array. You\u0026rsquo;ll also notice we\u0026rsquo;re just creating 1 subnet for each virtual network. Each subnet\u0026rsquo;s width is the entire width of the virtual network\u0026rsquo;s address space.\nSummary I know these examples are really short. You\u0026rsquo;re really unlikely to want to just dynamically create subnets or virtual networks, but hopefully this offers you insightful examples and a working syntax for whatever it is you\u0026rsquo;re trying to do!\nThe below Microsoft doc links may be helpful for you; if not, ping me and I\u0026rsquo;ll be happy to help!\nARM Template Functions Property iteration in ARM templates ","date":"2020-07-05T00:00:00+01:00","image":"https://adamcook.io/p/creating-dynamic-azure-arm-templates/images/cover_hu_b03c7b1efa5b208a.jpg","permalink":"https://adamcook.io/p/creating-dynamic-azure-arm-templates/","title":"Creating Dynamic Azure ARM Templates"},{"content":"First post as MEMCM rather than SCCM. I think since my last post in July I\u0026rsquo;ve accumulated a dozen unfinished drafts but you know, I\u0026rsquo;ve been having too much fun..
working.\nI recently stood up a remote SUP with its own WSUS role and SQL database. What I didn\u0026rsquo;t realise was, even though it was still syncing updates from the upstream SUP/WSUS server just fine, the status messages for the SMS_WSUS_CONTROL_MANAGER component on the new SUP/WSUS box were coughing up a bunch of errors ever since I installed the roles and configured them:\nFailures were reported on WSUS Server \u0026#34;server.domain.com\u0026#34; while trying to make WSUS database connection with SQL Server Exception error code -2146232060. Possible cause: SQL Server service is not running or cannot be accessed. Solution: Verify that the SQL Server and SQL Server Agent services are running and can be contacted. WSUSCtrl.log on the new remote SUP/WSUS box was thankfully giving me more juice:\nSystem.Data.SqlClient.SqlException (0x80131904): Cannot open database \u0026#34;SUSDB\u0026#34; requested by the login. The login failed.~~Login failed for user \u0026#39;DOMAIN\\SERVER$\u0026#39;.~~ ... And again in Event Viewer under the Application log:\nLog Name: Application Source: MSSQLSERVER Date: 12/12/2019 11:59:34 Event ID: 18456 Task Category: Logon Level: Information Keywords: Classic,Audit Failure User: SYSTEM Computer: SERVER.domain.com Description: Login failed for user \u0026#39;DOMAIN\\SERVER$\u0026#39;. Reason: Failed to open the explicitly specified database \u0026#39;SUSDB\u0026#39;.
[CLIENT: fe80::442f:d8a1:6d2e:757%3] Event Xml: \u0026lt;Event xmlns=\u0026#34;http://schemas.microsoft.com/win/2004/08/events/event\u0026#34;\u0026gt; \u0026lt;System\u0026gt; \u0026lt;Provider Name=\u0026#34;MSSQLSERVER\u0026#34; /\u0026gt; \u0026lt;EventID Qualifiers=\u0026#34;49152\u0026#34;\u0026gt;18456\u0026lt;/EventID\u0026gt; \u0026lt;Level\u0026gt;0\u0026lt;/Level\u0026gt; \u0026lt;Task\u0026gt;4\u0026lt;/Task\u0026gt; \u0026lt;Keywords\u0026gt;0x90000000000000\u0026lt;/Keywords\u0026gt; \u0026lt;TimeCreated SystemTime=\u0026#34;2019-12-12T11:59:34.450122600Z\u0026#34; /\u0026gt; \u0026lt;EventRecordID\u0026gt;10164\u0026lt;/EventRecordID\u0026gt; \u0026lt;Channel\u0026gt;Application\u0026lt;/Channel\u0026gt; \u0026lt;Computer\u0026gt;SERVER.domain.com\u0026lt;/Computer\u0026gt; \u0026lt;Security UserID=\u0026#34;S-1-5-18\u0026#34; /\u0026gt; \u0026lt;/System\u0026gt; \u0026lt;EventData\u0026gt; \u0026lt;Data\u0026gt;DOMAIN\\SERVER$\u0026lt;/Data\u0026gt; \u0026lt;Data\u0026gt; Reason: Failed to open the explicitly specified database \u0026#39;SUSDB\u0026#39;.\u0026lt;/Data\u0026gt; \u0026lt;Data\u0026gt; [CLIENT: 0000::0000:0000:0000:000$0]\u0026lt;/Data\u0026gt; \u0026lt;Binary\u0026gt;184800000E000000090000004400430045002D0043004D00300032000000070000006D00610073007400650072000000\u0026lt;/Binary\u0026gt; \u0026lt;/EventData\u0026gt; \u0026lt;/Event\u0026gt; Log Name: Application Source: SMS Server Date: 12/12/2019 11:59:34 Event ID: 7002 Task Category: SMS_WSUS_CONTROL_MANAGER Level: Error Keywords: Classic User: N/A Computer: SERVER.DOMAIN.com Description: On 12/12/2019 11:59:34, component SMS_WSUS_CONTROL_MANAGER on computer SERVER.DOMAIN.com reported: Failures were reported on WSUS Server \u0026#34;SERVER.DOMAIN.com\u0026#34; while trying to make WSUS database connection with SQL Server Exception error code -2146232060. Possible cause: SQL Server service is not running or cannot be accessed. 
Solution: Verify that the SQL Server and SQL Server Agent services are running and can be contacted. Event Xml: \u0026lt;Event xmlns=\u0026#34;http://schemas.microsoft.com/win/2004/08/events/event\u0026#34;\u0026gt; \u0026lt;System\u0026gt; \u0026lt;Provider Name=\u0026#34;SMS Server\u0026#34; /\u0026gt; \u0026lt;EventID Qualifiers=\u0026#34;49152\u0026#34;\u0026gt;7002\u0026lt;/EventID\u0026gt; \u0026lt;Level\u0026gt;2\u0026lt;/Level\u0026gt; \u0026lt;Task\u0026gt;78\u0026lt;/Task\u0026gt; \u0026lt;Keywords\u0026gt;0x80000000000000\u0026lt;/Keywords\u0026gt; \u0026lt;TimeCreated SystemTime=\u0026#34;2019-12-12T11:59:34.450122600Z\u0026#34; /\u0026gt; \u0026lt;EventRecordID\u0026gt;10165\u0026lt;/EventRecordID\u0026gt; \u0026lt;Channel\u0026gt;Application\u0026lt;/Channel\u0026gt; \u0026lt;Computer\u0026gt;SERVER.DOMAIN.com\u0026lt;/Computer\u0026gt; \u0026lt;Security /\u0026gt; \u0026lt;/System\u0026gt; \u0026lt;EventData\u0026gt; \u0026lt;Data\u0026gt;SERVER.DOMAIN.com\u0026lt;/Data\u0026gt; \u0026lt;Data\u0026gt;-2146232060\u0026lt;/Data\u0026gt; \u0026lt;Data\u0026gt;On 12/12/2019 11:59:34, component SMS_WSUS_CONTROL_MANAGER on computer SERVER.DOMAIN.com reported: \u0026lt;/Data\u0026gt; \u0026lt;/EventData\u0026gt; \u0026lt;/Event\u0026gt; Solution I checked out SQL permissions on the instance and on the SUSDB database and it appeared the local group \u0026ldquo;SERVER\\WSUS Administrators\u0026rdquo; (which contains NT AUTHORITY\\SYSTEM) is present with necessary permissions (at least when comparing with another SUP/WSUS server).\nOn the instance\u0026rsquo;s security logins ( Databases \u0026gt; Security \u0026gt; Logins), I saw NT AUTHORITY\\SYSTEM was listed. 
Out of curiosity, I gave it a mapping to the SUSDB and restarting the SMS_WSUS_CONTROL_MANAGER proved to be successful.\nOpen SQL Server Management Studio Connect to the SQL instance Expand Databases, Security and then Logins Right click on NT AUTHORITY\\SYSTEM and choose Properties Under User Mappings, select the SUSDB database from the central pane and delegate the \u0026ldquo;public\u0026rdquo; and \u0026ldquo;webService\u0026rdquo; role memberships. From the ConfigMgr console, open the Configuration Manager Service Manager and cycle the SUP\u0026rsquo;s SMS_WSUS_CONTROL_MANAGER. ","date":"2019-12-12T00:00:00+01:00","image":"https://adamcook.io/p/configmgr-wsus-woes-cannot-open-database-susdb/images/cover_hu_f1c9ecb9c5c01d79.jpg","permalink":"https://adamcook.io/p/configmgr-wsus-woes-cannot-open-database-susdb/","title":"ConfigMgr WSUS woes, WSUSCtrl.log \"Cannot open database \"SUSDB\" requested by the login. The login failed. Login failed for user 'DOMAIN\\SERVER$'\""},{"content":"My lab recently started playing up when I noticed clients weren\u0026rsquo;t receiving any new policies.\nTL;DR (it\u0026rsquo;s not even that long!): a while ago I moved my SUP/WSUS off host from the site server that also hosted an MP. As a result it seemed to have triggered this known issue: Management points stop responding to HTTP requests with error 500.19.\nFirst port of call was CcmMessaging.log to see if it was at least talking to the MP OK, and it wasn\u0026rsquo;t:\nSuccessfully queued event on HTTP/HTTPS failure for server \u0026#39;SCCM.acc.local\u0026#39;. Post to http://SCCM.acc.local/ccm_system_windowsauth/request failed with 0x87d00231.
[CCMHTTP] ERROR: URL=http://SCCM.acc.local/ccm_system/request, Port=80, Options=224, Code=0, Text=CCM_E_BAD_HTTP_STATUS_CODE [CCMHTTP] ERROR INFO: StatusCode=500 StatusText=Internal Server Error Raising event: instance of CCM_CcmHttp_Status {ClientID = \u0026#34;GUID:A5FB49C2-955B-43EC-AE78-2BBB289FFD0F\u0026#34;;DateTime = \u0026#34;20190722130543.131000+000\u0026#34;;HostName = \u0026#34;SCCM.acc.local\u0026#34;;HRESULT = \u0026#34;0x87d0027e\u0026#34;;ProcessID = 4496;StatusCode = 500;ThreadID = 1340; }; Successfully queued event on HTTP/HTTPS failure for server \u0026#39;SCCM.acc.local\u0026#39;. Post to http://SCCM.acc.local/ccm_system/request failed with 0x87d00231. Poking around server-side in the MP logs gave me a bit of a red herring with this in MP_Status.log:\nCMPDBConnection::Init(): IDBInitialize::Initialize() failed with 0x80004005\t=======================================\tMPDB ERROR - CONNECTION PARAMETERS SQL Server Name : SCCM.acc.local\\SCCM_LOCALHOST SQL Database Name : CM_ACC Integrated Auth : True MPDB ERROR - EXTENDED INFORMATION MPDB Method : Init() MPDB Method HRESULT : 0x80004005 Error Description : Login timeout expired OLEDB IID : {0C733A8B-2A1C-11CE-ADE5-00AA0044773D} ProgID : Microsoft SQL Server Native Client 11.0 MPDB ERROR - INFORMATION FROM DRIVER null\t======================================= Certificate for client GUID:D125CE68-1CAD-4AED-9759-5ECF3842932C is revoked\tMp Status: Failed ProcessKnownProperties, error 80004005\tMP Status: processing failed for 1 event(s)\tMP StatusForwarder (the event processor) reported an error 80004005 Opening up in the browser on the server hosting the MP gave me a 500.19 status code when testing MP connectivity with http://\u0026lt;ServerName.FQDN\u0026gt;/sms_mp/.sms_aut?mplist.\nAnother place to find more information on HTTP status codes for web requests is the web server\u0026rsquo;s logs!
You can find where your MP\u0026rsquo;s logs are stored by:\nOpening Internet Information Services (IIS) Manager Expanding the Sites node Expanding the Default Web Site node Selecting the SMS_MP node Double clicking Logging and observing the value in the Directory field Log snippet:\n... 2019-07-22 13:11:58 192.168.175.11 CCM_POST /ccm_system_windowsauth/request - 80 - 192.168.175.16 ccmhttp - 401 2 5 1357 189 2019-07-22 13:12:32 192.168.175.11 PROPFIND /CCM_Client - 80 - 192.168.175.16 ccmsetup - 500 19 126 1357 21 2019-07-22 13:12:32 192.168.175.11 PROPFIND /CCM_Client - 80 - 192.168.175.16 ccmsetup - 500 19 126 1357 53 ... Some slightly reworded Googling thankfully landed me here: Management points stop responding to HTTP requests with error 500.19:\nLocate %windir%\\system32\\inetsrv\\config. Open the applicationHost.config file in Notepad. Look for an entry that resembles the following: \u0026lt;scheme name=\u0026#34;xpress\u0026#34; doStaticCompression=\u0026#34;false\u0026#34; doDynamicCompression=\u0026#34;true\u0026#34; dll=\u0026#34;C:\\Windows\\system32\\inetsrv\\suscomp.dll\u0026#34; staticCompressionLevel=\u0026#34;10\u0026#34; dynamicCompressionLevel=\u0026#34;0\u0026#34; /\u0026gt; Remove the XPress compression schema by running the following command from an elevated command prompt: %windir%\\system32\\inetsrv\\appcmd.exe set config -section:system.webServer/httpCompression /-[name=\u0026#39;xpress\u0026#39;] Verify that the compression schema is removed from the applicationHost.config file, and then save the file.
Run the following command from an elevated command prompt: iisreset ","date":"2019-07-22T00:00:00+01:00","image":"https://adamcook.io/p/configmgr-management-point-500.19-ccmmessaging.log-0x87d00231/images/cover_hu_cfef9635bb06fe8f.jpg","permalink":"https://adamcook.io/p/configmgr-management-point-500.19-ccmmessaging.log-0x87d00231/","title":"ConfigMgr Management Point 500.19 CcmMessaging.log 0x87d00231"},{"content":"I recently wrote a PowerShell script that reports on what folders are used or unused by System Center Configuration Manager.\nYou can find it here on GitHub.\nI demoed the script at the London Windows Manager User Group; you can watch the recording below on YouTube.\nAlongside a bunch of technical topics, I also learned a huge amount of valuable soft/personal lessons too.\nEnough is enough I started writing in February and since then I have refactored the script and the idea itself many times. It\u0026rsquo;s now the start of July and my girlfriend and I want to do fun summer things.\nI first \u0026ldquo;finished\u0026rdquo; it in April but it was only good if you stored your content sources on a file system local to a site server. I realised that a lot of admins store their sources on a remote file server. I tried to \u0026ldquo;drop in\u0026rdquo; accommodations for that but quickly accepted that I was creating fragile flows and hard-to-read solutions. This is one of the few times I did a \u0026ldquo;sod it, let\u0026rsquo;s start again\u0026rdquo;.\nAround the end of May I had a working script that handled local and remote storage, but it was slow. Because it iterates over all gathered folders, and for each folder, iterates over all ConfigMgr content objects, in an environment with 10k content objects and 70k folders it took over 10 hours.
I did my best to create efficiencies, such as skipping sub folders of folders already marked as \u0026ldquo;Not used\u0026rdquo;, but 10 hours really wasn\u0026rsquo;t good.\nI was almost at a crossroads because I thought performance shouldn\u0026rsquo;t matter too much, since this would be something you would run once in a blue moon. Then I had that fear of posting online and being bombarded with \u0026ldquo;you should have done x instead of y\u0026rdquo;. I\u0026rsquo;m still at risk of that now (which I welcome by the way, especially for something I\u0026rsquo;m not aware of). It was that thought which pushed me to strive for better because I knew I could improve, and the improvement would be significant enough.\nThen I discovered runspaces! Sure, I could have used jobs or set a dependency to use the PoshRSJobs module, but I just wanted to learn runspaces, how to use them, and not have a dependency on another module. So of course doing that added more time and testing. It was worth it though.\nThroughout the entire time I kept tricking myself into thinking \u0026ldquo;I\u0026rsquo;m almost done\u0026rdquo;, \u0026ldquo;it\u0026rsquo;ll be done next week\u0026rdquo;, \u0026ldquo;it\u0026rsquo;s pretty much there\u0026rdquo;. Constant ideas and learning new techniques for readability / performance were another time killer.\nI once read that managers hate dealing with sysadmins because when tasked to create something they don\u0026rsquo;t lay the foundations first and get too hung up on the features. I sometimes saw myself as that guy.\nWith that said, I also felt like I had a short fuse with it and a \u0026ldquo;just get it done\u0026rdquo; mindset. While working on this I started reading (and have still yet to finish) Zen and the Art of Motorcycle Maintenance. In the early chapters there was emphasis on people rushing their day-to-day and the work they do. I related to this and was guilty of that at times with this script.
So then I decided if I\u0026rsquo;m not taking my time to do it carefully then there\u0026rsquo;s no point doing it at all.\nAt this point summer had started, and for me that means cricket season was on; while I worked during the week, every Saturday was spent all day playing cricket. I just wanted to spend every free hour I had finishing this, but was conscious of not wanting to rush it.\nI had to draw a line in the sand and make my primary goal to finish it with what I had. I wanted to spend time with my girlfriend, finish that book, learn something else, fix that broken window motor on my car I\u0026rsquo;ve had for over a year, play a few nights of CS:GO.\nI really enjoyed the brain teasers though. I would be drawing what the flow of some loops looked like in my head while in the shower or driving. Realistically, I know I\u0026rsquo;m never finished. No doubt if someone found an issue with it I\u0026rsquo;ll be curious and want to fix it.\nI think next time I have an idea that will consume any amount of time, I should first spend time thinking about what the end product looks like. Then I should be able to know at what point I would want to bring the idea/project to a close.\nAsking for help A buddy Chris Kibble shared with me How to ask questions the smart way. Looks like this is a link from The XY Problem.\nThese two resources are huge and you may not yet appreciate why. I\u0026rsquo;ve been on the Internet for as long as I can remember and have been posting stupid questions for most of my life.\nWhen I became mindful of the XY problem and how to ask for help from people who are insanely good, I became much more effective. A part of asking for help is about first having a bloody good stab at it yourself. This often pushed me to not necessarily find the answers but to think more inquisitively.
I found that asking myself more questions enabled me to understand problems better and sometimes find the answer.\nI\u0026rsquo;m in the Windows Admins Discord and some people in there are not only clever but incredibly helpful. Just to say thanks to people who helped me:\nCody Mathis (@codymathis123) Chris Kibble (@ChrisKibble) Chris Dent (@idented-automation) Kevin Crouch (@PsychoData) Patrick (the guy who wrote MakeMeAdmin) PowerShell So I wrote a PowerShell script and I learnt some PowerShell. Of course. Well, here\u0026rsquo;s what I learnt.\nDebugger VSCode is my go-to editor. One of the many reasons I like it is the debugger with the integrated console. Once I read Life after Write-Debug by Stephen Owen I became much more efficient at troubleshooting my dodgy code.\nMy first manager as an IT professional told me what the rubber duck debugging technique was. So when you discover something like debugging for PowerShell in ISE or VSCode, it\u0026rsquo;s a huge sense of relief. The days of intensely reading line by line or including a bunch of print / Write-Host commands are long gone.\nYou can also debug other running PowerShell processes!!\nStandards and formatting For quick scripts with a limited audience, I make some effort to format and make it at least somewhat presentable for the next guy. However, taking on something fairly big that might be used and read by the Internet, I appreciated the need to, if not write pretty code (that\u0026rsquo;s always subjective), at least adopt a standard.\nAgain, kind of tapping into that fear of posting online and being scrutinized.\n.NET I had always known you could do .NET stuff within PowerShell and whenever I saw it in conversation it was always in the context of scale or performance. I figured with what I was trying to achieve, this would fall under the scale category.
I\u0026rsquo;ve seen environments with tens of thousands of ConfigMgr content objects and hundreds of thousands of folders.\nThere\u0026rsquo;s a need here as well to determine when to draw the line. You could spend a long time looking up .NET classes and methods when you could save yourself a lot of time just using PowerShell cmdlets. Sure, there\u0026rsquo;s a performance gain, but as a sysadmin I think readability and time are more valuable. So I won\u0026rsquo;t make a habit of using them at every opportunity, purely because I\u0026rsquo;m not fluent enough. Being mindful of them and how or when to use them is good enough.\nEnumerateDirectories() On the topic of .NET, I did really want to use the EnumerateDirectories method from the Directory class. But it falls short in that you can\u0026rsquo;t control its on-error preference. No matter what, on error, it\u0026rsquo;s a terminating error. Which sucked, because it sounded perfect for what I wanted and crazy fast too.\nIn situations where it encountered a folder with access denied, it wouldn\u0026rsquo;t continue.\nRunspaces As I mentioned already, I wanted concurrent processing so I had several options already available, like jobs or the PoSHRSJobs module that gives you runspaces but with the same familiar syntax as jobs. I opted to use runspaces and spend extra time learning more about how they work, for two main reasons:\nTo use something and make a good effort at actually trying to understand it. Not set too many dependencies on modules. I had a train of thought that not all ConfigMgr admins are full time admins; no doubt they juggle other stuff too. What if they really wanted to just clean up their storage but weren\u0026rsquo;t 100% sure how to install a third party module just to get the basics working?
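For anyone curious, the basic runspace pool pattern looks roughly like this. This is a generic sketch of the technique, not code lifted from the script itself:

```powershell
# Generic runspace pool sketch: run a script block against many inputs
# concurrently, throttled to a maximum of 4 runspaces at a time.
$pool = [runspacefactory]::CreateRunspacePool(1, 4)
$pool.Open()

$jobs = foreach ($item in 1..10) {
    $ps = [powershell]::Create()
    $ps.RunspacePool = $pool
    [void]$ps.AddScript({
        param($Item)
        # Simulate some per-item work
        Start-Sleep -Milliseconds 100
        "Processed $Item"
    }).AddArgument($item)

    # BeginInvoke starts the work without blocking the caller
    [PSCustomObject]@{ PowerShell = $ps; Handle = $ps.BeginInvoke() }
}

# Collect results as each runspace completes, then clean up
$results = foreach ($job in $jobs) {
    $job.PowerShell.EndInvoke($job.Handle)
    $job.PowerShell.Dispose()
}
$pool.Close()
```

The pool caps how many runspaces execute at once, which is what keeps tens of thousands of folders from spawning tens of thousands of threads.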
Collections and the += operator I saw this thread on Twitter and my mind was blown by how much quicker it was to not use the += operator when adding new elements to an array.\nJoin-Path I guess I saw some quirks of PowerShell along the way but the biggest for me that sticks out in memory was Join-Path. Check out this madness:\nPS C:\\\u0026gt; Test-Path \u0026#34;\\\\fakeserver\\fakeshare\\fakefolder\u0026#34; False PS C:\\\u0026gt; Join-Path -Path \u0026#34;\\\\fakeserver\u0026#34; -ChildPath \u0026#34;fakeshare\u0026#34; | Join-Path -ChildPath \u0026#34;fakefolder\u0026#34; \\\\fakeserver\\fakeshare\\fakefolder Join-Path copes well with paths that don\u0026rsquo;t exist. Awesome.\nPS C:\\\u0026gt; Test-Path \u0026#34;K:\\I\\Do\\Not\\Have\\K\\Drive\\Mapped\u0026#34; False PS C:\\\u0026gt; Join-Path -Path \u0026#34;K:\\\u0026#34; -ChildPath \u0026#34;I\\Do\\Not\\Have\\K\\Drive\\Mapped\u0026#34; Join-Path : Cannot find drive. A drive with the name \u0026#39;K\u0026#39; does not exist. At line:1 char:1 + Join-Path -Path \u0026#34;K:\\\u0026#34; -ChildPath \u0026#34;I\\Do\\Not\\Have\\K\\Drive\\Mapped\u0026#34; + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : ObjectNotFound: (K:String) [Join-Path], DriveNotFoundException + FullyQualifiedErrorId : DriveNotFound,Microsoft.PowerShell.Commands.JoinPathCommand \u0026hellip;unless it\u0026rsquo;s a local path on a drive letter that doesn\u0026rsquo;t exist. Worth remembering.
You can\u0026rsquo;t really say Join-Path validates paths as it joins them, because it doesn\u0026rsquo;t, nor can you say its sole purpose is to verify just the \u0026ldquo;syntax\u0026rdquo; or \u0026ldquo;validity\u0026rdquo; of path structure, because it isn\u0026rsquo;t.\nLooks like there\u0026rsquo;s an open issue on this.\nRegex Early doors I was trying to grab the server, share and remaining path from a string: \\\\server\\share\\folder\\folder.\nCody Mathis threw me a little snippet of regex and from then on my eyes were opened. I\u0026rsquo;m even thinking of going back to a previous project PoSH Hyper Cloud to use more regex and maybe even jobs! regex101.com is my go-to for testing expressions.\nEnumerations Patrick Seymour recently wrote Test-FileSystemAccess and I enhanced it a little to return exit codes and determine if elevation is required. Some feedback by Chris Dent taught me about enums! They\u0026rsquo;re awesome. An excellent way to create a relationship between numbers and strings. I used the below resources to implement an enum into Test-FileSystemAccess:\nWorking with enums in PowerShell Creating enums in PowerShell Tip! If you want to get the full parameter type of a parameter that\u0026rsquo;s an enum, so you can use [Enum]::GetNames() on it, the below will help:\n(Get-Command Set-ExecutionPolicy).Parameters[\u0026#39;ExecutionPolicy\u0026#39;] | Select -ExpandProperty ParameterType DataTables and filtering I recently tackled an issue with the script where filtering large collections was a problem. Chris Kibble once showed me DataTables and I remember a remark about them being quick. He wasn\u0026rsquo;t joking! 
Checkout the below!\n$winFiles = Get-ChildItem c:\\windows -Recurse -ErrorAction SilentlyContinue $commands = @{ \u0026#39;Where-Object\u0026#39; = { $exeFiles = $winFiles | Where-Object { $_.Extension -eq \u0026#34;.exe\u0026#34; } } \u0026#39;Where-Object (no script block)\u0026#39; = { $exeFiles = $winFiles | Where-Object Extension -eq \u0026#34;.exe\u0026#34; } \u0026#39;.Where\u0026#39; = { $exeFiles = $winFiles.Where{ $_.Extension -eq \u0026#34;.exe\u0026#34; } } \u0026#39;DataTable\u0026#39; = { $fileTable = New-Object System.Data.DataTable $fileTable.TableName = \u0026#34;AllMuhFiles\u0026#34; [void]$fileTable.Columns.Add(\u0026#34;FileName\u0026#34;) [void]$fileTable.Columns.Add(\u0026#34;Parent\u0026#34;) [void]$fileTable.Columns.Add(\u0026#34;Extension\u0026#34;) [void]$fileTable.Columns.Add(\u0026#34;IsInf\u0026#34;) ForEach($file in $winFiles) { [void]$fileTable.Rows.Add($file.Name, $file.Parent, $file.Extension) } $exeFiles = $fileTable.Select(\u0026#34;Extension like \u0026#39;.exe\u0026#39;\u0026#34;) } \u0026#39;foreach\u0026#39; = { $exefiles = foreach ($file in $winFiles) { if ($file.Extension -eq \u0026#39;.exe\u0026#39;) { $file } } } } $commands.Keys | ForEach-Object { $testName = $_ 1..3 | ForEach-Object { [PSCustomObject]@{ TestName = $testName Attempt = $_ ElapsedMS = (Measure-Command -Expression $commands[$testName]).TotalMilliseconds } } } Results:\nTesting Where-Object 2849.362 2926.2761 2876.9806 Testing .Where 1045.712 1037.4888 1031.191 Testing DataTable select 280.6466 272.8503 272.5341 I started looking in to DataTables because on large collections (hundreds of thousands sort of size) Where-Object was slow. While the Where() method was faster, both were nothing compared to DataTables.\nWith that said, if a cmdlet has a -Filter parameter available to let you work directly with a provider, then use that. 
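As a quick illustration of that last point (a sketch of mine, not part of the original benchmark), the FileSystem provider applies -Filter before any objects ever reach the pipeline:

```powershell
# -Filter is handled by the FileSystem provider itself, so non-matching
# files are discarded before FileInfo objects are created and piped along.
$exeFiles = Get-ChildItem -Path C:\Windows\System32 -Filter '*.exe' -ErrorAction SilentlyContinue

# Contrast with pipeline filtering, which materialises every file first
# and then throws most of them away:
$slowExeFiles = Get-ChildItem -Path C:\Windows\System32 -ErrorAction SilentlyContinue |
    Where-Object Extension -eq '.exe'
```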
\u0026ldquo;Keep left\u0026rdquo; or \u0026ldquo;filter left, format right\u0026rdquo; are phrases I learnt and try to keep in mind when filtering.\nI abandoned the use of DataTables in the end as I didn\u0026rsquo;t need it. Even though they need additional set up, as you can see from the previous snippet, the results might be worth it in some cases.\nResources I used to learn about filtering and the performance impact\nExplains where-object and where, and the -filter for provider specific stuff Explains the object type returned by where-object and where Talks about \u0026ldquo;keeping left\u0026rdquo; for best performance. -Filter is king if available but only if you don\u0026rsquo;t want all the objects in an array for later consumption ","date":"2019-07-07T00:00:00+01:00","permalink":"https://adamcook.io/p/get-cmunusedsources/","title":"Get CMUnusedSources"},{"content":"Currently my lab\u0026rsquo;s primary site is running on 2012 R2. I recently wanted to try LEDBAT so I could sell it to a customer but it needed to be on 2016 or newer. I could have configured a 2016 \u0026ldquo;source DP\u0026rdquo; using pull distribution points and still get the experience, but then I was curious if IPU while ConfigMgr is installed was possible, turns out it is. Then I discovered I had SQL 2012 SP2 installed! So I bumped it to SP4. Then I thought, why not go to 2017?\nI\u0026rsquo;ve applied service packs to SQL for ConfigMgr before but never a major release upgrade. 
Turns out, it\u0026rsquo;s straight forward!\nThese are the resources I found useful and guided me:\nHardware and Software Requirements for Installing SQL Server - Microsoft Supported Version and Edition Upgrades - Microsoft Supported SQL Server versions for Configuration Manager - Microsoft Configuring reporting in System Center Configuration Manager - Microsoft Upgrading your SCCM site database - Paul Winstanley SCCM 1606 Upgrade SQL Server 2014 To SQL Server 2016 -Prajwal Desai Patching SQL Server with SCCM - Octavian Cordos The Complete Guide for SCCM Server Migration Part 1 – SQL 2017 - Rajul OS Overview We\u0026rsquo;ll migrate custom reports, remove the Report Services Point role, install SSRS 2017, upgrade to SQL Server 2017, reinstall the Report Services Point role, and finally talk about anti-virus exclusions and new CE levels for your ConfigMgr db.\nPlease don\u0026rsquo;t hate my SQL instance name. I started the environment off trying out a ConfigMgr database move so at the time I wanted the instance name to be distinctively different!\nSQL Server Reporting Services Upgrade SSRS is now a separate installer and the installation of SQL Server 2017 will remove any bundled SSRS components you have installed.\nSo, how do we handle our current reports and ReportServer database?\nReport migration If you google \u0026ldquo;sql reporting services migration\u0026rdquo; you\u0026rsquo;ll quickly learn many people have had a stab at this. Below are a few I\u0026rsquo;ve put together, though honestly, if you have a handful, just export the .rdl files manually and be done with it.\nCraig Porteous has a great write up talking about various ways to tackle this, strongly encourage you to read the post. Be sure to check out his method of tackling the problem using mostly PowerShell and the dbatools module. Reporting Services Migration Tool which is Microsoft\u0026rsquo;s own attempt at being a solution, made back in 2012 and not been touched since. 
As Craig mentions in his post, it\u0026rsquo;s written in PoSH and by default the target instance must be a SharePoint integrated instance - apparently easily modifiable to point to a regular SQL instance. RS.exe ssrs_migration.rss script is another Microsoft attempt, which looks like a more recent attempt. ReportSync which is an open source tool under MIT license, seemingly popular however it\u0026rsquo;s old and unloved SCCM Report Manager Tool Reporting Service Point role After you upgrade SQL Server, and SQL Server Reporting Services that is used as the data source for a reporting services point, you might experience errors when you run or edit reports from the Configuration Manager console. For reporting to work properly from the Configuration Manager console, you must remove the reporting services point site system role for the site and reinstall it. However, after the upgrade you can continue to run and edit reports successfully from an Internet browser.\nConfiguring Reporting - Upgrading SQL Server - Microsoft\nWith that in mind, you may as well remove the RSP role before you start. There\u0026rsquo;s no harm in doing this after the upgrade, you\u0026rsquo;ll just get a moaning RSP component in console is all because it can\u0026rsquo;t start the SSRS service tied to the RSP that the install for SQL Server 2017 removed.\nCheckout srsrp.log, srsrpMSI.log, srsrpsetup.log and sitecomp.log if there are any issues with this.\nInstalling SSRS 2017 Once you\u0026rsquo;ve decided what you want to do with your custom reports and removed the Reporting Service Point role, we can start installing SSRS 2017. You\u0026rsquo;ll need your SQL Server product key, check out How to find the product key for SQL Server 2017 Reporting Services.\nAfter install for SSRS 2017 you\u0026rsquo;ll have the option in configuration to create a new database or choose an existing one. 
As your current SSRS instance still exists you must create a new database with a unique name and also create a web service point with a unique name too.\n\u0026#x2139;\u0026#xfe0f; Note: After entering data in any section ensure you hit Apply at the bottom.\nSQL Server Upgrade Now we can get started upgrading the DB engine.\nStart off by stopping and disabling all ConfigMgr services until we\u0026rsquo;re ready. I disabled mine in case I needed to reboot and didn\u0026rsquo;t want to run the risk of ConfigMgr running on an unsupported SQL backend in fear of the gremlins.\n$Services = \u0026#34;SMS_EXECUTIVE\u0026#34;, \u0026#34;SMS_NOTIFICATION_SERVER\u0026#34;, \u0026#34;SMS_SITE_BACKUP\u0026#34;, \u0026#34;SMS_SITE_COMPONENT_MANAGER\u0026#34;, \u0026#34;SMS_SITE_SQL_BACKUP\u0026#34;, \u0026#34;SMS_SITE_VSS_WRITER\u0026#34;, \u0026#34;CONFIGURATION_MANAGER_UPDATE\u0026#34; ForEach ($Service in $Services) { Write-Host \u0026#34;$($Service): $((Get-Service -Name $Service).StartType)\u0026#34; Stop-Service -Name $Service -Force Set-Service -Name $Service -StartupType Disabled } Installing SQL Server 2017 Run setup.exe in the installation media and work your way through the wizard:\nI want to point out that I ran through this process three times and played hot potato with the \u0026ldquo;Use Microsoft Update to check for updates (recommended)\u0026rdquo; and I found it did not install any cumulative updates.\nRestart \u0026#x1f604;\nAt this point I\u0026rsquo;m at RTM build number:\nWhich at the time of writing this, that is not supported for ConfigMgr. 
Minimum CU2 is needed for SQL Server 2017:\nSupported SQL Server versions for Configuration Manager - Microsoft\nRun through the CU wizard, for me this was CU 13 which took me to 14.0.3048:\nRestart \u0026#x1f604;\nRestore the services startup types back to what they were and do one more final reboot to ensure all needed services start normally:\n$Services = \u0026#34;SMS_EXECUTIVE\u0026#34;, \u0026#34;SMS_NOTIFICATION_SERVER\u0026#34;, \u0026#34;SMS_SITE_BACKUP\u0026#34;, \u0026#34;SMS_SITE_COMPONENT_MANAGER\u0026#34;, \u0026#34;SMS_SITE_SQL_BACKUP\u0026#34;, \u0026#34;SMS_SITE_VSS_WRITER\u0026#34;, \u0026#34;CONFIGURATION_MANAGER_UPDATE\u0026#34; ForEach ($Service in $Services) { If ((\u0026#34;SMS_NOTIFICATION_SERVER\u0026#34;, \u0026#34;SMS_SITE_BACKUP\u0026#34;) -contains $Service) { Set-Service -Name $Service -StartupType Manual } Else { Set-Service -Name $Service -StartupType Automatic } } If there are issues with the site connecting to the database, you will most likely see messages of interest in:\nLog file name Description smsprov.log Records WMI provider access to the site database. smsdbmon.log Records database changes. statmgr.log Records the writing of all status messages to the database. Log files for troubleshooting\nReinstall Report Services Point role Now you should be good to go by adding back the RSP!\nAgain, checkout srsrp.log, srsrpMSI.log, srsrpsetup.log and sitecomp.log if there are any issues with this.\n\u0026#x2139;\u0026#xfe0f; Note: Reports may take a little while to appear in console, you can monitor the progress of them being generated in srsrp.log.\nPost SQL upgrade tasks Antivirus exclusions Slightly off-topic, but bookmark this link right now.\nBack on-topic. 
This is somewhat obvious but can easily be forgotten: update your exclusion rules to reflect the new paths for SQL Server + SSRS 2017!\nConfiguration Manager Current Branch Antivirus Exclusions - Microsoft Cardinality Estimation I recently saw a tweet by Umair Khan:\nConfigMgr Current branch (1810+) guidance for the SQL CE levels with various SQL versions https://t.co/0M2lryd3FJ#ConfigMgr #CardinalityEstimation #SQL\n\u0026mdash; 𝚄𝚖𝚊𝚒𝚛 𝙺𝚑𝚊𝚗 [𝚄𝙺] (@UmairMSFT) January 30, 2019 The referenced blog post taught me what Cardinality Estimation is and what Umair\u0026rsquo;s guidance means. I highly encourage you to read it if you\u0026rsquo;ve just updated from an old SQL version to 2017.\nAs a result you may want to update your ConfigMgr DB backend CE level to something like 140. As I upgraded away from 2012, my ConfigMgr DB and new ReportServer2017 DB were at CE level 110.\nYou can read up more on Cardinality Estimation and guidance on how to change your CE level here:\nCardinality Estimation (SQL Server) - Microsoft Final comments I started off with SSRS before SQL Server because I didn\u0026rsquo;t want anyone following this post to remove their SSRS instance, with a boat load of custom reports, without realising the impact. 
I figured if I made the needed warnings first then it would be safer for you.\nThe added benefits of doing SSRS after the SQL upgrade are, I guess:\nthe CE level for the ReportServer DB would default to the latest; and the ability to use the current web URLs and maybe even database name too, if you\u0026rsquo;re brave enough to delete/rename the original ","date":"2019-03-20T00:00:00+01:00","image":"https://adamcook.io/p/configmgr-sql-server-upgrade-from-2012-sp4-to-2017-cu13/images/cover_hu_efbc255c62811514.jpg","permalink":"https://adamcook.io/p/configmgr-sql-server-upgrade-from-2012-sp4-to-2017-cu13/","title":"ConfigMgr SQL Server Upgrade From 2012 SP4 to 2017 CU13"},{"content":"I run LibreNMS in my homelab on an Ubuntu Server VM and it\u0026rsquo;s awesome.\nFor a while I\u0026rsquo;ve been getting notifications about needing to bump from PHP 7.0 to 7.2 minimum.\nI mostly followed this post but I\u0026rsquo;ll detail a step that helped me automate installing and uninstalling all the extra modules, which the last two steps seem to miss. I\u0026rsquo;ll also detail a snag I hit with nginx slapping me with a 500 after updating to PHP 7.2.\nAdd PPA Get current packages/modules Install 7.2 Install additional packages/modules Remove old packages/modules Add PPA Yet another PPA, sorry. 
However it does look like Ondřej Surý is a reputable Debian developer:\nhttps://twitter.com/oerdnj https://deb.sury.org https://github.com/oerdnj sudo add-apt-repository ppa:ondrej/php sudo apt update Important: Make sure you read the caveats on the PPA archive webpage; you\u0026rsquo;re recommended to add separate archives depending on whether you\u0026rsquo;re running apache2 or nginx.\nGet current packages/modules This is helpful to identify what additional PHP packages you currently have installed so you can install the 7.2 equivalent too.\nWorth pointing out I noticed the mcrypt package was no longer a thing in 7.2, however LibreNMS did not moan at all without it. Their current installation docs do not install the mcrypt package, so that tells me it\u0026rsquo;s no longer a dependency.\ndpkg -l | grep php Install 7.2 apt install php7.2 php7.2-common php7.2-cli php7.2-fpm Install additional packages/modules This bit was where I wanted to flesh it out a bit. I wanted the same modules I had for 7.0 in 7.2.\nI already got the stdout stream of what\u0026rsquo;s installed from dpkg -l | grep php and I really didn\u0026rsquo;t want to re-type the results into apt install or do any sort of copy and pasting. 
I wanted to make the effort to manipulate the stdout and this is what I came up with:\ndpkg -l | grep php | cut -d\u0026#39; \u0026#39; -f3 | grep 7.0 | sed \u0026#39;s/7.0/7.2/g\u0026#39; | awk \u0026#39;{print \u0026#34;apt install \u0026#34; $1 \u0026#34; -y\u0026#34;}\u0026#39; | xargs -0 bash -c Grab all that\u0026rsquo;s installed Keyword search for results with \u0026ldquo;php\u0026rdquo; Separate each line with a space as delimiter and grab the third field Search for only packages with \u0026ldquo;7.0\u0026rdquo; in the string Substitute strings containing \u0026ldquo;7.0\u0026rdquo; with \u0026ldquo;7.2\u0026rdquo; For each result create a custom string that I will then execute as a command using xargs Remove old packages/modules More or less the same as the last. I now wanted to remove PHP 7.0.\nWith most things, you can achieve the same result using other methods. I made the below for no reason other than laziness. All I had to do was hit UP to get the previous command and tweak it slightly to run remove --purge rather than install.\ndpkg -l | grep php | cut -d\u0026#39; \u0026#39; -f3 | grep 7.0 | awk \u0026#39;{print \u0026#34;apt remove --purge \u0026#34; $1 \u0026#34; -y\u0026#34;}\u0026#39; | xargs -0 bash -c The 500 snag So after updating, rebooting and running ./daily.sh and ./validate.php, all appeared well in the results. However the web server was failing with status code 500. 
A good place to start, if I\u0026rsquo;ve learnt anything with ConfigMgr over the last 3 years, is always the log /var/log/nginx/error.log:\n2019/03/16 13:17:16 [crit] 1103#1103: *1 connect() to unix:/var/run/php/php7.0-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.0.129, server: librenms.domain.com, request: \u0026#34;GET / HTTP/1.1\u0026#34;, upstream: \u0026#34;fastcgi://unix:/var/run/php/php7.0-fpm.sock:\u0026#34;, host: \u0026#34;192.168.0.104\u0026#34; Looked like the web service was still trying to grab a socket file but it didn\u0026rsquo;t exist. I poked around a little in /etc/nginx and within /etc/nginx/conf.d/librenms.conf sure enough there was a hardcoded reference using the offended path. I simply updated it to 7.2 instead.\n... location ~ \\.php { include fastcgi.conf; fastcgi_split_path_info ^(.+\\.php)(/.+)$; fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; } ... ","date":"2019-03-16T00:00:00+01:00","permalink":"https://adamcook.io/p/upgrading-to-php-7.2-on-ubuntu/","title":"Upgrading to PHP 7.2 on Ubuntu"},{"content":"\nGlorious, 0x80004005.\nNo updates showing in Software Center? Is a client reporting in an unknown state in your reports?\nDo you see this in UpdatesDeployment.log when triggering Software Update Deployment Evaluation Cycle?\nJob error (0x80004005) received for assignment ({09985588-0eff-47e1-9f9d-3846efce8c2b}) action\tUpdates will not be made available Give this a whirl:\nMove-Item -Path \u0026#34;$env:SystemRoot\\System32\\GroupPolicy\\Machine\\Registry.pol\u0026#34; -Destination \u0026#34;$env:SystemRoot\\System32\\GroupPolicy\\Machine\\Registry.pol.old\u0026#34; Invoke-GPUpdate -Force Verify a new Registry.pol file is created, then trigger Software Update Deployment Evaluation Cycle and Software Updates Scan Cycle.\nHow did I discover? This poor soul. Why am I sharing again? 
Because when you\u0026rsquo;re given 0x80004005 you may feel at wit\u0026rsquo;s end and struggle for direction - I\u0026rsquo;m just contributing to the Google fu pool.\n","date":"2019-03-01T00:00:00+01:00","image":"https://adamcook.io/p/updatesdeployment.log-job-error-0x80004005/images/cover_hu_1e40537857d9cb6b.jpg","permalink":"https://adamcook.io/p/updatesdeployment.log-job-error-0x80004005/","title":"UpdatesDeployment.log Job Error (0x80004005)"},{"content":"Want to get Windows Update status on a bunch of machines, only to discover the InstalledOn property is blank for lots of updates when you\u0026rsquo;re using Get-HotFix or querying Win32_QuickFixEngineering class with Get-WmiObject?\nTurns out it\u0026rsquo;s related to PowerShell parsing the date as string to datetime object incorrectly, depending what your system locale is.\n$Updates = Get-HotFix | Select-Object description,hotfixid,installedby,@{l=\u0026#34;InstalledOn\u0026#34;;e={[DateTime]::Parse($_.psbase.properties[\u0026#34;installedon\u0026#34;].value,$([System.Globalization.CultureInfo]::GetCultureInfo(\u0026#34;en-US\u0026#34;)))}} $Updates | Sort InstalledOn -Descending | Select -First 10 I like the above approach, as posted by user pyro3113, because the select expression lets you easily query remote systems using Get-HotFix or Get-WmiObject.\nThese two solutions use the Microsoft.Update.Session COM object. The Windows Update PowerShell Module creates the COM object on a remote host whereas the post by britv8 shows just enough to get you going executing on local host:\nWindows Update PowerShell Module Getting Windows Updates installation history shared by Spiceworks member britv8 Interestingly, using the below command in cmd doesn\u0026rsquo;t suffer from this data type parsing issue. 
It\u0026rsquo;s just spat out as is with no interpretation.\nwmic qfe ","date":"2019-02-18T00:00:00+01:00","image":"https://adamcook.io/p/blank-installedon-property-for-get-hotfix-and-win32_quickfixengineering/images/cover_hu_984732f915f6a99b.jpg","permalink":"https://adamcook.io/p/blank-installedon-property-for-get-hotfix-and-win32_quickfixengineering/","title":"Blank InstalledOn Property for Get-HotFix and Win32_QuickFixEngineering"},{"content":" \u0026#x2139;\u0026#xfe0f; It looks like as I was writing this Microsoft have just publicly released a former internal document detailing some things that I had claimed to be undocumented (click here):\nIn this post I\u0026rsquo;ll cover ways you can control bandwidth for Configuration Manager using the rate limit config available for each distribution point. Outside of rate limits, a modern approach to traffic shaping content distribution to DPs is LEDBAT. Read this great write up by Daniel Olsson for more LEDBAT info in this use case.\nHow familiar are you with the difference between the options \u0026ldquo;pulse mode\u0026rdquo; and \u0026ldquo;limited to specified maximum transfer per hour\u0026rdquo;? I wasn\u0026rsquo;t sure up until recently, and like me you may have misunderstood the % you\u0026rsquo;re applying for each hour slot, when you may want pulse mode instead.\nPulse mode Pulse mode is simple. Pump out x sized data block every y seconds. 
Unfortunately, the maximum data block size is 256KB - enabling you to set a maximum cap of 2Mbps - so I support \u0026lsquo;Increase block size for pulse mode (distribution point rate limit)\u0026rsquo;, do you?\nLimited to specified maximum transfer per hour As for \u0026ldquo;limited to specified maximum transfer per hour\u0026rdquo;\u0026hellip; Turns out the % you apply for each hour determines how long Configuration Manager can send using 100% of the available bandwidth, followed by how long Configuration Manager will not transmit data for, during that hour slot. The documentation explains this well.\nHowever it does not explain the behaviour. For example, if you configured it to be 50/50, does this mean it\u0026rsquo;s pumping content flat out for 30 consecutive minutes of the hour and dead quiet for the other 30 consecutive minutes? Perhaps that\u0026rsquo;s too much detail to care for, and simply knowing that the DP will go full-pelt for 50% of the hour is good enough for you. But I was curious.\nIn my lab, I set a distribution point\u0026rsquo;s \u0026ldquo;limit to specified maximum transfer per hour\u0026rdquo; to 10%. On the SCCM VM I set a bandwidth maximum limit of 10Mbps; in Hyper-V this is only for outbound traffic, but that\u0026rsquo;s OK, it\u0026rsquo;s what I\u0026rsquo;m testing. The bandwidth limit is mostly so I can transfer a 12GB package and make it run for long enough to watch what\u0026rsquo;s going on.\nWith perfmon I wanted to cover 3600 seconds, and being limited to just 1000 samples, I could only go down to 1 sample every 4 seconds.\nAfter a few minutes of watching perfmon, task manager and resource monitor, it became clear that perfmon\u0026rsquo;s graph is not a true reflection. While it shows the behaviour, it doesn\u0026rsquo;t show that Configuration Manager was actually pulsing at 10Mbps (or thereabouts). 
The perfmon graph, and the calculations made by SMS_PACKAGE_TRANSFER_MANAGER, seemed to be perfectly timed so that perfmon could not plot/sample the peak of each pulse.\nI could have covered fewer seconds and sampled every 1 second and maybe caught the pulses in the graph. But the demonstration was to monitor the behaviour for the configured hour slot. So no, I wanted to look at the results for the last hour.\nFrom the above two log file snippets, I\u0026rsquo;ve learnt that SMS_PACKAGE_TRANSFER_MANAGER:\nDetermines the current available bandwidth capacity Knowing the current available bandwidth capacity, calculates when the next pulse will be, while still satisfying the config applied by the admin (in my case, 10%) So it\u0026rsquo;s not black and white like I first thought, e.g. flat out for 10% of the hour and quiet for the remainder. From what I\u0026rsquo;ve learnt I appreciate the current behaviour is probably most ideal; it doesn\u0026rsquo;t create prolonged periods of congestion.\nWith the current limitation of pulse mode (256KB block size), using this config could enable you to achieve a greater average. However, for brief moments of time during the hour you may still saturate bandwidth at either end, causing spikes of latency – albeit for a second, not even that. 
I only discovered it while learning about the “limited to specified maximum transfer per hour” discussed in the last section, from a great write up here by Chris Nienabar.\nSoftware Distribution Component This brings me on to the two relevant options in the Software Distribution Component.\nWhere are the properties for the Software Distribution Component? Administration \u0026gt; Sites \u0026gt; select your site \u0026gt; Configure Site Components \u0026gt; Software Distribution Components\nThe two key options here are Maximum number of packages and Maximum threads per package. This is what the documentation has to say about it:\nOn the General tab, specify settings that modify how the site server transfers content to its distribution points. When you increase the values you use for concurrent distribution settings, content distribution can use more network bandwidth.\nI don\u0026rsquo;t want to come across like I\u0026rsquo;m hating on Microsoft\u0026rsquo;s documentation because I see this dude\u0026rsquo;s tweets and I can tell hard work goes into making things as great as they are. But hopefully I\u0026rsquo;ll have an opportunity here to further explain these two options, probably more so the second than the first:\nMaximum number of packages: The number of packages that can be concurrently pushed.\nMaximum threads per package: The number of distribution points that the distribution manager can concurrently push content to.\nThe defaults are 3 and 5, so in effect you can concurrently transfer 3 packages, each to up to 5 distribution points at a time.\n","date":"2019-02-17T00:00:00+01:00","image":"https://adamcook.io/p/configmgr-throttling-distribution-point-bandwidth/images/cover_hu_9c9017f19bdec3e.jpg","permalink":"https://adamcook.io/p/configmgr-throttling-distribution-point-bandwidth/","title":"ConfigMgr Throttling Distribution Point Bandwidth"},{"content":"I\u0026rsquo;ve recently updated a site to 1806 and was keen to get all clients up to date too. 
Some machines targeted for the pre-production client were not upgrading. When I looked closer at one client I suspected client health because a hardware inventory had not been submitted in months.\nLittle did I know until recently, a lack of recent hardware inventory data could be a result of other things and client health is least likely to be the root cause. Instead of blindly reinstalling or manually upgrading the client, I wanted to try and understand what\u0026rsquo;s going on.\nIf you\u0026rsquo;re concerned about client health look at Anders Rodland\u0026rsquo;s ConfigMgr Client Health script.\nThere are great articles already out there on this topic, here\u0026rsquo;s some I used to help me with this issue:\nHardware Inventory – In-Depth (Part 1) Updated – Troubleshoot ConfigMgr Hardware Inventory Issues Troubleshooting SCCM ..Part II (Hardware Inventory) Solved: Troubleshooting Hardware Inventory in SCCM | Step By Step Guide From the above I understood the problem could be:\nBroken WMI repository on the client BITS on the client is failing to POST the data to the MP The INVENTORY_DATA_LOADER (?) refusing to handle the MIF because perhaps: bad syntax mismatch in versions too big as it exceeds the MAX FILE SIZE Client side First port of call was InventoryAgent.log on the client. This log file records activity on the client about hardware and software inventory processes and heartbeat discovery. 
At the start of the cycle you will see the log tell us which action it\u0026rsquo;s performing and whether it\u0026rsquo;s a full, delta or resync report.\nAction Guid Hardware inventory {00000000-0000-0000-0000-000000000001} Software inventory {00000000-0000-0000-0000-000000000002} Data Discovery Record (DDR) {00000000-0000-0000-0000-000000000003} File collection {00000000-0000-0000-0000-000000000010} Looking at InventoryAgent.log above you can see:\nThe hardware inventory action took place, by looking at the GUID The MajorVersion and MinorVersion; the MajorVersion increments with every full or resync report and the MinorVersion increments with every delta report. Full is triggered for the initial report Delta is triggered for each hardware inventory after the full Resync is triggered either when the client recognises a mismatch in report versions compared to what has previously been executed, or when SMS_INVENTORY_DATA_LOADER finds a mismatch between what the client has sent and what\u0026rsquo;s in the site database You can manually trigger a delta by running the Hardware Inventory Cycle in the Control Panel applet, however consider running the below if you want to force a full report. 
Also consider using Recast\u0026rsquo;s Right Click Tools which gives you this option from the console.\nGet-CimInstance -Namespace \u0026#34;root\\ccm\\invagt\u0026#34; -ClassName \u0026#34;InventoryActionStatus\u0026#34; | Where-Object { $_.InventoryActionID -eq \u0026#34;{00000000-0000-0000-0000-000000000001}\u0026#34; } | Remove-CimInstance $InvokeCimMethodSplat = @{ ComputerName = $env:COMPUTERNAME Namespace = \u0026#34;root\\ccm\u0026#34; ClassName = \u0026#34;SMS_Client\u0026#34; MethodName = \u0026#34;TriggerSchedule\u0026#34; Arguments = @{ sScheduleID = \u0026#34;{00000000-0000-0000-0000-000000000001}\u0026#34; } } Invoke-CimMethod @InvokeCimMethodSplat Going back to the problem I was troubleshooting\u0026hellip; I was confident WMI was healthy because I could see various areas of WMI being successfully queried, and at the end of the cycle I saw \u0026ldquo;Inventory: Successfully sent report. Destination:mp:MP_HinvEndpoint\u0026hellip;\u0026rdquo;. After seeing that, I figured let\u0026rsquo;s look server side.\nServer side The three key areas server side in this scenario are:\nSMS_INVENTORY_DATA_LOADER - dataldr.log Management Point - MP_hinv.log IIS - C:\\inetpub\\logs\\LogFiles - This is the default location for IIS logs; if everything else is default, your SCCM IIS logs should be in a sub folder named W3SVC1. You can find out yourself by looking in IIS Manager. In all three of the above, I could not see any reference to the machine\u0026rsquo;s name, IP or GUID.\nBack to the client Looking back at the client, I could see a big queue of BITS jobs with mixed states: suspended, queued, error and transient_error. 
You can view the current BITS jobs using the below commands:

CMD:

bitsadmin /list /allusers

PowerShell:

Get-BitsTransfer -AllUsers

Get more information about why they\u0026rsquo;re failing with:

CMD:

bitsadmin /info JOBID /verbose

PowerShell:

Get-BitsTransfer -JobId JOBID | Select-Object *

To reset all BITS jobs\u0026hellip; You will notice most of the jobs are owned by SYSTEM or NETWORK SERVICE, so commands run in any other context will not succeed in purging them. A workaround could be to create a scheduled task that runs as SYSTEM to run either of the below:

CMD:

bitsadmin /reset /allusers

PowerShell:

Get-BitsTransfer -AllUsers | Remove-BitsTransfer

However, an easier way posted by Nickolaj Andersen works well:

net stop BITS
ipconfig /flushdns
ren \u0026#34;%ALLUSERSPROFILE%\\Application Data\\Microsoft\\Network\\Downloader\\qmgr0.dat\u0026#34; qmgr0.dat.old
ren \u0026#34;%ALLUSERSPROFILE%\\Application Data\\Microsoft\\Network\\Downloader\\qmgr1.dat\u0026#34; qmgr1.dat.old
net start BITS

For me, the problem was proxy related: the verbose BITS job information showed error messages referencing timeouts. After fixing that and clearing all jobs, I triggered another full hardware inventory report and quickly saw the BITS job leave the client with no hang-ups.

Back to the server

Starting off with the IIS logs, I saw a successful POST from the client, so the data uploaded OK.

Next, I wanted to see how the management point handles the report. In the below you can see that it successfully receives it in XML form and parses it into a MIF file, ready to be parsed by SMS_INVENTORY_DATA_LOADER and submitted into the site database. Here, I had no issues.

However, I did have issues with SMS_INVENTORY_DATA_LOADER. Nothing too complicated though: the MIF was simply bigger than the allowed MIF size. 
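Before changing anything, it helps to quantify how far over the limit the report is. A minimal sketch, assuming a hypothetical dataldr.log style error line (not the exact ConfigMgr wording - check your own dataldr.log) and roughly 20% headroom:

```python
import re

# Sketch: pull both sizes out of a dataldr.log style error and compute a
# new limit with ~20% headroom. The log line below is a hypothetical
# example, not the exact ConfigMgr wording.
line = "MIF file rejected: size 6291456 exceeds maximum allowed size 5242880"
observed, limit = (int(n) for n in re.findall(r"\d+", line))

new_limit = int(observed * 1.2)  # observed size plus ~20% headroom
print(observed - limit, new_limit)  # 1048576 7549747
```

Whatever you compute maps straight onto the registry value, since it is a decimal number of bytes.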
I suspect this is because it\u0026rsquo;s a mature server that has been in production a while and accumulated a lot of \u0026ldquo;things\u0026rdquo;, plus whatever additional hardware classes are configured in client settings for hardware inventory - and the fact it\u0026rsquo;s a full report.

The below suggests my maximum allowed size is 1 byte; this was true just for the purpose of creating this screenshot.

On the primary site server I increased HKLM\\Software\\Microsoft\\SMS\\Components\\SMS_INVENTORY_DATA_LOADER\\Max MIF Size. The decimal value is in bytes, so set it to a value at your discretion; your best gauge is to see how far over the limit you are by observing the error in dataldr.log, and add a little more. Then restart the SMS_INVENTORY_DATA_LOADER component in Configuration Manager Service Manager (Monitoring \u0026gt; Component Status \u0026gt; right click any component \u0026gt; Start \u0026gt; Configuration Manager Service Manager).

Some time later, the hardware scan timestamp for the device in the console updated and I could see the new data in the device\u0026rsquo;s Resource Explorer.

I felt this was worth writing up because, while the links I shared at the start cover most of what I\u0026rsquo;ve written, I wanted to show that BITS is a factor - the other posts don\u0026rsquo;t touch on this - and to keep it as a note for my future self. 
You may find you\u0026rsquo;ll only hit one or two of these issues when troubleshooting hardware inventory; it\u0026rsquo;s unlikely you\u0026rsquo;ll hit all of them in one go like I almost did.\n","date":"2018-10-28T00:00:00+01:00","image":"https://adamcook.io/p/configmgr-hardware-inventory-troubleshooting/images/cover_hu_41198f0248af5117.jpg","permalink":"https://adamcook.io/p/configmgr-hardware-inventory-troubleshooting/","title":"ConfigMgr Hardware Inventory Troubleshooting"},{"content":"For my home lab on a particular VM that runs Ubuntu Server, I\u0026rsquo;m interested to know the public IP of my LAN\u0026rsquo;s gateway and the VM\u0026rsquo;s gateway (as it\u0026rsquo;s always connected to VPN). Below is a script I wrote to pull the WAN IP from my DD-WRT router and compare it to the VM\u0026rsquo;s gateway public IP.\nMy copy does other things too, like starting/stopping services, configuring local applications in my home lab, and using Pushbullet to notify me if certain conditions are met. This should get you going; hopefully it helps someone.

#!/bin/bash
x=$(wget \u0026#34;http://192.168.0.1/Status_Internet.live.asp\u0026#34; --user=\u0026#34;USERNAME\u0026#34; --password=\u0026#34;PASSWORD\u0026#34; -q -O - | sed -e \u0026#39;s/}/\\n/g\u0026#39; -e \u0026#39;s/{//g\u0026#39;)

declare -A results
while read line; do
    IFS=\u0026#39;::\u0026#39; read -ra array \u0026lt;\u0026lt;\u0026lt; \u0026#34;$line\u0026#34;
    results[${array[0]}]=${array[2]}
done \u0026lt;\u0026lt;\u0026lt; \u0026#34;$x\u0026#34;

wanip=${results[\u0026#34;wan_ipaddr\u0026#34;]}
vmip=$(wget -q -O - https://checkip.amazonaws.com)

echo \u0026#34;$0: WAN IP: $wanip\u0026#34;
echo \u0026#34;$0: VM IP: $vmip\u0026#34;

","date":"2018-10-27T00:00:00+01:00","image":"https://adamcook.io/p/compare-dd-wrt-wan-ip-to-a-host-public-ip/images/cover_hu_ce78593b74aff356.jpg","permalink":"https://adamcook.io/p/compare-dd-wrt-wan-ip-to-a-host-public-ip/","title":"Compare DD-WRT WAN IP to a Host public IP"},{"content":"In a recent project I had to 
move the SQL database for a single primary site hierarchy to a different host in order to go beyond version 1610, because support for Windows Server 2008 R2 ended from 1702 onward.

Everything was going smoothly up until it came to modifying the SQL configuration of the site. I was getting slapped by ConfigMgrSetup.log with:

Create_BackupSQLCert : SQL server failed to backup cert.
CSiteControlSetup::SetupCertificateForSSB : Failed to create/backup SQL SSB certificate.
ERROR: Failed to set up SQL Server certificate for service broker on \u0026ldquo;sccm.contoso.com\u0026rdquo;.

After a couple of days of research and help from colleagues, I learnt more about MS SQL and SQL certificates.

I won\u0026rsquo;t overcomplicate this or pretend to know every minute detail of SQL, so I\u0026rsquo;ll cut to the chase.

During installation of SQL, I set a domain service account for the agent, database engine and reporting service. After installation, I changed the service account used for these services. Then I attempted the database migration and stumbled across said error. During my test run in a lab environment (as it was my first attempt at an SCCM database move) I did not encounter this issue, because I did not play hot potato with service accounts.

According to the links below, internal SQL certificates are generated with a dependency on the database\u0026rsquo;s master key. Changing the domain user that runs the service meant its service key could no longer access or decrypt the database\u0026rsquo;s master key.

There are methods to recreate the database master keys detailed in the links below; however, after initially trying (and failing) to use the stored procedure spCreateandBackupSQLCert, I figured a reinstall of a new SQL instance using the desired service account would do the trick. 
The penny dropped when I read the below from here:

The history was the SQL server was installed with the system account and then later changed to a domain user account.

The problem with doing the above is that when Configuration Manager is installed it creates some internal certificates which are dependent on the master key. When the account being used to run the database server changes the new account is no longer able to \u0026lsquo;unlock\u0026rsquo; the master key and consequently can not read the internal certificates which then cause communication between sites to fail.

In case you didn\u0026rsquo;t know, because I didn\u0026rsquo;t, SQL certificates are different to the certificates you may usually deal with in a user\u0026rsquo;s, computer\u0026rsquo;s or service\u0026rsquo;s personal / trusted root store from the Certificates snap-in.

Resources I used to help me understand SQL certificates:

- https://blogs.technet.microsoft.com/umairkhan/2013/12/12/configmgr-2012-drs-and-sql-service-broker-certificate-issues/
- http://www.sqlservercentral.com/articles/Encryption/108693/
- https://docs.microsoft.com/en-us/sql/relational-databases/security/encryption/encryption-hierarchy?view=sql-server-2017

Resources I used to help me with the database move:

- https://blogs.technet.microsoft.com/configurationmgr/2013/04/02/how-to-move-the-configmgr-2012-site-database-to-a-new-sql-server/
- https://deploymentresearch.com/Research/Post/646/Moving-the-ConfigMgr-Current-Branch-database-to-another-server-as-in-back-to-the-primary-site-server

","date":"2018-10-15T00:00:00+01:00","image":"https://adamcook.io/p/configmgr-database-move-failed-to-create/backup-sql-ssb-certificate/images/cover_hu_394c4a67c452a5cf.jpg","permalink":"https://adamcook.io/p/configmgr-database-move-failed-to-create/backup-sql-ssb-certificate/","title":"ConfigMgr Database Move: Failed to Create/Backup SQL SSB Certificate"}]