Toshimo
Aug 23, 2012

He's outta line...

But he's right!

mystes posted:

You need to run ShowDialog in a separate thread using Start-Job or something but it's kind of annoying in powershell.

It's usually easier to just use c# once you're doing gui stuff.

Thanks, I'll give it a shot. I don't really have the excess time and bandwidth to develop apps in another language, so I'm kinda at the point where if I can't make something work with the tools I've got, I'm just gonna not bother.

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
So, every 3 weeks I have to change the passwords on a few hundred test accounts and afterwards, the services running as those accounts have to be changed as well.

I wrote a script to go out to the associated servers and change the passwords on the services and it 100% works as-advertised except... the services tend to revert to the old password after reboot.

Is there something I'm missing here to make this persistent?

code:
gwmi -NameSpace "root\CIMV2" -Class "Win32_Service" -ComputerName $Computer_Name -Filter "Name LIKE '0%Controller'" | % { $_.StopService() }
gwmi -NameSpace "root\CIMV2" -Class "Win32_Service" -ComputerName $Computer_Name -Filter "Name LIKE '0%Controller'" | % { $_.Change($null,$null,$null,$null,$null,$null, "DOMAIN\$($_.Name -match '_(\d)_' | % { $Matches[1] })", $New_Password, $null, $null, $null ) }
gwmi -NameSpace "root\CIMV2" -Class "Win32_Service" -ComputerName $Computer_Name -Filter "Name LIKE '0%Controller'" | % { $_.StartService() }
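Since the services keep reverting, it might be worth logging whether Change() actually succeeded. A hedged sketch, using the same assumed variables as above (per the Win32_Service docs, a ReturnValue of 0 is success):

```powershell
# Sketch: surface the ReturnValue from Change() instead of discarding it.
# $Computer_Name and $New_Password are assumed from the script above.
gwmi -NameSpace "root\CIMV2" -Class "Win32_Service" -ComputerName $Computer_Name -Filter "Name LIKE '0%Controller'" | % {
    $result = $_.Change($null,$null,$null,$null,$null,$null,$null, $New_Password, $null,$null,$null)
    if ($result.ReturnValue -ne 0) {
        Write-Warning "$($_.Name): Change() returned $($result.ReturnValue)"
    }
}
```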

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

Pile Of Garbage posted:

What return value are you getting when the Change method of the Win32_Service WMI class is invoked? Going off the doco your usage looks OK however it's a bit vague as to whether the StartName parameter is always required. In the examples section it shows changing a password by specifying only the StartPassword parameter so maybe try it without the StartName parameter (e.g. $_.change($null,$null,$null,$null,$null,$null, $null, $New_Password, $null, $null, $null))?

Alternatively you may want to look at using the Set-Service cmdlet instead of WMI (Assuming your fleet has the required PS version).

I'll check what's getting returned when I get back to the office on Friday. I'm using WMI instead of Set-Service because they removed Set-Service's remote capability in v6 and later, so you have to get funky with Invoke-Command to do that now.
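If the Invoke-Command route does come up: PowerShell 7's Set-Service picked up a -Credential parameter for the service logon account, so one rough shape would be something like this (service name and variables are placeholders; check PS version availability on your fleet first):

```powershell
# Rough sketch, assuming PowerShell 7 on the remote hosts (where Set-Service has -Credential).
# $Computer_Name and $serviceCred (a PSCredential for DOMAIN\account + new password) are placeholders.
Invoke-Command -ComputerName $Computer_Name -ScriptBlock {
    param($cred)
    Set-Service -Name "SomeController" -Credential $cred   # sets the service logon account
    Restart-Service -Name "SomeController"
} -ArgumentList $serviceCred
```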

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

sloshmonger posted:

Assuming that your AD environment will support it, this is exactly the case a Managed Service Account, or Group Managed Service Account, was made for, unless you're also using those accounts to log in as or do other manual entries.

I'm not any help as to the issue you raised, though.

Yes, these are test accounts that are also logged into manually, and also used to run automated testing. So, they're simultaneously:
  1. AD accounts that can be logged into on any machine in the lab.
  2. The accounts these Jenkins services run as.
  3. The accounts they log into the mainframe as.
  4. The accounts the Selenium scripts run under.

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

New Yorp New Yorp posted:

Have you considered automating the provisioning and configuration of these machines with tools like packer and puppet? Manually rotating passwords is kind of an antiquated approach.

I don't get to make that call. The Agency mainframe requirement is password change every 28 days and we have to update AD to stay in sync. It would take a literal Act of Congress to change that, I suspect.

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

adaz posted:

I really should update the OP with all this, a lot has changed since I started this thread.

You could always.... start a new thread.

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

CzarChasm posted:

Working on a different but related script. I'm sure I'm doing this rear end backwards, but I can't seem to google the correct terms to do what I want.

In short, I need a counter that starts at a specific number and increments by 1 each time an email is processed. So, if I start my counter at 100, and process 15 emails, the counter should be at 115 at the end, and then the next time the program is run, it starts at 115. I have to start it at a specific value because this program is going to be a continuation of another program that has been running for almost 2 years now, and the counter is used as an identifier.

My first thought is to start with $counter = 100 at the start of the program, but of course, every time the program is started fresh, that counter is going to go back to 100 again.

The way I've set this in the program is to have a single line text file that starts my counter. One of the first lines I have is to read the value from this text file, and put it into the variable. Then after the processing is done, the last thing it will do before ending is to take the incremented value and write it back out to the same text file. This works, but my concern is that if my program crashes at any point before the counter is written back to the text file, my count is going to be off, and that could lead to big problems. I could add the line to write the value back to the text file immediately after the counter is incremented, and in that case we'd be talking maybe one duplicate rather than a whole day's worth.

But I still feel that I must be missing something obvious.

Use a registry key?
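Something like this, maybe (key path and value name are made up; the write happens immediately after each increment, which addresses the crash concern above):

```powershell
# Hypothetical registry-backed counter; key path and value name are illustrative.
$key = "HKCU:\Software\MailProcessor"
if (-not (Test-Path $key)) {
    New-Item -Path $key -Force | Out-Null
    New-ItemProperty -Path $key -Name Counter -Value 100 -PropertyType DWord | Out-Null
}
$counter = (Get-ItemProperty -Path $key -Name Counter).Counter

foreach ($email in $emails) {                                   # $emails assumed from the rest of the script
    # ...process one email, using $counter as its identifier...
    $counter++
    Set-ItemProperty -Path $key -Name Counter -Value $counter   # persist per item, not at script exit
}
```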

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

BeastOfExmoor posted:

I am running the following if statement to figure out if $Destfilename both exists and also is not equal to $File.Name in a ForEach loop. It works most of the time, but I noticed an issue when the filename(s) contains a '!' character. I'm assuming this is a regex issue, but in all my searching I'm not seeing a way to ignore regex and compare string variables as literals?

code:
    if($Destfilename -and $File.Name -ne $DestFilename) {

I do not understand.

When you use if($Destfilename), you are only checking to see if the variable exists. If you want to see if the file exists, you should be using Test-Path.
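For the regex worry specifically: -eq and -ne compare strings literally; regex only comes into play with operators like -match. A quick sketch:

```powershell
# Comparison operators are literal; '!' needs no escaping here.
"Test1!.txt" -ne "Test1!.txt"      # False - literal comparison
"Test1!.txt" -eq "test1!.TXT"      # True  - -eq is case-insensitive by default
"Test1!.txt" -match "^Test1!\."    # True  - -match is the regex operator
```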

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
I tried to mimic what I think your intent is, but don't see any issue.

code:
$working_dir = "C:\Users\toshimo\Documents\PowerShell\"
$File1 = "Test1.txt"
$File2 = "Test1!.txt"

$Destfilename = $File1
$File = Get-ChildItem ($working_dir + $File1)
if((Test-Path ($working_dir + $Destfilename)) -and ($File.Name -ne $DestFilename)) { echo "No Match" } else { echo "Match" }

$Destfilename = $File2
$File = Get-ChildItem ($working_dir + $File2)
if((Test-Path ($working_dir + $Destfilename)) -and ($File.Name -ne $DestFilename)) { echo "No Match" } else { echo "Match" }

$Destfilename = $File1
$File = Get-ChildItem ($working_dir + $File2)
if((Test-Path ($working_dir + $Destfilename)) -and ($File.Name -ne $DestFilename)) { echo "No Match" } else { echo "Match" }

$Destfilename = $File2
$File = Get-ChildItem ($working_dir + $File1)
if((Test-Path ($working_dir + $Destfilename)) -and ($File.Name -ne $DestFilename)) { echo "No Match" } else { echo "Match" }


    Directory: C:\Users\toshimo\Documents\PowerShell


Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----        3/23/2020   1:34 AM              8 Test1!.txt
-a----        3/23/2020   1:32 AM              8 Test1.txt
Match
Match
No Match
No Match

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
I'd try making a PSCustomObject:
code:
$mbxStats = @()

$mailbox_list = Get-Mailbox -ResultSize Unlimited 

ForEach ($mailbox in $mailbox_list) {
    $mail_stats = Get-MailboxStatistics -Identity $mailbox
    $user = Get-User -Identity $mailbox

    $userObject = [PSCustomObject]@{
            id                = ''                     # You may omit this if your database doesn't need it.
            DisplayName       = $mail_stats.DisplayName
            Department        = $user.Department
            OU                = $user.OrganizationalUnit
            LastLogonTime     = $mail_stats.LastLogonTime
        }

    Write-SqlTableData -Credential $dbCredential -ServerInstance db01 -Database email_stats -SchemaName "dbo" -TableName LastLogon -InputData $userObject
}

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

Pile Of Garbage posted:

The response body is just a string so you can then parse and deserialise as required.

Can't wait to parse some HTML with regex.

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

Buddy...

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
Trying to set something up for an installer and this is bugging the hell out of me.

CODE:
code:
$MSIResultCodes = (
    @{"0"="SUCCESS: Task Successful"},
    @{"1605"="SUCCESS: Task Successful - App not Installed"},
    @{"1618"="Failure: Another install in Progress"},
    @{"3010"="SUCCESS: Task Successful - Reboot Required"}
)

$ExitCode="1605"

$MSIResultCodes.$ExitCode
Works perfectly, except if I turn strict mode on I get a PropertyNotFoundException on the last line.
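The likely culprit: the parentheses-and-commas build an array of four one-entry hashtables, so $MSIResultCodes.$ExitCode only works through member enumeration, which strict mode rejects. A single hashtable indexes cleanly:

```powershell
Set-StrictMode -Version Latest

# One hashtable instead of an array of one-entry hashtables.
$MSIResultCodes = @{
    "0"    = "SUCCESS: Task Successful"
    "1605" = "SUCCESS: Task Successful - App not Installed"
    "1618" = "Failure: Another install in Progress"
    "3010" = "SUCCESS: Task Successful - Reboot Required"
}

$ExitCode = "1605"
$MSIResultCodes[$ExitCode]   # index lookup is strict-mode safe
```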

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

LegoMan posted:

I've been tasked with writing a script (that I already wrote in C but corporate security rules required us to shift to Powershell, don't ask) that removes the tedium of renaming a large number of files after a work activity.

I have code set up to pull the latest file from an SD Card (old rear end equipment) and rename it to a certain name. Now I want to take that file and copy/rename it 21 times (or however many pieces of equipment a certain factory has which I've set up as a variable at the top of the script so every facility can have it tailored to them)

$arm_path = "c:\agv\arm"
$disk_path = "c:\agv\disk"
$vis_path = "c:\agv\vis"
$agv_num = 21

These are values I'm using that can be changed.

I want to take the file name say "ARM01.dat" and rename it 02, 03, 04, etc until it reaches the agvnum value. I know I should use a for loop but having only been using Powershell for three days I don't know enough syntax to chop off the end two digits and replace them with the value.
Thanks

Lot of questions here:
  1. Do you care what the original filename is?
  2. Do you need the "latest" file by time or "latest" file by number?
  3. Does the original filename dictate the ending files' names?

An example here does what you are asking, but doesn't make an effort to do that style of string replacement; it just decides the ending format is going to be "ARM##.DAT" and makes files accordingly. If you need something to do actual string replacement (because your filenames may change and you'll need to change the output), let me know and I'll mock that up, too.

code:
Set-StrictMode -Version Latest

Start-Transcript -Path "C:\agv\logs\AGV-Copy-$(get-date -UFormat %Y%m%d%H%M%S).log" -NoClobber

$arm_path = "C:\agv\arm"		#Assuming this is your source directory
$disk_path = "c:\agv\disk"		#Assuming this is your destination directory
$vis_path = "c:\agv\vis"		#IDK what this is.
$agv_num = 21

if( -not (Test-Path $arm_path)) {
    Write-Host "Source Directory not found at $arm_path"
    break
}

if( -not (Test-Path $disk_path)) {
    Write-Host "Disk Directory not found at $disk_path"
    break
}

if( -not (Test-Path $vis_path)) {
    Write-Host "Vis Directory not found at $vis_path"
    break
}

$latest_source = Get-ChildItem -Path $arm_path -Filter "ARM*.dat" | Sort -Descending LastWriteTime | Select -First 1

if( -not ($latest_source)) {
    Write-Host "No Valid Source File Found."
    break
}

Write-Host "Duplicating $($latest_source.Name) $agv_num times..."

foreach( $iteration in 1..$agv_num) {
    $pad_length = ([string]$agv_num).Length
    Copy-Item "$($arm_path)\$($latest_source)" -Destination "$($disk_path)\ARM$(([string]$iteration).PadLeft($pad_length,"0")).dat" -Force
    Write-Host "$($disk_path)\ARM$(([string]$iteration).PadLeft($pad_length,"0")).dat created."
}

Stop-Transcript

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
Ok, cool. This should have everything, then.
code:
Set-StrictMode -Version Latest

Start-Transcript -Path "C:\agv\logs\AGV-Copy-$(get-date -UFormat %Y%m%d%H%M%S).log" -NoClobber

$arm_path = "C:\agv\arm"
$disk_path = "c:\agv\disk"
$vis_path = "c:\agv\vis"
$agv_num = 21

$source_prefix = "ARM"
$output_prefix = $arm_path + "\M3AGVARM"
$output_suffix = ".dat"

if( -not (Test-Path $arm_path)) {
    Write-Host "Arm Directory not found at $arm_path"
    break
}

if( -not (Test-Path $disk_path)) {
    Write-Host "Source Directory not found at $disk_path"
    break
}

if( -not (Test-Path $vis_path)) {
    Write-Host "Vis Directory not found at $vis_path"
    break
}

$latest_source = Get-ChildItem -Path $disk_path -Filter "*$($source_prefix)*.dat" | where-object { -not $_.PSIsContainer } | Sort -Descending LastWriteTime | Select -First 1

if( -not ($latest_source)) {
    Write-Host "No Valid Source File Found."
    break
}

Write-Host "Duplicating $($latest_source.Name) $agv_num times..."

foreach( $iteration in 1..$agv_num) {
    $pad_length = ([string]$agv_num).Length
    $source_path = $disk_path + "\" + $latest_source
    $output_path = $output_prefix + ([string]$iteration).PadLeft($pad_length,"0") + $output_suffix
    Copy-Item $source_path -Destination $output_path -Force
    Write-Host "$($output_path) created."
}

Stop-Transcript

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

LegoMan posted:

code:
if( -not ($latest_source)) {
    Write-Host "No Valid Source File Found."
    break
}
This is causing an unhandled exception (I added it from what you posted because as stated before if there's no file there it will continue happily creating things anyways)

I tried break, exit

is there something I can put it to just start at the beginning?

Post your error message. You have done something very wrong.

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

LegoMan posted:

even better I'll post the function

code:
Function viscopy (){
    $source_prefix = "VIS"                                         #Filter condition that only looks at arm files (in case there is a more recent visual data file)
    $output_prefix = $vis_path + "\M3VIS"                       #change to name of AGV arm file -AGV number
    $output_suffix = ".dat"
    $latest_source = Get-ChildItem -Path $disk_path -Filter "*$($source_prefix)*.dat" | where-object { -not $_.PSIsContainer } | Sort -Descending LastWriteTime | Select -First 1        #grabs latest file off disk based on filter above. Removes need for specifying which AGV used for teach
    if( -not ($latest_source)) {
        Write-Host "No Valid Source File Found."
        break}
    Write-Host "Duplicating $($latest_source.Name) $agv_num times..."
        foreach( $iteration in 1..$agv_num) {
    $pad_length = ([string]$agv_num).Length
    $source_path = $disk_path + "\" + $latest_source                                                                                                                                     #begin section for copying AGV file for every AGV
    $output_path = $output_prefix + ([string]$iteration).PadLeft($pad_length,"0") + $output_suffix
    Copy-Item $source_path -Destination $output_path -Force
    Write-Host "$($output_path) created."}}
It worked beautifully until I put in the check to see if $latest_source actually had a value. ( if( -not ($latest_source)) )

I tried this with just dropping some variables at the bottom and running the function and it worked as-is. I'd need to see your error message to know more.

code:
PS C:\tmp> Function viscopy (){
    $source_prefix = "VIS"                                         #Filter condition that only looks at arm files (in case there is a more recent visual data file)
    $output_prefix = $vis_path + "\M3VIS"                       #change to name of AGV arm file -AGV number
    $output_suffix = ".dat"
    $latest_source = Get-ChildItem -Path $disk_path -Filter "*$($source_prefix)*.dat" | where-object { -not $_.PSIsContainer } | Sort -Descending LastWriteTime | Select -First 1        #grabs latest file off disk based on filter above. Removes need for specifying which AGV used for teach
    if( -not ($latest_source)) {
        Write-Host "No Valid Source File Found."
        break}
    Write-Host "Duplicating $($latest_source.Name) $agv_num times..."
        foreach( $iteration in 1..$agv_num) {
    $pad_length = ([string]$agv_num).Length
    $source_path = $disk_path + "" + $latest_source                                                                                                                                     #begin section for copying AGV file for every AGV
    $output_path = $output_prefix + ([string]$iteration).PadLeft($pad_length,"0") + $output_suffix
    Copy-Item $source_path -Destination $output_path -Force
    Write-Host "$($output_path) created."}}

$diskpath = "C:\tmp"
$vis_path = "C:\tmp"
$agv_num = 3

viscopy
Duplicating VIS.dat 3 times...
C:\tmp\M3VIS1.dat created.
C:\tmp\M3VIS2.dat created.
C:\tmp\M3VIS3.dat created.

PS C:\tmp> 

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
Try replacing your "break" with a "return(0)" while I try and type something more meaningful up.

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
So, my understanding of it, and I'm sure someone with better knowledge will come in and correct all the dumb things I am about to say, is this:

Once you start adding the GUI stuff on top, some of the typical PowerShell flow control bits like "break" and "exit" aren't going to work because the GUI wrapper expects a response from each action and bouncing out that way just returns NULL which it doesn't like.

So, you have to be careful with your flow control and use return, particularly with a return value, when possible.

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
I had to add some fields to the end of a section inside an old-school .INI file today and I wrote this code with equal parts shame and pride:

code:
$payload = "foo`n"

switch((Get-Content -path $current_config_file -Raw).indexOf('[', (Get-Content -path $current_config_file -Raw).indexOf('[bar]')+1)) {

               -1 { $payload | Out-File $current_config_file -Append }

               Default { (Get-Content -Path $current_config_file -Raw).Insert($_,$payload) | Set-Content $current_config_file }

}

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

FISHMANPET posted:

Man, you are getting the content of that file way too many times.

Definitely, yeah. But it's inconsequential, so I didn't worry about it.
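For the record, the read-once version would be something like this (same assumed $current_config_file and $payload as the snippet above):

```powershell
# Same logic as the switch above, but the file is read a single time.
$raw  = Get-Content -Path $current_config_file -Raw
$next = $raw.IndexOf('[', $raw.IndexOf('[bar]') + 1)   # next section header after [bar]
if ($next -eq -1) {
    $payload | Out-File -FilePath $current_config_file -Append   # [bar] is the last section
} else {
    $raw.Insert($next, $payload) | Set-Content -Path $current_config_file
}
```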

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
Why would someone do this?

code:

$some_list = "foo", "bar", "baz" -split"," | ?{$_}

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

mystes posted:

The last part will remove empty strings. You're splitting each input string on "," flattening the output, and then removing empty strings.

Edit: Or if you mean "why would someone write code like this" rather than "what does it do" then I guess the answer is "lol powershell" :shrug:?

Yeah, I mean they literally did that on a line to feed it into a foreach loop instead of:

$somelist = ("foo","bar","baz")

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

Zaepho posted:

Copy Pasta. Probably got some examples from different places to do what they thought they needed to do and just altered it to fit rather than understanding the right way to do it.

Yeah, their entire script is full of choice tidbits like this, which is why someone brought it to me to decipher. It'll probably get rubber stamped to go to prod anyway, but at least someone is asking questions.

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
ffmpeg does weird poo poo and kinda wants some sort of array option as input sometimes. Idk if that's the solution here, but try changing up your command so that you're sending:

ffmpeg -i $("01 - A Christmas Festival Medley.flac") "01 - A Christmas Festival Medley.flac.Basename.aiff"

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
Also, I'd recommend not starting ffmpeg in that fashion. I'd use:

Start-Process -FilePath c:\path\to\ffmpeg.exe -ArgumentList $ArgList -Wait -NoNewWindow;
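With the arguments as an array, that might look roughly like this (paths and filenames are placeholders; the embedded quotes are the usual workaround for spaces in names):

```powershell
# Sketch: an argument array keeps the quoting for spaced filenames manageable.
$ArgList = @(
    '-i', '"01 - A Christmas Festival Medley.flac"',
    '"01 - A Christmas Festival Medley.aiff"'
)
Start-Process -FilePath "c:\path\to\ffmpeg.exe" -ArgumentList $ArgList -Wait -NoNewWindow
```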

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
Ok, I've been bashing my face against this for hours, and I'm trying to do a registry import "the PowerShell way" using New-ItemProperty, and it's all fine and normal until I try to make a Binary value. Then the casting makes it all go to poo poo.

I've got something that's originally like
"Data"=hex:01,00,02,03

And I've tried pulling it out via PowerShell (I get an array of integers representing the hex values). Trying to just dump that back in with PropertyType Binary and it just silently throws it in as a string type instead.

I tried using [byte[]](0x01,0x00,0x02,0x03) and that either does the same (or fails the casting, I don't really recall, I've been through dozens of iterations).

Anyone have any idea how to get this translated before I give up and just call the old-school command line?

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

nielsm posted:

This works for me:
code:
$newPropertyValue = [byte[]](1,2,3,4,5)
Set-ItemProperty -Path . -Name "test" -Value $newPropertyValue -Type Binary

Ah, gently caress. I had the drat $newPropertyValue equivalent wrapped in quotes.

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
But, yes, thank you. Just sanity checking me forced me double scrutinize what the hell I was doing because I knew how to do it right and just flubbed the tiniest bit of syntax.

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
So, this may sound dumb, and that's probably because it is, but uhhh let me kinda describe what I do and what I'm thinking about doing about that, so please give me enough room to hang myself before you bring out the pitchforks.

My team's sole focus is deploying install scripts through CM for enterprise deployment and we are like 95%+ PowerShell, with a tiny fraction of MSIs (I've done like 1 or 2 MSIs in my first year).

We have 5 people on the team and 1 shared "Standard" library that we include with like... sets up a few variables and logging, and has a handful of poorly documented "common" functions. Otherwise, I'm just sort of left to code each new thing from scratch, in part because this team moved over from WISE Scripting a few years back and a lot of the code from the 1st couple of years of PowerShell has not... aged gracefully.

All our applications are stored on a big ol' network share, scripts and payloads and everything, for posterity. I'm free to paw through and use anything I like, but that's largely dependent on having the institutional knowledge to know how each app was done to find stuff for reuse, and again, a lot of the older stuff isn't great.

My working theory is to beg a bitbucket setup off the team that manages that (even though it's not really supposed to be used for this), and at least getting all our PS code up into it, so we've got something reasonably searchable so maybe there's a little more standardization and a little less reinventing the wheel.

I won't be able to put all the payload files up there, I expect, but maybe I can also export all the applications from CM and throw the XML up there as well (we've really only just started actually exporting and removing old obsolete CM applications, we had ~7 years of Everything Ever Written still clogging up CM before Microsoft put their foot down and told us the servers were keeling over and they weren't going to be able to support our infrastructure if we kept doing that).

Does any of this sound logical? I've got a lot of management and team goodwill I can burn, but effectively a $0 budget, and probably a million security restrictions, but I can't see us all just operating our 5 little independent code fiefdoms forever.

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

The Fool posted:

Get stuff organized into repos ASAP. If your org is already on bitbucket it’s probably fine but Azure devops is free for 5 users and you’ll get some lightweight project management, repos, and pipelines.

Azure devops pipelines work amazingly well as task runners.


Thanks.

I know like... 1 of those words, but I'll start looking into it. I'm here for the next 20ish years, most likely, so I've got time to grow it. I don't think we're going to get any MS stuff for free, even if only my team of 5 wanted to use it, though. But if I figure out what I'm doing, I can probably get any MS thing eventually, since we've got like a literal billion dollar MS contract.

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
Yeah, like today I wanted to stop a service in a script and I was like... I think I remember doing this before but idk which script it was in. So, I had to think it all out and do it all in steps because you can't just "stop a service".

It's:
  1. Verify the service exists.
  2. Bail if it doesn't.
  3. Verify the service is running.
  4. Log that you are stopping the service.
  5. Set the wait duration.
  6. Stop the service as a job.
  7. Cycle a while loop every second to check if it's still running.
  8. Bail and Log if the job fails.
  9. After the wait duration elapses, check if the service is running.
  10. If it is, log and bail.
  11. If it's good and stopped, log and continue.

And then do it all in reverse at the end of the script to start the service again.

I set myself a reminder to pull that code out and save it in my snippets folder at the end of the week (I've been in PowerBI training all week so, my bandwidth until Friday is limited or I'd do it while it's fresh).
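The checklist above, sketched as a function (Write-Host stands in for our logging; the comments map back to the step numbers):

```powershell
# Rough sketch of the stop-a-service checklist; logging calls are placeholders.
function Stop-ServiceSafely {
    param([string]$Name, [int]$TimeoutSeconds = 60)

    $svc = Get-Service -Name $Name -ErrorAction SilentlyContinue        # 1. verify the service exists
    if (-not $svc) { Write-Host "Service $Name not found."; return 1 }  # 2. bail if it doesn't
    if ($svc.Status -ne 'Running') { return 0 }                         # 3. nothing to stop

    Write-Host "Stopping $Name (timeout ${TimeoutSeconds}s)..."         # 4./5. log, set wait duration
    $job = Start-Job { param($n) Stop-Service -Name $n } -ArgumentList $Name   # 6. stop as a job

    $elapsed = 0
    while ($elapsed -lt $TimeoutSeconds) {                              # 7. poll once per second
        if ($job.State -eq 'Failed') { Write-Host "Stop job failed."; return 1 }   # 8. bail and log
        if ((Get-Service -Name $Name).Status -eq 'Stopped') {
            Write-Host "$Name stopped."                                 # 11. stopped cleanly, continue
            return 0
        }
        Start-Sleep -Seconds 1
        $elapsed++
    }
    Write-Host "$Name did not stop within ${TimeoutSeconds}s."          # 9./10. timed out, log and bail
    return 1
}
```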

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

Zaepho posted:

I'm going to address just this bit here. If y'all aren't using https://github.com/PSAppDeployToolkit/PSAppDeployToolkit you should really take a look at it. It may help with a ton of the common library stuff. It also helps encourage building uninstalls alongside the installs.

We absolutely are not, and I'll take a look at it, thanks.

Part of the problem with moving the group wholesale to any new platform at this point is that we're very beholden to a lot of legacy logging requirements (self-inflicted, not mandated by management), so everything has to log "just so". Unfortunately, each team member does it ~just a little differently~ and even vary it up by script, which is part of why I want to standardize. I figure it's probably a three-step process at this point (for that particular thing): Step 1 is getting all our shared code synched up; Step 2 is to start finding a library system to move to so we aren't writing it ourselves all the time; Step 3 is to migrate our logging to use something compatible with the new system.

A good example of a basic problem I'm trying to fix:

  • Our logging writes to ~4 separate log files of varying verbosity, by default, with some scripts adding more. This is in addition to any CM logs generated.
  • These 4 log files are written for different audiences and obscured away at different levels based on technical competency of the target audience.
  • One of the log files, the most public-facing, and the one we recommend end users and low-tier techs check, is written at script exit with an extension based on script function and status, replacing all previous logs of that type, regardless of extension.
  • The typical extensions are .S (successful install), .F (Any failure), .U (successful uninstall), .I (script incomplete, probably pending reboot).
  • Buuuuuttttt, there's no standard for what these logs contain. Some of them are 0-byte empty files, some include return codes, some include timestamps, some have human-readable summaries; it varies not only by who wrote it, but by how they were feeling that day.

I was asked during QC on a script by our senior team member why I had a line at the end of my script that returned a slightly verbose text description along with my return code, and I replied that I thought it was good practice to put something in the short, public-facing log that would be informative to the casual observer, and that I was a bit concerned that a lot of them were empty or just a return code. He replied with "I don't know the last time I ever looked at one of those short logs, we just go straight to the verbose ones". I told him that we had 100,000 users and that there were 5 of us, so I thought it was important for the other 99,995 people who would look at a log to have something meaningful, especially if they were reporting a problem to the help desk. He's a pretty good guy, and said he'd look into updating our standard template to have a more standardized default message in the short log, thankfully, but it's very much just one thing on the Big Pile of Cruft That Needs Addressing.

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
I mean, I count myself thankful that we do do a bunch of logging, and QC, and testing, and a number of other helpful things, even if we aren't exactly at Best Practices level, yet. And that the team and management are both pretty receptive to change (although preferably incremental).

And sometimes the stuff catches me doing stuff that I can do better, even if it's Not Wrong. Like, I had a recent script where I was removing some item properties, and I just wanted them gone, didn't really care if the item had the properties in the first place. So, I just Removed them, and let PS catch the exception. Not So Fast. Even though I was catching the exception and even though it didn't matter that I was trying to remove something that didn't exist, it started bloating up the super verbose transcript log with Informative Error Messages. So, I just added a quick check for existence, and now my logs are clean. I guess it's Technically More Correct this way, even if there's no actual practical difference, but I'd rather get nudged towards being a little more meticulous every now and then, if it also catches me when I Legit gently caress Up.
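The pattern boils down to this (key and value name are hypothetical):

```powershell
# Check-then-remove keeps the transcript free of harmless ItemNotFound noise.
$key  = "HKLM:\SOFTWARE\ExampleVendor\ExampleApp"   # hypothetical key
$name = "ObsoleteSetting"                            # hypothetical value name
if (Get-ItemProperty -Path $key -Name $name -ErrorAction SilentlyContinue) {
    Remove-ItemProperty -Path $key -Name $name
}
```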

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

FISHMANPET posted:

If by "CM" you mean MEMCM (formerly but not actually SCCM) aka ConfigMgr, then you can also log in a format that CMTrace will understand, because odds are good you'll be comfortable with it, and you know it will be there if you're troubleshooting deployments.

I will deliver to you the unfortunate news that writing code that returns meaningful data to SCCM/MEMCM/ConfigMgr has at some point been dismissed as an option because "it was just not actually reading the return codes anyway" or something. It's another thing on the pile for me to investigate.
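If we ever do go back down that road, a CMTrace-readable log line is just a specially formatted string, so something like this hypothetical helper would be enough for CMTrace to parse and color-code severities (the component name and default path here are made up):

PowerShell code:
```powershell
# Hypothetical helper: writes one line in the <![LOG[...]LOG]!> format CMTrace parses.
# Component name and default log path are placeholders for the example.
function Write-CMTraceLog {
    param(
        [Parameter(Mandatory)][string]$Message,
        [string]$Path = "$env:TEMP\MyScript.log",
        [ValidateRange(1,3)][int]$Severity = 1   # 1=Info, 2=Warning, 3=Error
    )
    $now  = Get-Date
    $line = '<![LOG[{0}]LOG]!><time="{1}+000" date="{2}" component="MyScript" context="" type="{3}" thread="{4}" file="">' -f `
        $Message, $now.ToString('HH:mm:ss.fff'), $now.ToString('MM-dd-yyyy'), $Severity, $PID
    Add-Content -Path $Path -Value $line
}
```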

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
Hell, yeah, my dudes. Powershell has been added to SA's bbcode: [code=Powershell]

PowerShell code:
$Credential = Get-Credential
$PCName = Get-Content "C:\path\to\file\remotecomputers.txt"

foreach ($name in $PCName) {
    $prog = Get-WmiObject Win32_product -ComputerName $name -Credential $Credential | Where-Object Name -eq "ProgramX"
    if ($prog) {
        Write-Host "Found ProgramX install on $name"
        Invoke-Command -ComputerName $name -Credential $Credential -ScriptBlock {
            Start-Process -Wait cmd.exe -ArgumentList '/c C:\windows\ProgramX\programx.exe /uninstall'
            Start-Process -Wait cmd.exe -ArgumentList '/c RD /S /Q "%WinDir%\System32\GroupPolicyUsers"'
            Start-Process -Wait cmd.exe -ArgumentList '/c RD /S /Q "%WinDir%\System32\GroupPolicy"'
            Start-Process -Wait cmd.exe -ArgumentList '/c gpupdate /force'
        }
        Write-Host "Finished process on $name"
    }
}

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

lol internet. posted:

I made a script to do AD account management (ie. disable, move, clear fields.)

How can I go about creating logs? Ideally I want to have a log file I just write to, commands that got sent and if it was successful or not.

Also how would I send an email report saying which users were disabled. I am using a foreach loop to disable the accounts.

I'd start by throwing in Start-Transcript (https://ss64.com/ps/start-transcript.html) at the start of your script and see if that gets you where you need to be. You can invest a lot of time manually catching errors and logging actions, but is it worth it in the long run for this purpose?
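Something minimal like this (log path is just an example), wrapped in try/finally so the transcript always gets closed:

PowerShell code:
```powershell
# Minimal pattern: the transcript captures all output and errors for the whole run.
# The log path is just an example.
Start-Transcript -Path "C:\Logs\Disable-Accounts_$(Get-Date -Format yyyyMMdd).log" -Append
try {
    # ... your foreach loop that disables/moves accounts goes here ...
    Write-Output "Disabled account: EXAMPLE\jdoe"
}
finally {
    Stop-Transcript
}
```

For the email part, the usual quick answer is to collect the disabled users into an array inside your foreach loop and send it at the end with Send-MailMessage, though be aware that cmdlet is officially deprecated.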

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
So, I'm trying to introduce the team at work to concepts like "functions" and "not writing things whole cloth every week that are impossible to QC because you aren't reusing code and nobody knows what you are trying to do". And it's an adventure because idk what I'm really doing and so I'm just looking for pointers on how to Best Practice this stuff. The team is fairly receptive to changes I've previously proposed, and they've stated interest in standardizing things going forward, but they've got even less idea what that would look like than I do, and are largely content to just winging it for another decade, unless someone sets the direction for them.

So, like, here's an example of one I wrote up this morning. Common piece of code we do is stopping services before an install, but standardizing it into a robust piece with logging so they can just yeet stuff at it and it will Just Work but also make logs, would cut down on time spent and also make it easier to QC.

Idk if anyone has any thoughts about whether I'm approaching it the right way, or if there's a better way to handle it. Really open to any suggestions.


PowerShell code:
function CSSDT-Stop-Service {
<#
	.SYNOPSIS
			A function that stops a service passed via pipeline
	.PARAMETER service
			 Pipeline-passed list of services
	.PARAMETER Timeout
			 Timeout (in seconds) to wait for job completion.  Default=30
	.OUTPUTS
			ExitCode: [Bool] True if service not present, or if service present and successfully stopped.  Otherwise false.
			ExitMessage: [String] Interpreted error message.
	.EXAMPLE
			 $ServiceResult = Get-Service "Winzip*" | CSSDT-Stop-Service
			 if(-not $ServiceResult.ExitCode) { LogLine "Error: [$($ServiceResult.ExitMessage)]" }
#>

	[CmdletBinding()]

	Param(
		[Parameter(Mandatory=$true,
		ValueFromPipeline=$true)]
			$service,
		[Parameter()]
			[int]$Timeout=30
	)

	process {
		$jobs = @()
		$ExitCode = $True
		$ExitMessage = ""

		ForEach($current_service in $service) {
			echo "Stopping Service $($current_service.Name)"
			$jobs += Start-Job -ScriptBlock { param($name) Stop-Service $name } -ArgumentList $current_service.Name
		}

		echo "Waiting up to $($Timeout) seconds for jobs to process..."
		$jobs | Wait-Job -Timeout $Timeout | Out-Null

		ForEach($current_job in $jobs) {

			switch ($current_job.State) {
				"NotStarted" { $ExitCode=$False; $ExitMessage+="Error: Job not Started ($($current_job.Name))" }
				"Running" { $ExitCode=$False; $ExitMessage+="Error: Job Timed Out ($($current_job.Name))" }
				"Failed" { $ExitCode=$False; $ExitMessage+="Error: Job Failed ($($current_job.Name))" }
				"Blocked" { $ExitCode=$False; $ExitMessage+="Error: Job Blocked ($($current_job.Name))" }
			}

			if($current_job.Error) {
				$ExitCode = $False
				$ExitMessage = "Error: Stop-Service command returned error: $($current_job.error)"
			}
		}

		ForEach($current_service in $service) {
			if((Get-Service $current_service.Name).Status -eq "Running") {
				$ExitCode = $False
				$ExitMessage = "Error: Post-check Failed.  Service $($current_service.Name) still running..."
			}
		}

		if($ExitCode) {
			echo "Service stopped..."
		} else {
			echo "Error: Service stop failure.  Exit Message [$($ExitMessage)]"
		}

		return @{ExitCode=$ExitCode; ExitMessage=$ExitMessage}
	}
}
PS: The echo calls are actually calls to an internal logging function that I've replaced for readability.

Toshimo fucked around with this message at 00:28 on Jun 9, 2022

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

nielsm posted:

First, check what's going on with your copy-paste. All the line breaks are double on the forums, and the indentation looks uneven, especially in the documentation block. (Also, 16 spaces indentation is weird too.)

Yeah, something went bad copy/pasting it from my email on mobile (also I need to remind the Awful.apk devs that Powershell is a supported language now).

nielsm posted:

As far as I can tell from reading the code, if you send in services by the pipeline, they're going to be processed sequentially with waiting for the timeout for each individual service. It would run the timeouts in parallel only if you passed the services as an array in a single parameter.
If your intention is to process the timeout for all the service stopping in parallel then you'd have to only start those jobs in the Process{} block, and then do all the remaining work in the End{} block, which runs after the pipeline input to the cmdlet has finished.

The basic behavior of CmdletBinding and functions with Process{} that take input from the pipeline is that the Process{} block is an implicit loop over each object on the pipeline, so treat it as that.
The exception to that is if the type of the pipeline parameter allows array types and you pass the input object as an array as a parameter instead of through pipeline, then the Process{} block runs once with the array as input, and in that case you'd have to do an internal foreach loop over that.
So what I want to say is, if you want to allow passing an array of services as a parameter and have it work, then keep the foreach loop inside the Process{} block too.

Yeah, I wasn't familiar with any of this, so thanks, I'll definitely invest the time into updating my stuff this way, as it'll be a lot more useful. I'm basically working from scratch and I just don't know the things I don't know.
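For my own future reference, my understanding of the shape nielsm is describing (a sketch only, not the final function; the function name here is made up):

PowerShell code:
```powershell
# Sketch of the begin/process/end shape: queue jobs as objects arrive on the
# pipeline, then wait on all of them at once in end{} so the timeouts run in parallel.
function Stop-ServiceParallel {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory, ValueFromPipeline)]
        $Service,
        [int]$Timeout = 30
    )
    begin   { $jobs = @() }
    process {
        # Runs once per pipeline object; the foreach also covers an array passed by parameter.
        foreach ($s in $Service) {
            $jobs += Start-Job -ScriptBlock { param($name) Stop-Service $name } -ArgumentList $s.Name
        }
    }
    end {
        # Runs once, after all pipeline input: a single shared timeout for everything.
        $jobs | Wait-Job -Timeout $Timeout | Out-Null
        $jobs
    }
}
```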

nielsm posted:

Your return values are odd.
The best way to return objects to the pipeline is actually to not use the return statement but just leave them to output.
Additionally, you should normally not return a hashtable, but use the [PSCustomObject] pattern to convert hashtables to objects that behave nicer and work with the standard formatting and output cmdlets.

This, too. It's very much a "I got things to work by doing this, so this is the way I know to do it".
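For anyone following along, the change is basically just this (property names are from my function above):

PowerShell code:
```powershell
# Same data as the hashtable return, but as an object that formats and pipes cleanly:
$result = [PSCustomObject]@{
    ExitCode    = $True
    ExitMessage = ""
}
$result            # renders as a table with named columns
$result.ExitCode   # property access works the same as with the hashtable
```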

nielsm posted:

Consider the standard PowerShell function naming scheme, "Verb-Noun". Stuffing extra terms in front of the verb is bad practice.

Yes, I do know this is non-standard, but because I'm working with a group who doesn't have a deep familiarity, I'm prepending everything that is internal with the name of our team so that they can readily identify the inhouse pieces from the standard stuff.


nielsm posted:

[code=powershell]

Thanks for this, I'll see about getting things updated in the morning.

nielsm posted:

The entire way of returning the status seems a little odd to me, but I'm not sure if you intend to connect it with other things on pipeline that want to consume the output.
It's often a good idea to try to structure data all the way through, including returns, rather than formatting for text display only.

I don't really have a process in place, so no, I wasn't planning on doing other stuff with this. My primary goal is to just start reducing the overhead of a lot of our common stuff to shave off errors and QC time, and then I can revisit it later for better functionality once we've got some breathing room and some more people who know what's up.

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
Yeah, my shop is weird (or maybe not) because:
  • 95% of what we write is installation wrappers.
  • Half the team is old dudes who did WiseScript for like a decade and 1 of them is now mostly MSIs and does Powershell only when necessary.
  • We have exactly 1 piece of shared code, which is a single include file that just sets up logging and populates a bunch of environment variables.
  • We write a bunch of small stuff on short notice that often gets yeeted out to prod for 5+ years and is never touched again. (We don't patch code)
  • Everything has to work the first time, every time, because if a script fails 1% of the time, that's a loss larger than what any of us make in a year.
  • We don't have any source control.
  • We QC each others' stuff, but I'm not convinced any of us really knows wtf is going on with each others' code more than 50% of the time, because we have to jam in a bunch of edge case poo poo and documentation is often non-existent.

It's weird, and not conducive to learning or improvement, but at least it's a bunch of good eggs who are willing to accept new things and improve, so if I can fix me, I can port that stuff over to everyone.
