Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin
I tried really hard to do a good gui in powershell; it's a nightmare and ends up looking like crap.


nielsm
Jun 1, 2009



Implement your GUI as a .NET class library and import that into Powershell.
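
A minimal sketch of that approach (the DLL path and class name here are placeholders, not anything real):
code:
# Load a compiled WinForms class library and drive it from PowerShell.
Add-Type -Path 'C:\Tools\MyGuiTools.dll'     # hypothetical assembly built in C#

$form = [MyGuiTools.MainForm]::new()         # hypothetical form class exported by the DLL
[void]$form.ShowDialog()                     # blocks until the user closes the form
All of the layout/designer pain then lives in Visual Studio instead of hand-written PowerShell.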

The Fool
Oct 16, 2003


I had already started the project using https://www.poshgui.com so I just finished it up with what I had.

Thankfully I was just adding a GUI wrapper to a pre-existing script and only needed to do one form with a couple of browse dialogs and a Go! button.

It was still a total nightmare, and if I find myself needing to do this in the future I am for sure either redoing the entire thing in C#, or doing a GUI launcher in C#.

The Fool
Oct 16, 2003


Double posting because I have a question about improving the performance of a different project.


We have a system that generates two separate CSV files. These CSVs are related, but contain different data. All of the data generated is tied to a vendor ID number.

In order to eliminate some workload on the finance team, I wrote a script that merges these two files.

The tricky part is that the source files can have multiple lines for the same vendor that contain different data, and that data needs to be consolidated into one line in the final file.

This all works, but it takes about 5 minutes to run on two files with 2500 records.

The biggest bottleneck is where I check to see if a vendor already has data in the array using Where-Object.

The function:
code:
function Get-VoucherLine($voucherCSV, $vendor)
{
    $line = $voucherCSV | Where-Object {$_."VENDOR" -eq $vendor};
    return $line
}
Basically, for every line in the source files, it is checking every line in the destination array to see if that vendor entry exists.

I'm wondering if there isn't a more efficient way to locate the object in the array that matches $vendor.

Any ideas?

Potato Salad
Oct 23, 2014

nobody cares


The answer, in another language, might be to use a hashtable. To my knowledge PS has nothing like this.


Use python? PS is a control and orchestration language; it's not really intended to crunch data quickly.

Pile Of Garbage
May 28, 2007



The Fool posted:

Double posting because I have a question about improving the performance of a different project.


We have a system that generates two separate CSV files. These CSVs are related, but contain different data. All of the data generated is tied to a vendor ID number.

In order to eliminate some workload on the finance team, I wrote a script that merges these two files.

The tricky part is that the source files can have multiple lines for the same vendor that contain different data, and that data needs to be consolidated into one line in the final file.

This all works, but it takes about 5 minutes to run on two files with 2500 records.

The biggest bottleneck is where I check to see if a vendor already has data in the array using Where-Object.

The function:
code:
function Get-VoucherLine($voucherCSV, $vendor)
{
    $line = $voucherCSV | Where-Object {$_."VENDOR" -eq $vendor};
    return $line
}
Basically, for every line in the source files, it is checking every line in the destination array to see if that vendor entry exists.

I'm wondering if there isn't a more efficient way to locate the object in the array that matches $vendor.

Any ideas?

Without seeing the entire script plus some sample data it's difficult to advise, but it sounds like your issue is further up the chain. You're probably iterating through too many things; use "Select-Object -Unique" to extract your vendor IDs and then, if necessary, iterate through them. Again, it's hard to advise without samples.
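
As a rough sketch of that idea (assuming the CSVs came in via Import-Csv and have the VENDOR column used above):
code:
# Build the list of distinct vendor IDs once, then work per vendor instead of per source line.
$vendorIds = $voucherCSV | Select-Object -ExpandProperty VENDOR -Unique

foreach ($vendorId in $vendorIds) {
    $rows = $voucherCSV | Where-Object { $_.VENDOR -eq $vendorId }
    # ...consolidate $rows into a single output record here...
}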

Potato Salad posted:

The answer, in another language, might be to use a hashtable. To my knowledge PS has nothing like this.


Use python? PS is a control and orchestration language; it's not really intended to crunch data quickly.

PS does have hashtables but this sounds like a data transform problem. I'll agree, there's some data crunching that PS just can't do well and should be moved to a DB but I've processed some pretty large piles of garbage using PS quite efficiently.

Pile Of Garbage fucked around with this message at 17:38 on Mar 29, 2018

Potato Salad
Oct 23, 2014

nobody cares


cheese-cube posted:

PS does have hashtables but this sounds like a data transform problem. I'll agree, there's some data crunching that PS just can't do well and should be moved to a DB but I've processed some pretty large piles of garbage using PS quite efficiently.

!!!!!

System.Collections.Hashtable

Hello, handsome :shlick:

The Fool
Oct 16, 2003


cheese-cube posted:

Without seeing the entire script plus some sample data it's difficult to advise, but it sounds like your issue is further up the chain. You're probably iterating through too many things; use "Select-Object -Unique" to extract your vendor IDs and then, if necessary, iterate through them. Again, it's hard to advise without samples.

I have two data files, they have more fields that contain data like contact names and addresses, and some other accounting codes that don't change.

The first file is a list of vendor codes and invoice amounts:
code:
vendor,invoice
453,$40
34,$39
67,$20
453,$32
The second is a list of vendor codes and deductions:
code:
vendor,deduction
453,$15
34,$10
34,$2
89,$32
I merge these together into an array that I export to a csv at the end of the script. The final csv looks like this:
code:
vendor,deduction,invoice
453,$15,$72
34,$12,$39
67,$0,$20
89,$32,$0
I have two foreach loops, one for each file.

The loops both look like this:
code:
    foreach ($line in $dividendVoucherCSV)
    {
        $temp = Get-VoucherLine $finalVoucherCSV $line."VENDOR"
        if (!$temp) {
            $temp = New-VoucherLine $line;
            $finalVoucherCSV += $temp;
            $temp = Get-VoucherLine $finalVoucherCSV $line."VENDOR"
        }
        $temp = Update-Voucher $temp $line;
    }
I already posted the Get-VoucherLine function.

New-VoucherLine creates a custom object with all of the fields, and sets the fields that don't change.
Update-VoucherLine adds the deduction and invoice values to the existing line.

e:
Right after posting that, I eliminated my second Get-VoucherLine call, which seemed to help a little bit, but not a huge amount.
code:
    foreach ($line in $dividendVoucherCSV)
    {
        $temp = Get-VoucherLine $finalVoucherCSV $line."VENDOR"
        if (!$temp) {
            $temp = New-VoucherLine $line;
            $temp = Update-Voucher $temp $line;
            $finalVoucherCSV += $temp;
            # $temp = Get-VoucherLine $finalVoucherCSV $line."VENDOR"
        } else {
            $temp = Update-Voucher $temp $line;
        }
    }

mystes
May 31, 2006

Potato Salad posted:

The answer, in another language, might be to use a hashtable. To my knowledge PS has nothing like this.


Use python? PS is a control and orchestration language; it's not really intended to crunch data quickly.
Uh, no.

@{} will create a hashtable.
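
A rough sketch of what that looks like for the voucher merge above, reusing the function names already posted (untested, just the shape of it):
code:
$finalVoucher = @{}   # key = vendor ID, value = the consolidated voucher line

foreach ($line in $dividendVoucherCSV) {
    $vendor = $line."VENDOR"
    if (-not $finalVoucher.ContainsKey($vendor)) {
        $finalVoucher[$vendor] = New-VoucherLine $line
    }
    $finalVoucher[$vendor] = Update-Voucher $finalVoucher[$vendor] $line
}
# Each lookup is now a constant-time hash lookup instead of a Where-Object scan.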

Potato Salad
Oct 23, 2014

nobody cares


Yeah, thanks to you and Cheese for pointing that out. There is some deeply convoluted poo poo I'm looking forward to fixing/simplifying Monday :)

The Fool
Oct 16, 2003


It took me more time to figure out how to export a hash table to csv than it did to convert my array to a hashtable.


I've got a 5x performance boost using a hashtable keyed on the vendor ID.

The export-csv statement looks like this now:
code:
($finalVoucherCSV.GetEnumerator() | Select-Object Value).Value | Export-CSV $destinationPath -NoTypeInformation

The Fool fucked around with this message at 19:55 on Mar 29, 2018

Pile Of Garbage
May 28, 2007



Exceptionally strange syntax but glad you got it.

Edit: Honestly it actually looks like you're just spitting out a single (1D) list. Out-File should suffice:

code:
$finalVoucherCSV.GetEnumerator() | Select-Object -ExpandProperty Value | Out-File -FilePath $destinationPath

Pile Of Garbage fucked around with this message at 20:29 on Mar 29, 2018

The Fool
Oct 16, 2003


Can you elaborate on what's strange about it?

Pile Of Garbage
May 28, 2007



So where you're trying to extract the "Value" property:

code:
($finalVoucherCSV.GetEnumerator() | Select-Object Value).Value
This can be done with Select-Object:

code:
$finalVoucherCSV.GetEnumerator() | Select-Object -ExpandProperty Value
Then from there as per my other post if you're just spitting out a 1D array of strings then it's easier to use Out-File.

The Fool
Oct 16, 2003


Not the first time I've forgotten about -ExpandProperty


cheese-cube posted:

Then from there as per my other post if you're just spitting out a 1D array of strings then it's easier to use Out-File.

It's actually a custom object with about 10 member properties.
e: each property corresponds to a column in the csv following a format defined by the financial program that is importing this file.

Wrath of the Bitch King
May 11, 2005

Research confirms that black is a color like silver is a color, and that beyond black is clarity.
It's really just a matter of whether you want headers or not.

Anyone have any success with implementing pie charts into Powershell HTML reports? I have a fairly lengthy set of scripts that I use to generate HTML dashboards for SCCM, pull data out of AD, etc. I make heavy use of the EnhancedHTML2 module (Gallery Link) since it has a lot of built-in niceties for tables, which is mainly what I use after I splat out data pulled from multiple sources to present.

Haven't had much success generating charts in Powershell, though, and I'm wondering if it makes more sense to just generate an HTML pie chart with CSS/Javascript and inject data pulled from the script to provide the specifics.
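
Something like this is the CSS route I have in mind: let PowerShell compute the slice angles and splice them into a conic-gradient. Everything here (data, colours, markup) is made up for illustration:
code:
# Build a CSS conic-gradient pie chart from label -> count data (all names illustrative).
$data   = [ordered]@{ 'Compliant' = 120; 'Non-compliant' = 30; 'Unknown' = 10 }
$colors = @('#4caf50', '#f44336', '#9e9e9e')
$total  = ($data.Values | Measure-Object -Sum).Sum

$stops = @(); $angle = 0; $i = 0
foreach ($entry in $data.GetEnumerator()) {
    $next   = $angle + (360 * $entry.Value / $total)
    $stops += '{0} {1:N1}deg {2:N1}deg' -f $colors[$i], $angle, $next
    $angle  = $next; $i++
}

$pieHtml = @"
<div style="width:200px;height:200px;border-radius:50%;
            background:conic-gradient($($stops -join ', '));"></div>
"@
The resulting $pieHtml fragment could then be dropped into whatever -PreContent/-PostContent you're already feeding to ConvertTo-Html or EnhancedHTML2.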

Pile Of Garbage
May 28, 2007



Wrath of the Bitch King posted:

It's really just a matter of whether you want headers or not.

Anyone have any success with implementing pie charts into Powershell HTML reports? I have a fairly lengthy set of scripts that I use to generate HTML dashboards for SCCM, pull data out of AD, etc. I make heavy use of the EnhancedHTML2 module (Gallery Link) since it has a lot of built-in niceties for tables, which is mainly what I use after I splat out data pulled from multiple sources to present.

Haven't had much success generating charts in Powershell, though, and I'm wondering if it makes more sense to just generate an HTML pie chart with CSS/Javascript and inject data pulled from the script to provide the specifics.

Do that IMO

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
I've got the problem defined by this snack overflow question: https://stackoverflow.com/questions/46649869/create-agent-job-from-power-shell-script-with-including-of-power-shell-command

It's trying to parse blocks like $(ESCAPE_SQUOTE(INST)) and errors out, because while INST is a valid SQL token, PowerShell doesn't understand it. I don't think I can use --% in this scenario, since all the examples I can find are for non-PowerShell programs. I got around it by using -replace to escape these specific parts, and I might have been able to use something other than Invoke-SqlCmd, but is there a way to tell cmdlets not to parse blocks of text? Would something like & powershell.exe --% work?
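
The kind of -replace escaping I mean, heavily simplified (the placeholder token name and path are made up, not my actual code):
code:
# Hide the SQLCMD token from whatever is parsing it, then restore it afterwards.
$raw      = Get-Content -Raw 'C:\jobs\create-agent-job.sql'
$escaped  = $raw -replace '\$\((ESCAPE_SQUOTE\([^)]+\))\)', '__SQLCMD_$1__'
# ...pass $escaped through the cmdlet that was choking on it...
$restored = $escaped -replace '__SQLCMD_(ESCAPE_SQUOTE\([^)]+\))__', '$$($1)'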

PBS
Sep 21, 2015

The Fool posted:

Not the first time I've forgotten about -ExpandProperty


It's actually a custom object with about 10 member properties.
e: each property corresponds to a column in the csv following a format defined by the financial program that is importing this file.

You can use | Group -Property <Property Name> to group objects into an array by a certain property (like vendor), then iterate through and combine the values of the grouped objects. It's likely you'd see a performance boost doing this vs looping through looking for the vendor ID. (Not sure how much of a boost you'd see, seems like a small data set)

As the list grows (if it does), you can also leverage powershell jobs/runspaces to "multithread" the work. It's not too hard to write something basic for text processing.
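
A sketch of the Group-Object idea against the sample columns posted earlier (assumes the rows from both files are in one collection and the amounts are already numeric; names are illustrative):
code:
# Group all source rows by vendor, then collapse each group into one output record.
$merged = $allRows | Group-Object -Property vendor | ForEach-Object {
    [pscustomobject]@{
        vendor    = $_.Name
        invoice   = ($_.Group.invoice   | Measure-Object -Sum).Sum
        deduction = ($_.Group.deduction | Measure-Object -Sum).Sum
    }
}
$merged | Export-Csv $destinationPath -NoTypeInformation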

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
I agonized over how I should be doing this, and I think it's doing what I want it to do, but I wanted to see if there's anything I could be doing differently.

I'm working on a script to read settings from one SQL server and apply them to another, and I need to copy linked server settings from one to the other. The source server looks like this:
code:
> Invoke-Sqlcmd -ServerInstance 'sqlprd1' -Database 'master' -Query "EXEC sp_helpserver" | Where-Object {$_.Name -eq "sqldev1"} | ft

name    network_name                   status                           id   collation_name connect_timeout query_timeout
----    ------------                   ------                           --   -------------- --------------- -------------
SQLDEV1 SQLDEV1                        data access,use remote collation 1                                 0             0
And the destination server looks like this:
code:
> Invoke-Sqlcmd -ServerInstance 'sqlprd2' -Database 'master' -Query "EXEC sp_helpserver" | Where-Object {$_.Name -eq "sqldev1"} | ft

name    network_name                   status                                       id   collation_name connect_timeout query_timeout
----    ------------                   ------                                       --   -------------- --------------- -------------
SQLDEV1 SQLDEV1                        rpc,rpc out,data access,use remote collation 1                                 0             0
The status field is a comma-separated list of values, but I wasn't sure how I'd pull the values from one and apply them to the other. I played around with Compare-Object and came up with this
code:
> foreach ($b in $prd1linkopts) {
>>     $prd1status = $b.status -split ","
>>     $prd2status = ($prd2linkopts | Where-Object {$_.name -eq $b.name}).status -split ","
>>     Compare-Object $prd2status $prd1status
>> }

InputObject SideIndicator
----------- -------------
rpc         <=
rpc out     <=
rpc         <=
rpc out     <=
rpc         <=
rpc out     <=
which helped me create the following
code:
$prd1linkopts = Invoke-Sqlcmd -ServerInstance 'sqlprd1' -Database 'master' -Query "EXEC sp_helpserver" | Where-Object {$_.Name -ne "SQLPRD1"}
$prd2linkopts = Invoke-Sqlcmd -ServerInstance 'sqlprd2' -Database 'master' -Query "EXEC sp_helpserver" | Where-Object {$_.Name -ne "SQLPRD2"}
# TODO: check if these are $null

foreach ($b in $prd1linkopts) {
    $prd1status = $b.status -split ","
    $prd2status = ($prd2linkopts | Where-Object {$_.name -eq $b.name}).status -split ","
    foreach ($o in Compare-Object $prd1status $prd2status) {
        switch ($o.SideIndicator) {
            # Setting exists on prd2, and will be removed from prd2
            "=>" {
                #Invoke-Sqlcmd -ServerInstance 'sqlprd2' -Database 'master' -Query`
                    "EXEC sp_serveroption`
                    @server = `[$($b.name)`],`
                    @optname = `'$($o.InputObject)`',`
                    @optvalue = `'$($false)`'"
            }
            # Setting exists on prd1, and will be added to prd2
            "<=" {
                #Invoke-Sqlcmd -ServerInstance 'sqlprd2' -Database 'master' -Query`
                    "EXEC sp_serveroption`
                    @server = `[$($b.name)`],`
                    @optname = `'$($o.InputObject)`',`
                    @optvalue = `'$($true)`'"
            }
        }
    }
}
which returns what I want:
code:
EXEC sp_serveroption
                    @server = [SQLDEV1],
                    @optname = 'rpc',
                    @optvalue = 'False'
EXEC sp_serveroption
                    @server = [SQLDEV1],
                    @optname = 'rpc out',
                    @optvalue = 'False'
EXEC sp_serveroption
                    @server = [SQLDEV2],
                    @optname = 'rpc',
                    @optvalue = 'False'
EXEC sp_serveroption
                    @server = [SQLDEV2],
                    @optname = 'rpc out',
                    @optvalue = 'False'
EXEC sp_serveroption
                    @server = [SQLDEV3],
                    @optname = 'rpc',
                    @optvalue = 'False'
EXEC sp_serveroption
                    @server = [SQLDEV3],
                    @optname = 'rpc out',
                    @optvalue = 'False'
so I think I'm good. Am I good? Could I be doing something differently, or more efficiently?

slartibartfast
Nov 13, 2002
:toot:

anthonypants posted:

I'm working on a script to read settings from one SQL server and apply them to another, and I need to copy linked server settings from one to the other.

Could I be doing something differently, or more efficiently?

The dbatools project is amazing for interacting with SQL Server via Powershell. Specifically, take a look at their Copy-DbaLinkedServer function, which makes copying linked servers possible in one line of code. It does introduce a dependency on their module, but you can run it from your workstation to avoid installing anything on your SQL boxes.
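
At its simplest it's something like this (server names are placeholders):
code:
# dbatools is in the PowerShell Gallery; installs for the current user, no server-side footprint.
Install-Module dbatools -Scope CurrentUser
Copy-DbaLinkedServer -Source sqlprd1 -Destination sqlprd2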

sloshmonger
Mar 21, 2013
This is probably something so simple, but I just found out about #region and #endregion for grouping and I am loving blown away.
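
For anyone who hasn't seen it, it's just a pair of comment markers that ISE and VS Code treat as a collapsible block:
code:
#region Logging helpers
function Write-Log {
    param([string]$Message)
    Add-Content -Path "$env:TEMP\script.log" -Value "$(Get-Date -Format s) $Message"
}
#endregion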

Dirt Road Junglist
Oct 8, 2010

We will be cruel
And through our cruelty
They will know who we are

cheese-cube posted:

Seconding this. Developing GUIs with PowerShell is dirty and IMO completely antithetical to how PowerShell is designed to be used.

Thirding.

Despite really desperately wanting to put GUIs on my scripts, it's just not what Powershell is for.

Ugato
Apr 9, 2009

We're not?

Dirt Road Junglist posted:

Thirding.

Despite really desperately wanting to put GUIs on my scripts, it's just not what Powershell is for.

I did it just because it was my only option to actually make a GUI-based tool for my team to use so we weren’t overwhelmed each night with the work volume. It’s doable but messy and definitely not pretty.

Dirt Road Junglist
Oct 8, 2010

We will be cruel
And through our cruelty
They will know who we are

Ugato posted:

I did it just because it was my only option to actually make a GUI-based tool for my team to use so we weren’t overwhelmed each night with the work volume. It’s doable but messy and definitely not pretty.

Yeah, that was my use case, too. I ended up making it something we could feed CSVs into with a few onscreen prompts, and it worked well enough.

PBS
Sep 21, 2015
Yeah, I think it's fine for internal tooling. Write something up and ship it off to the service desk. I'm not going to tell less technical/skilled users to do X via command line if I can avoid it.

If you really want to go the pain in the rear end route, you can use WPF instead of WinForms.
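
A minimal sketch of the WPF route, loading XAML from a string so there's no designer or compiled code involved:
code:
Add-Type -AssemblyName PresentationFramework

[xml]$xaml = @"
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        Title="Demo" Width="300" Height="120">
    <Button Name="GoButton" Content="Go!" Margin="20"/>
</Window>
"@

$window = [Windows.Markup.XamlReader]::Load([System.Xml.XmlNodeReader]::new($xaml))
$window.FindName('GoButton').Add_Click({ $window.Close() })
[void]$window.ShowDialog()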

thebigcow
Jan 3, 2001

Bully!

anthonypants posted:

I've got the problem defined by this snack overflow question: https://stackoverflow.com/questions/46649869/create-agent-job-from-power-shell-script-with-including-of-power-shell-command

It's trying to parse blocks like $(ESCAPE_SQUOTE(INST)) and errors out, because while INST is a valid SQL token, PowerShell doesn't understand. I don't think I can use --% in this scenario, since all the examples I can find are for non-PowerShell programs. I got around it by using -replace to escape these specific parts, and I might have been able to use something other than Invoke-SqlCmd, but is there a way to tell cmdlets not to parse blocks of text? Would something like & powershell.exe --% work?

I'm a little loopy right now so this may not be what you're looking for.

Try single quotes instead of double quotes.

Or this https://ss64.com/ps/call.html

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

thebigcow posted:

I'm a little loopy right now so this may not be what you're looking for.

Try single quotes instead of double quotes.

Or this https://ss64.com/ps/call.html
Placing a pathname inside of single quotes isn't going to do anything differently.

Collateral Damage
Jun 13, 2009

I just learned the hard way that Join-Path will fail if the drive letter doesn't exist on the executing machine, even if -Resolve isn't used. :downs:

pre:
PS C:\> get-psdrive -psprovider filesystem

Name           Used (GB)     Free (GB) Provider      Root                                                                 CurrentLocation
----           ---------     --------- --------      ----                                                                 ---------------
C                  64,03         55,21 FileSystem    C:\
F                 244,80        314,07 FileSystem    F:\
X                1537,33        462,67 FileSystem    \\server\share


PS C:\> join-path -path "c:\foo\bar" -childpath "baz\goons"
c:\foo\bar\baz\goons

PS C:\> join-path -path "d:\foo\bar" -childpath "baz\goons"
join-path : Cannot find drive. A drive with the name 'd' does not exist.
At line:1 char:1
+ join-path -path "d:\foo\bar" -childpath "baz\goons"
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (d:String) [Join-Path], DriveNotFoundException
    + FullyQualifiedErrorId : DriveNotFound,Microsoft.PowerShell.Commands.JoinPathCommand
[System.IO.Path]::Combine("d:\foo\bar","baz\goons") works though, and has the added benefit that you can join as many path elements as you want

Pile Of Garbage
May 28, 2007



Collateral Damage posted:

I just learned the hard way that Join-Path will fail if the drive letter doesn't exist on the executing machine, even if -Resolve isn't used. :downs:

[System.IO.Path]::Combine("d:\foo\bar","baz\goons") works though, and has the added benefit that you can join as many path elements as you want

That's good to know but also weird as hell from a design perspective. IMO Join-Path shouldn't be doing any path verification as that functionality is already provided by Test-Path. The Join-Path cmdlet should really just be a wrapper for that Combine() method you posted, and yet based on the exception that's thrown (DriveNotFoundException) it looks like the cmdlet is using some other method in the System.IO namespace.

I was thinking that perhaps Microsoft did this because Combine() allows characters which are not valid in a path (e.g. *); however, Join-Path also allows such invalid characters without complaint.

Collateral Damage
Jun 13, 2009

cheese-cube posted:

That's good to know but also weird as hell from a design perspective. IMO Join-Path shouldn't be doing any path verification as that functionality is already provided by Test-Path.
Some googling reveals that this is a known issue and possibly by design. I assume it relies on the PSProvider to figure out the path format, and if the drive doesn't exist it can't determine the PSProvider for it, while System.IO.Path just uses the default path format for the system.

e: [System.IO.Path]::Combine has an oddity of its own where a lone drive letter must have a trailing slash.
pre:
PS C:\> [System.IO.Path]::Combine("c:","foo","bar","baz")
c:foo\bar\baz
PS C:\> [System.IO.Path]::Combine("c:\","foo","bar","baz")
c:\foo\bar\baz
PS C:\> [System.IO.Path]::Combine("c:\blah","foo","bar","baz")
c:\blah\foo\bar\baz

Collateral Damage fucked around with this message at 13:08 on Apr 20, 2018

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
What's the difference between a path combine and concatenation?

anthonypants fucked around with this message at 00:44 on Apr 21, 2018

Pile Of Garbage
May 28, 2007



anthonypants posted:

What's the difference between a path combine and concatenation?

Path combine can deal with ambiguous path fragments and will return a well-formed path. For example, these two commands return the same result:

code:
PS C:\> Join-Path -Path 'C:\Temp' -ChildPath 'temp.txt'
C:\Temp\temp.txt

PS C:\> Join-Path -Path 'C:\Temp\' -ChildPath 'temp.txt'
C:\Temp\temp.txt
It's really just a more robust method of constructing paths.

mystes
May 31, 2006

anthonypants posted:

What's the difference between a path combine and concatenation?
Slashes, mostly.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
From what I've seen, PowerShell doesn't seem to care very much about well-formed paths, which is good, I guess, but maybe it's cmdlet-specific?
code:
PS U:\> cd c:users\default
PS C:\users\default> cd C:/windows/system32
PS C:\windows\system32> Get-ChildItem C:/windows/system32\drivers/etc////


    Directory: C:\windows\system32\drivers\etc

devmd01
Mar 7, 2006

Elektronik
Supersonik
Needed a way to track Windows Management Framework versions across all my 2012R2 servers as I roll out WMF 5.1...I love -asjob.


code:
$servers=(Get-ADComputer -Filter 'OperatingSystem -like "*Windows Server 2012 R2*"' -Properties * -SearchBase "OU=Servers,DC=corporate,DC=domain,DC=com" | Select-Object -ExpandProperty Name | Sort-Object) | foreach-Object{invoke-command -computername $_ -scriptblock {$PSVersionTable.psversion} -asjob} | get-job | wait-job | receive-job | export-csv .\2012R2PowershellVersion.csv -notype
Any pssession errors are reflected to the console and I just skim them to verify that they're decommissioned machines that haven't been cleaned up yet.

devmd01 fucked around with this message at 19:43 on Apr 27, 2018

sloshmonger
Mar 21, 2013

devmd01 posted:

Needed a way to track Windows Management Framework versions across all my 2012R2 servers as I roll out WMF 5.1...I love -asjob.


code:
$servers=(Get-ADComputer -Filter 'OperatingSystem -like "*Windows Server 2012 R2*"' -Properties * -SearchBase "OU=Servers,DC=corporate,DC=domain,DC=com" | Select-Object -ExpandProperty Name | Sort-Object) | foreach-Object{invoke-command -computername $_ -scriptblock {$PSVersionTable.psversion} -asjob} | get-job | wait-job | receive-job | export-csv .\2012R2PowershellVersion.csv -notype
Any pssession errors are reflected to the console and I just skim them to verify that they're decommissioned machines that haven't been cleaned up yet.

I'm in the middle of this myself. Have you found a way to automate the installation of WMF 5.1 through powershell?

Pile Of Garbage
May 28, 2007



devmd01 posted:

Needed a way to track Windows Management Framework versions across all my 2012R2 servers as I roll out WMF 5.1...I love -asjob.


code:
$servers=(Get-ADComputer -Filter 'OperatingSystem -like "*Windows Server 2012 R2*"' -Properties * -SearchBase "OU=Servers,DC=corporate,DC=domain,DC=com" | Select-Object -ExpandProperty Name | Sort-Object) | foreach-Object{invoke-command -computername $_ -scriptblock {$PSVersionTable.psversion} -asjob} | get-job | wait-job | receive-job | export-csv .\2012R2PowershellVersion.csv -notype
Any pssession errors are reflected to the console and I just skim them to verify that they're decommissioned machines that haven't been cleaned up yet.

Why are you assigning the output of that expression to a variable?

sloshmonger posted:

I'm in the middle of this myself. Have you found a way to automate the installation of WMF 5.1 through powershell?

Really the best and most appropriate way is to package it and deploy via SCCM.

Also you should be aware that Microsoft have declared that WMF 5.1 is not compatible with some of their older products: https://docs.microsoft.com/en-us/powershell/wmf/5.1/productincompat. I suspect that it probably won't cause issues; however, if you do run into problems and raise a support case, that might be the first thing Microsoft call out.

Edit: yeah it's pretty rubbish vvv

Pile Of Garbage fucked around with this message at 10:44 on Apr 28, 2018

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

cheese-cube posted:

Also you should be aware that Microsoft have declared that WMF 5.1 is not compatible with some of their older products: https://docs.microsoft.com/en-us/powershell/wmf/5.1/productincompat. I suspect that it probably won't cause issues; however, if you do run into problems and raise a support case, that might be the first thing Microsoft call out.
This is a hilariously incomplete document.


devmd01
Mar 7, 2006

Elektronik
Supersonik

cheese-cube posted:

Why are you assigning the output of that expression to a variable?


Really the best and most appropriate way is to package it and deploy via SCCM.

Also you should be aware that Microsoft have declared that WMF 5.1 is not compatible with some of their older products: https://docs.microsoft.com/en-us/powershell/wmf/5.1/productincompat. I suspect that it probably won't cause issues however if you do run into problems and raise a support case that might be the first thing Microsoft call out.

Edit: yeah it's pretty rubbish vvv

Because I'm self-taught and slap together what I need from previous code bits as my understanding grows. But yeah, I see that the variable isn't needed now that you point it out.

I just purged the last of our Exchange 2010 servers last month, so we are in good shape to upgrade the rest of our 2012R2 systems to WMF 5.1. Sadly no SCCM here, so it's either one by one or a shutdown script GPO.

  • Reply