Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

Zaepho posted:

code:
foreach ($i in 1..71) {
    $NewFilename = "{0:D4}.png" -f $i
    Copy-Item 0072.png $NewFilename
}
Test before use. I slapped it together without any testing/trial/etc. YMMV

Edit: OK I kinda tested and fixed an issue. Should pretty much work. Try a whatif first to make sure.

Just for fun, let's golf this:

code:
1..71|%{cp 0072.png ("{0:D4}.png"-f$_)} # 39 chars

# WhatIf version
1..71|%{cp 0072.png ("{0:D4}.png"-f$_)-wi} # 42 chars
This could be made better with certain assumptions, like replacing 0072.png with *.png or *g or some other short pattern that would still only match the intended file.

Briantist fucked around with this message at 03:52 on Aug 10, 2015


Newf
Feb 14, 2006
I appreciate hacky sack on a much deeper level than you.

Briantist posted:

Just for fun, let's golf this:

code:
1..71|%{cp 0072.png ("{0:D4}.png"-f$_)} # 39 chars

# WhatIf version
1..71|%{cp 0072.png ("{0:D4}.png"-f$_)-wi} # 42 chars
This could be made better with certain assumptions, like replacing 0072.png with *.png or *g or some other short pattern that would still only match the intended file.

No dice - 0073.png through ~0200.png were all there.

:)

Wicaeed
Feb 8, 2005
PowerCLI Time:

Trying to count the number of vCPUs that have been configured in a VM resource pool in our vCenter server. Problem is that we have similarly named resource pools across multiple clusters. I wrote something that can get me 90% of what I'm looking for; what I really need is the total count across all resource pool instances:

code:
Foreach ($rp in Get-ResourcePool -Name "ResourcePool") {
    $vCPU = Get-VM -Location $rp | Measure-Object -Property NumCPU -Sum | Select -ExpandProperty Sum
    $rp | Select Name,
        @{N='vCPU assigned to VMs';E={$vCPU}}
}
My results look something like this:

code:
Name        vCPU assigned to VMs
----        --------------------
prod-ubu14                   196
prod-ubu14                   108
prod-ubu14                   168
I'm racking my brain trying to come up with a way to count all those instances of a result returned in a foreach loop and tally them up at the end. The problem is compounded by the fact that the results have the same name.

edit: Derp, apparently you can do this in one line on PowerCLI and not even have to worry about duplicate named resource pools.

code:
Get-VM -Location "ResourcePoolName" | Measure-Object -Property NumCPU -Sum

Wicaeed fucked around with this message at 01:49 on Aug 11, 2015

Toshimo
Aug 23, 2012

He's outta line...

But he's right!

Wicaeed posted:

PowerCLI Time:

Trying to count the number of vCPUs that have been configured in a VM resource pool in our vCenter server. Problem is that we have similarly named resource pools across multiple clusters. I wrote something that can get me 90% of what I'm looking for, what I really need is the total count across all resource pool instances:

code:
Foreach ($rp in Get-ResourcePool -Name "ResourcePool") {
    $vCPU = Get-VM -Location $rp | Measure-Object -Property NumCPU -Sum | Select -ExpandProperty Sum
    $rp | Select Name,
        @{N='vCPU assigned to VMs';E={$vCPU}}
}
My results look something like this:

code:
Name        vCPU assigned to VMs
----        --------------------
prod-ubu14                   196
prod-ubu14                   108
prod-ubu14                   168
I'm wracking my brain trying to come up with a way to count all those instances of a result returned in a foreach loop and tally it up at the end. The problem is expanded by the fact that the results have the same name.

Group-Object?
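
Something like this, maybe (untested sketch, reusing the names from the quoted snippet): collect one object per pool instance, then group on the name and sum.

code:
# Gather one result object per resource pool instance
$results = foreach ($rp in Get-ResourcePool -Name "ResourcePool") {
    $vCPU = Get-VM -Location $rp | Measure-Object -Property NumCPU -Sum | Select-Object -ExpandProperty Sum
    [pscustomobject]@{ Name = $rp.Name; vCPU = $vCPU }
}

# Collapse duplicate names into a single total per name
$results | Group-Object -Property Name | Select-Object Name,
    @{N='Total vCPU';E={($_.Group | Measure-Object -Property vCPU -Sum).Sum}}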

bgreman
Oct 8, 2005

ASK ME ABOUT STICKING WITH A YEARS-LONG LETS PLAY OF THE MOST COMPLICATED SPACE SIMULATION GAME INVENTED, PLAYING BOTH SIDES, AND SPENDING HOURS GOING ABOVE AND BEYOND TO ENSURE INTERNET STRANGERS ENJOY THEMSELVES
So I'm running into a weird issue that maybe someone in here can shed some light on. I inherited a Powershell script used to deploy our application to a formal testing environment. Most of the script (after some fixing) is working right, but the part that actually deploys the application components is causing me nightmares.

The application consists of several functional areas, WEB, XIN, IIN, SVC, etc. Each of those areas has multiple servers for load balancing. The servers are located in a remote data center. The script copies the code to a staging area on the "primary" box for each area (WEB01, XIN01, etc), then the deploy functionality is supposed to copy it from there to the other boxes for that area (WEB02, WEB03, XIN02, etc).

We have a list of servers for each area, and elsewhere in the code we iterate through and invoke the deployment for each server:

code:
If ($DeployWEB -eq $True)
{
    ForEach ($server in $global:serverListWEB)
    {
        Deploy-Remote -remoteServer $server -remoteSource "\\$global:backupWEB\d$\_Staging\*" -remoteDestination "d:\app\dmz\web"
    }
}
If ($DeployXIN -eq $True)
{
    ForEach ($server in $global:serverListXIN)
    {
        Deploy-Remote -remoteServer $server -remoteSource "\\$global:backupXIN\d$\_Staging\*" -remoteDestination "d:\app\dmz\xin"
    }
}
So we're copying FROM the staging area on the primary box (which also hosts the backup, hence that variable name) TO the deployment area.

Here's the code that's supposed to do that (the implementation of Deploy-Remote):

code:
Function Deploy-Remote
{
Param( #Function Paramaters
    [Parameter(Mandatory=$True)][string]$remoteServer,
    [Parameter(Mandatory=$True)][string]$remoteSource,
    [Parameter(Mandatory=$False)][string]$remoteDestination
) #End Function Paramaters
    $destinationUNC = ("\\$remoteServer\" + $remoteDestination -replace ":", "$")

    If(-Not(Test-Path -Path $destinationUNC))
    {
        New-Item $destinationUNC -ItemType Directory
    }
    Write-Host "Deploy initiated: [$remoteServer] $remoteSource to $remoteDestination, deleting existing files..."
    Log-Entry -LogFile $LogFile -LogLine "Deploy initiated: [$remoteServer] $remoteSource to <a href=`"$destinationUNC`">$remoteDestination</a>."
    
    Try
    {
        Invoke-Command -ComputerName $remoteServer {Remove-Item -Path $Using:remoteDestination -Recurse -ErrorAction SilentlyContinue}
    }
    Catch
    {
        Log-Error -LogFile $LogFile -LogLine $_ -logtoscreen $LogToScreen #-ExitGracefully $True
        Return
    }

    Try
    {
        Invoke-Command -ComputerName $remoteServer {Copy-Item -Path $Using:remoteSource -Destination $Using:remoteDestination -Recurse -ErrorAction SilentlyContinue}
    }
    Catch
    {
        Log-Error -LogFile $LogFile -LogLine $_ -logtoscreen $LogToScreen #-ExitGracefully $True
        Return
    }       
    
    Log-Entry -LogFile $LogFile -LogLine "Deploying $destinationUNC completed." -LogToScreen $LogToScreen
}
The Remove-Item part is working, because we're invoking into the same machine we're performing the remove from. No problem there.

The Copy-Item part fails every time $remoteServer isn't the same as the $remoteSource server. I.e., deploying from WEB01 staging to WEB01 deployment works, because WEB01 is where the staging folder is. Deploying from WEB01 staging to WEB02 deployment (i.e., $remoteServer = WEB02) fails with a path-not-found exception. This is strange to me, because if I manually remote into WEB02 and navigate to the staging path on WEB01, I can access it just fine, using the same credentials that invoke the script. The same is true for remoting into WEB01 and viewing the deployment path on WEB02.

I'm kind of at my wit's end over this, and the guy who wrote the script in the first place can't figure out what's going on now that I've alerted him to the fact that it's broken. I've run the commands manually a number of times, removing the ErrorAction and adding -Verbose, but that just revealed the error that was previously suppressed (the aforementioned path-not-found error). Any thoughts?

bgreman fucked around with this message at 21:47 on Aug 14, 2015

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

bgreman posted:

So I'm running into a weird issue that maybe someone in here can shed some light on. I inherited a Powershell script used to deploy our application to a formal testing environment. Most of the script (after some fixing) is working right, but the part that actually deploys the application components is causing me nightmares.

The application consists of several functional areas, WEB, XIN, IIN, SVC, etc. Each of those areas has multiple servers for load balancing. The servers are located in a remote data center. The script copies the code to a staging area on the "primary" box for each area (WEB01, XIN01, etc), then the deploy functionality is supposed to copy it from there to the other boxes for that area (WEB02, WEB03, XIN02, etc).

We have a list of servers for each area, and elsewhere in the code we iterate through and invoke the deployment for each server:

code:
If ($DeployWEB -eq $True)
{
    ForEach ($server in $global:serverListWEB)
    {
        Deploy-Remote -remoteServer $server -remoteSource "\\$global:backupWEB\d$\_Staging\*" -remoteDestination "d:\app\dmz\web"
    }
}
If ($DeployXIN -eq $True)
{
    ForEach ($server in $global:serverListXIN)
    {
        Deploy-Remote -remoteServer $server -remoteSource "\\$global:backupXIN\d$\_Staging\*" -remoteDestination "d:\app\dmz\xin"
    }
}
So we're copying FROM the staging area on the primary box (which also hosts the backup, hence that variable name) TO the deployment area.

Here's the code that's supposed to do that (the implementation of Deploy-Remote):

code:
Function Deploy-Remote
{
Param( #Function Paramaters
    [Parameter(Mandatory=$True)][string]$remoteServer,
    [Parameter(Mandatory=$True)][string]$remoteSource,
    [Parameter(Mandatory=$False)][string]$remoteDestination
) #End Function Paramaters
    $destinationUNC = ("\\$remoteServer\" + $remoteDestination -replace ":", "$")

    If(-Not(Test-Path -Path $destinationUNC))
    {
        New-Item $destinationUNC -ItemType Directory
    }
    Write-Host "Deploy initiated: [$remoteServer] $remoteSource to $remoteDestination, deleting existing files..."
    Log-Entry -LogFile $LogFile -LogLine "Deploy initiated: [$remoteServer] $remoteSource to <a href=`"$destinationUNC`">$remoteDestination</a>."
    
    Try
    {
        Invoke-Command -ComputerName $remoteServer {Remove-Item -Path $Using:remoteDestination -Recurse -ErrorAction SilentlyContinue}
    }
    Catch
    {
        Log-Error -LogFile $LogFile -LogLine $_ -logtoscreen $LogToScreen #-ExitGracefully $True
        Return
    }

    Try
    {
        Invoke-Command -ComputerName $remoteServer {Copy-Item -Path $Using:remoteSource -Destination $Using:remoteDestination -Recurse -ErrorAction SilentlyContinue}
    }
    Catch
    {
        Log-Error -LogFile $LogFile -LogLine $_ -logtoscreen $LogToScreen #-ExitGracefully $True
        Return
    }       
    
    Log-Entry -LogFile $LogFile -LogLine "Deploying $destinationUNC completed." -LogToScreen $LogToScreen
}
The Remove-Item part is working, because we're invoking into the same machine we're performing the remove from. No problem there.

The Copy-Item part fails every time that the $remoteServer isn't the same as the $remoteSource server. I.e., deploying from WEB01 Staging to WEB01 deployment works, because WEB01 is where the staging folder is. Deploying from WEB01 Staging to WEB02 deployment (i.e., $remoteServer = WEB02) fails with a path not found exception. This is strange to me, because if I manually remote into WEB02, and navigate to the Staging path on WEB01, I can access it just fine, using the same credentials that are invoking the script. Same is true for remoting into WEB01 and viewing the deployment path on WEB02.

I'm kind of at my wits end over this, and the guy who wrote the script in the first place can't figure out what's going on now that I've alerted him to the fact that it's broken. I've run the commands manually a number of times, removing the ErrorAction and adding Verbose, but it just revealed the error that was previously suppressed (the aforementioned path not found error). Any thoughts?

Sounds like a classic Kerberos double-hop issue to me. You remote into machine B from machine A (using Invoke-Command); once there, you try to access a UNC path on machine C. That fails because the authentication token can't be delegated to another hop.

You say it's working when you remote into a machine and run the commands, but I suspect that means you're using Remote Desktop. That won't exhibit this behavior.

To test whether this is the issue, use powershell to interactively remote into the machine:

code:
Enter-PSSession -ComputerName remoteMachine
Now you will be in an interactive prompt. Try a Copy-Item from or to a UNC path; it should fail.

The function is already calculating a UNC path for the destination, so why not just Copy-Item from the machine running the script, from one UNC path to the other?
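
Something like this (untested; server names and paths assumed from your snippets):

code:
# Run from the machine executing the script; both ends are UNC paths,
# so there's no second hop and no delegation problem
$source      = "\\WEB01\d$\_Staging\*"
$destination = "\\WEB02\d$\app\dmz\web"
Copy-Item -Path $source -Destination $destination -Recurse -Verbose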

Also, it would help if your Copy-Item calls didn't have -ErrorAction SilentlyContinue on them; you might have actually been able to see an error!

bgreman
Oct 8, 2005

ASK ME ABOUT STICKING WITH A YEARS-LONG LETS PLAY OF THE MOST COMPLICATED SPACE SIMULATION GAME INVENTED, PLAYING BOTH SIDES, AND SPENDING HOURS GOING ABOVE AND BEYOND TO ENSURE INTERNET STRANGERS ENJOY THEMSELVES

Briantist posted:

The function is already calculating a UNC path for the destination, so why not just Copy-Item from the machine running the script, from one UNC path to the other?

While this works, what I worry about here is if the machine running the script is across a slow internet connection. Is Copy-Item smarter than normal windows copying, which I think copies the files locally when you try to copy between two UNC paths?

Right now all the machines actually hosting the application are VMs at the same data center. So WEB01->WEB03 is very fast. The machine that runs the script is (currently) a jump box VM at the same data center, but that state was only achieved last week. Previously, the jump box was a VM at a data center across the country. If that were to happen again, I can imagine the script execution time going way up if the files have to be copied locally before being copied out. I.e., WEB01->Jump Box->WEB03 is probably a lot slower.

hihifellow
Jun 17, 2005

seriously where the fuck did this genre come from

bgreman posted:

While this works, what I worry about here is if the machine running the script is across a slow internet connection. Is Copy-Item smarter than normal windows copying, which I think copies the files locally when you try to copy between two UNC paths?

Right now all the machines actually hosting the application are VMs at the same data center. So WEB01->WEB03 is very fast. The machine that runs the script is (currently) a jump box VM at the same data center, but that state was only achieved last week. Previously, the jump box was a VM at a data center across the country. If that were to happen again, I can imagine the script execution time going way up if the files have to be copied locally before being copied out. I.e., WEB01->Jump Box->WEB03 is probably a lot slower.

Anecdotal, but I found copy-item to be strangely flaky when copying over a slow link. I mean, this was a 50MB circuit source going to a 5MB saturated circuit destination, so very slow link. Directories would come across in the wrong location, files were missing, just general weirdness. Ended up replacing all the instances of copy-item with robocopy and the files started copying over fine. But this was also on computers running PS2.0 so it might have been fixed in later versions.
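
For reference, the robocopy calls were along these lines (paths are placeholders; /R and /W tame the absurd default retry behavior on a flaky link):

code:
# /E copies subdirectories, including empty ones
robocopy \\SOURCE\share\Staging \\DEST\share\app /E /R:2 /W:5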

bgreman
Oct 8, 2005

ASK ME ABOUT STICKING WITH A YEARS-LONG LETS PLAY OF THE MOST COMPLICATED SPACE SIMULATION GAME INVENTED, PLAYING BOTH SIDES, AND SPENDING HOURS GOING ABOVE AND BEYOND TO ENSURE INTERNET STRANGERS ENJOY THEMSELVES
Copy-Item seems to be working for me now, with one bit of weirdness that I hope someone can explain.

If I run "Copy-Item -Path c:\mypath\* -Destination c:\dir\targetpath -Recurse" I get inconsistent behavior depending on whether the targetpath dir already exists.

In all cases, C:\mypath contains one or more directories, let's say just one "app1," containing files assembly1.dll, assembly2.dll

If targetpath already exists, things work like I expect. I wind up with c:\dir\targetpath\app1\assembly1.dll, etc. If targetpath does not already exist, I instead get c:\dir\targetpath\assembly1.dll (the app1 folder is missing). Anyone know why this might be? This is a problem for me because immediately before running this Copy-Item, I'm doing a Remove-Item on targetpath: Remove-Item c:\dir\targetpath -Recurse.

Ninja edit: Looks like it might think I'm trying to rename my app1 folder to targetpath during the copy in the case where targetpath doesn't exist? Does that sound right? Is there any way to suppress that behavior in the event targetpath doesn't already exist?

Edit to edit: Looks like just inserting a New-Item with the targetpath and -ItemType directory will ensure the targetpath dir always exists before the copy.
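
In other words, something like this (paths from the example above):

code:
# Make sure the destination exists first, so Copy-Item treats it as a
# container to copy INTO rather than a new name to copy AS
if (-not (Test-Path 'c:\dir\targetpath')) {
    New-Item -Path 'c:\dir\targetpath' -ItemType Directory | Out-Null
}
Copy-Item -Path 'c:\mypath\*' -Destination 'c:\dir\targetpath' -Recurse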

bgreman fucked around with this message at 22:34 on Aug 17, 2015

Hadlock
Nov 9, 2004

Most of my issues with recursion c:\path\* have been solved by wrapping the path in quotes. Even if I'm passing in a variable path. I have no idea why this works but it does.

bgreman
Oct 8, 2005

ASK ME ABOUT STICKING WITH A YEARS-LONG LETS PLAY OF THE MOST COMPLICATED SPACE SIMULATION GAME INVENTED, PLAYING BOTH SIDES, AND SPENDING HOURS GOING ABOVE AND BEYOND TO ENSURE INTERNET STRANGERS ENJOY THEMSELVES

Hadlock posted:

Most of my issues with recursion c:\path\* have been solved by wrapping the path in quotes. Even if I'm passing in a variable path. I have no idea why this works but it does.

The paths were already being quote-wrapped when assigned to variables. Reading the documentation, it looks like the Destination flag will also serve as a folder rename if the destination path doesn't already exist.

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

bgreman posted:

While this works, what I worry about here is if the machine running the script is across a slow internet connection. Is Copy-Item smarter than normal windows copying, which I think copies the files locally when you try to copy between two UNC paths?

Right now all the machines actually hosting the application are VMs at the same data center. So WEB01->WEB03 is very fast. The machine that runs the script is (currently) a jump box VM at the same data center, but that state was only achieved last week. Previously, the jump box was a VM at a data center across the country. If that were to happen again, I can imagine the script execution time going way up if the files have to be copied locally before being copied out. I.e., WEB01->Jump Box->WEB03 is probably a lot slower.
Ah, I see this could be a problem then.

I will say that Copy-Item is not at all smarter than anything really. For the other weirdness, it doesn't surprise me all that much, but it looks like you got it solved with New-Item.

To be honest, robocopy is still vastly superior, in my opinion, when it comes to copying lots of files or whole directory trees, so I tend to still use that even though I generally hate shelling out to executables when there's a programmatic way to accomplish something.

As for the issue of a slow link, there's a few things you could do.

When you remote into the machine, instead of doing the copy, you can create a scheduled task that does the copy. You could generate PowerShell code, Base64-encode it, and then pass it to powershell.exe with -EncodedCommand, so there's no file to deal with in the scheduled task definition. The problem with this is that you'll need to set it up with credentials unless your computer accounts have access to UNC paths, so this method may not be that great for this purpose.

Another thing you can do is create an endpoint with a runas user. When you make a powershell remoting connection you are already connecting to an existing configuration called Microsoft.PowerShell. You can make your own configuration on the target(s), and configure it with a RunAs user. You can also control who is allowed to connect to it. The commands will run as the runas user, and in that instance because there is no passing of kerberos tickets, there is no double hop problem.
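
Roughly like this (untested; the endpoint name is made up):

code:
# On the target machine: register a custom endpoint that runs as a
# specific user, so no Kerberos ticket needs to be delegated
Register-PSSessionConfiguration -Name 'DeployEndpoint' `
    -RunAsCredential (Get-Credential) -Force

# From the client: connect to that endpoint by name
Invoke-Command -ComputerName WEB02 -ConfigurationName 'DeployEndpoint' `
    -ScriptBlock { Copy-Item -Path '\\WEB01\d$\_Staging\*' -Destination 'd:\app\dmz\web' -Recurse }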

Have a look at this answer I wrote on SO; it goes into some detail about that.

Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin
I'm working on a script at work and I'd like to implement a user-friendly UI for a task.

The script is taking a number of uninitialized raw drives and is going to set them up in accordance with our standards.

Let's say that the four drives are 1, 5, 3, and 4GB in size, and should get names like "Volume A", "Volume B", and so on: A <-> 1GB, B <-> 5GB, etc.

I'm going to assume that the drives were built in the correct order and provide a confirmation prompt showing the suggested volumes and sizes.

If it's wrong, I plan to say something like
Which Disk Number should be Volume A ?:
Which Disk Number should be Volume B ?:

I'm wondering if there are any cooler ways to do this, like an interactive menu or something.

If not, any general ideas on how to make an idiot-resistant interface?

Video Nasty
Jun 17, 2003

The best advice I found for building a UI is here: http://blog.danskingdom.com/powershell-multi-line-input-box-dialog-open-file-dialog-folder-browser-dialog-input-box-and-message-box/

It's practically what you're looking for, but this gets actual drive information. Not sure whether you're interacting with PSDrives, virtual drives, or what. I believe there is a flag you can set to allow renaming folders/files/objects, but I couldn't tell you what it is.

Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin
I'm actually running into a problem.

I've got these drives and I want to mount them to NTFS folders.
https://technet.microsoft.com/en-us/library/Cc753321.aspx

I can do it with the GUI, and I think I can work it out with diskpart.

Is there a way to do it through Powershell?

Factor Mystic
Mar 20, 2006

Baby's First Post-Apocalyptic Fiction

Dr. Arbitrary posted:

Anyone have recommendations for intermediate level Powershell books. I've already read month of lunches (although I might order toolmaking in a month of lunches).

Also curious about this. I have basic PowerShell awareness, but I'd really like to find a resource that will learn me some v5-era knowledge so I'm all up to date. I'm particularly interested in remoting concepts and DSC, which I know almost nothing about. Looks like PowerShell in a Month of Lunches is v3... how much is still valid? How much has been superseded by newer techniques where learning the old stuff is now The Bad Way?

Venusy
Feb 21, 2007
Remoting is a v2 concept, so I would hope that it would be covered in Month of Lunches, but I can remember the author covering it quite well in some of their longer recorded training sessions (unfortunately I can't remember which ones :shrug:).

I still need to get started using DSC (work is primarily a v2 environment), but there's the PowerShell.org videos, or the Microsoft Virtual Academy course.

Hadlock
Nov 9, 2004

There's a big jump from v2 to v3, but the jump from v3 to v4 is much smaller; except for formal class support and some nice features like DSC and, of course, the package managers, the base language is largely complete at v4 for the average admin managing fewer than 100 servers.

V5 from what I can tell is mostly feature additions for Microsoft's cloud business (hyper-v virtual machines, virtual switch management, virtualized storage management, etc) with improvements on DSC introduced in v4.

We've standardized on v3 in my environment and while some machines have v4 I haven't run in to anything I can't do in 3 that I need 4 for. V5 Classes are neat but you can build effective classes in v2 with not much extra effort so we probably won't standardize on 5 this decade.

Hadlock fucked around with this message at 18:27 on Aug 22, 2015

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

Factor Mystic posted:

Also curious about this. I have basic powershell awareness, but I'd really like to find a resource that will learn me some v5-era knowledge so I'm all up to date. I'm particularly interested in remoting concepts and DSC, which I know almost nothing about. Looks like the Powershell in a Month Of Lunches is v3.... how much is still valid? How much has been superceded by newer techniques where learning the old stuff is now The Bad Way?
DSC as a concept is really its own animal, and has little to do with PowerShell for the most part. I'm a technical reviewer for a DSC book coming out through Packt publishing, and it's a good overview of DSC. I could let you know when it's out?

Remoting doesn't require that much depth to start using it effectively. To get started, go to a target machine and run Enable-PSRemoting, then go to a client machine and run Enter-PSSession -ComputerName target

Now you're in an interactive powershell prompt on that machine. But that's for interactive use, not really for scripting.

To do it within scripts use Invoke-Command -ComputerName target -ScriptBlock { commands to run } .

You can start doing more advanced things, like creating a session object first and passing that instead of using -ComputerName. That means you can make multiple Invoke-Command calls against the same session, and the session keeps its state between calls.
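
For example (target is a placeholder machine name):

code:
# Create the session once; state persists across Invoke-Command calls
$session = New-PSSession -ComputerName target
Invoke-Command -Session $session -ScriptBlock { $stamp = Get-Date }
Invoke-Command -Session $session -ScriptBlock { $stamp }   # still set
Remove-PSSession $session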

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Briantist posted:

DSC as a concept is really its own animal, and has little to do with PowerShell for the most part. I'm a technical reviewer for a DSC book coming out through Packt publishing, and it's a good overview of DSC. I could let you know when it's out?

Remoting doesn't require that much depth to start using it effectively. To get started, go to a target machine, run Enable-PSRemoting, go to a client machine then run Enter-PSSession -ComputerName target

Now you're in an interactive powershell prompt on that machine. But that's for interactive use, not really for scripting.

To do it within scripts use Invoke-Command -ComputerName target -ScriptBlock { commands to run } .

You can start doing advanced stuff like creating a session object first and passing that instead of using -ComputerName. That effectively means that you can do multiple Invoke-Command calls against the same session and it has the same state between each call.

You probably need to run enable-psremoting -force before PS remoting will work, at least in my experience. Getting remoting working is a giant PITA, especially when crossing domain boundaries.

DSC is awesome in concept but in practice I don't think it's "there" yet. The expectation is that the community will generate DSC resources, which just isn't true at the moment. The ALM rangers did a bunch for managing common things like web servers and windows services, but they don't loving work half the time or have bugs that ruin their idempotency. And to top it off, the resources aren't easily discoverable. Chef, on the other hand, has a huge marketplace of cookbooks for every scenario imaginable. I dislike Chef (it's way too fiddly to install and configure because of the nightmarish web of dependencies it has), but it's superior to DSC in every way right now and fills the same niche. There needs to be a DSC equivalent to the Chef Supermarket and a way to search it while writing your DSC scripts for DSC to gain any traction. And that's for Windows. For Linux, DSC is brand new and has basically nothing. If I can't manage a LAMP stack with DSC, easily, why would I use it over a competing tool?

Venusy
Feb 21, 2007
A way to search for them like Find-DSCResource on PowerShell v5?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Venusy posted:

A way to search for them like Find-DSCResource on PowerShell v5?

Thank god. I haven't been keeping too close of an eye on PowerShell 5, so I must have missed that. I knew they were doing OneGet/package management, but I didn't realize it extended to DSC resources. Hopefully it operates similar to NuGet, in that if you use a DSC resource that's not installed on the system, it's automatically downloaded and installed... Dealing with DSC modules kind of sucks right now because you have to explicitly install them everywhere prior to using them. I just keep them in source control and run a "custom resources" DSC script prior to applying the actual DSC script. In Chef world you have your Chef repo that just gets cloned to every machine that runs your cookbooks.

New Yorp New Yorp fucked around with this message at 20:14 on Aug 23, 2015

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

Ithaqua posted:

You probably need to run enable-psremoting -force before PS remoting will work, at least in my experience. Getting remoting working is a giant PITA, especially when crossing domain boundaries.

DSC is awesome in concept but in practice I don't think it's "there" yet. The expectation is that the community will generate DSC resources, which just isn't true at the moment. The ALM rangers did a bunch for managing common things like web servers and windows services, but they don't loving work half the time or have bugs that ruin their idempotency. And to top it off, the resources aren't easily discoverable. Chef, on the other hand, has a huge marketplace of cookbooks for every scenario imaginable. I dislike Chef (it's way too fiddly to install and configure because of the nightmarish web of dependencies it has), but it's superior to DSC in every way right now and fills the same niche. There needs to be a DSC equivalent to the Chef Supermarket and a way to search it while writing your DSC scripts for DSC to gain any traction. And that's for Windows. For Linux, DSC is brand new and has basically nothing. If I can't manage a LAMP stack with DSC, easily, why would I use it over a competing tool?

Re: Enable-PSRemoting, I did mention running that in my post. PSRemoting has been extremely smooth for me in a domain environment; I should have mentioned that. I still haven't delved too deeply into setting it up across domains or in non-domain environments (because I only have a single forest, single domain, and like 7 DMZ windows servers compared to 150+ in AD), but it's definitely more work that way. Still, it's something on my very long list of things to do, and I'm looking forward to tackling it.

Ithaqua posted:

Thank god. I haven't been keeping too close of an eye on PowerShell 5, so I must have missed that. I knew they were doing OneGet/package management, but I didn't realize it extended to DSC resources. Hopefully it operates similar to NuGet, in that if you use a DSC resource that's not installed on the system, it's automatically downloaded and installed... Dealing with DSC modules kind of sucks right now because you have to explicitly install them everywhere prior to using them. I just keep them in source control and run a "custom resources" DSC script prior to applying the actual DSC script. In Chef world you have your Chef repo that just gets cloned to every machine that runs your cookbooks.


There's no doubt that v4 DSC is extremely bare. That being said, I actually quite like it, and v5 has addressed some of the major pain points. Even in v4 though you don't have to explicitly install resource modules everywhere as long as you use a Pull server, which most everyone who uses DSC does. That's one of the biggest advantages of a Pull server: it automatically determines the modules a configuration needs, then downloads and installs them on the target node.

Chef and Puppet are definitely more mature, by leaps and bounds, but DSC is getting there, and Microsoft is definitely adopting a more agile release strategy (see WMF 5), so I think we'll start to see those improvements a lot more quickly without having to wait for Windows Server 2020, 2024, etc.

I think my favorite thing about DSC that (I think) you don't find in other CM products is encrypted credential support. Ostensibly, you can use this to securely include any secret information in your configs.
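For reference, a minimal sketch of what the encrypted-credential setup looks like in a v4 configuration. The node name, certificate path, and thumbprint below are placeholders, not real values:

```powershell
# Sketch: encrypting credentials in a DSC configuration (v4).
# CertificateFile is the public key used to encrypt credentials into the
# MOF; Thumbprint identifies the private key the LCM uses to decrypt them
# on the target node. All values here are placeholders.
$ConfigData = @{
    AllNodes = @(
        @{
            NodeName        = 'SERVER01'
            CertificateFile = 'C:\certs\server01-dsc.cer'
            Thumbprint      = '<cert thumbprint here>'
        }
    )
}

Configuration ServiceWithCreds {
    param ([PSCredential]$Credential)
    Node $AllNodes.NodeName {
        Service SomeService {
            Name       = 'SomeService'
            State      = 'Running'
            Credential = $Credential  # lands in the MOF encrypted, not plaintext
        }
    }
}

ServiceWithCreds -ConfigurationData $ConfigData -Credential (Get-Credential)
```

Without the certificate entries you'd have to set PsDscAllowPlainTextPassword, which stores the password in the MOF in the clear, so this is worth the setup.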

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Briantist posted:

Even in v4 though you don't have to explicitly install resource modules everywhere as long as you use a Pull server, which most everyone who uses DSC does. That's one of the biggest advantages of a Pull server is that it automatically determines the modules you need and downloads them and installs them on the target node.

That may be true for people using it strictly for environment configuration management, but that's definitely not the case for those using it as part of a release pipeline for software. I see plenty of folks using DSC, but have yet to see anyone using a pull server.

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

Ithaqua posted:

That may be true for people using it strictly for environment configuration management, but that's definitely not the case for those using it as part of a release pipeline for software. I see plenty of folks using DSC, but have yet to see anyone using a pull server.
Oh that's interesting, I hadn't thought about it from that point of view. I'd love to hear more about your workflow.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Briantist posted:

Oh that's interesting, I hadn't thought about it from that point of view. I'd love to hear more about your workflow.

It's not my workflow per se, it's my clients' workflows. They use DSC in conjunction with a release management tool (like Microsoft's Release Management!) to push their software to environments and enforce correct configuration of those environments. Some may well eventually set up a pull server and decouple "pushing bits" from "enforcing correct configuration", but the infrastructure management part is really secondary from their perspective. Their driving consideration is that their software is correctly installed and running, and that configuration changes are source controlled, auditable, and comprehensible to non-developers.

They don't need a pull server because the enforced deployment pipeline (Dev -> QA -> Staging -> Prod, for example) is going to shake out any configuration drift problems in lower environments like QA or staging.

New Yorp New Yorp fucked around with this message at 02:23 on Aug 24, 2015

Methanar
Sep 26, 2013

by the sex ghost
Hi, I make bad decisions and need to unbreak a bunch of dns settings.

I was thinking I could push out this command to all affected machines.

quote:

PS C:\> Invoke-Command -computername (get-content workstations.txt) -command { set-networkadapterdns "local area connection" -dnsserver 129.129.30.7, 8.8.8.8 }

My first problem though is how can I populate workstations.txt with all computers running windows 7?

Secondly, how can I enable remote powershell access on all these workstations. I was thinking I could try to push some kind of GPO to do it for me but it turns out you need functioning DNS for GPs to push to clients.

Thirdly, am I doing this entirely wrong and is there a better way? Such as adding another vnic to an existing DNS server with the IP that all the incorrect DNS settings are pointing to? At least as a temporary fix.

AreWeDrunkYet
Jul 8, 2006

Methanar posted:

Hi, I make bad decisions and need to unbreak a bunch of dns settings.

I was thinking I could push out this command to all affected machines.


My first problem though is how can I populate workstations.txt with all computers running windows 7?

Secondly, how can I enable remote powershell access on all these workstations. I was thinking I could try to push some kind of GPO to do it for me but it turns out you need functioning DNS for GPs to push to clients.

Thirdly, am I doing this entirely wrong and is there a better way? Such as adding another vnic to an existing DNS server with the IP that all the incorrect DNS settings are pointing to? At least as a temporary fix.

(get-adcomputer -filter * -properties operatingsystem | where-object {$_.operatingsystem -like "*7*"}).name | out-file win7comps.txt

As for the rest of it, have you disabled the local admin accounts on those computers, and do you know the passwords? It's not exactly powershell, but you could use net use to map, say, c:\temp on the remote machine with the local credentials. Then put some netsh commands in a batch file, copy over the batch file, and execute it with psexec (again using local credentials). It's dirty and old-fashioned, but it may get the job done.

AreWeDrunkYet fucked around with this message at 02:11 on Aug 26, 2015

Venusy
Feb 21, 2007
I had to do this in our v2 environment without remoting enabled. Can't get the exact script I used right now, but from memory:
code:
$dnsservers = "129.129.30.7", "8.8.8.8"
(get-wmiobject win32_networkadapterconfiguration -Filter {ipenabled="true"} -ComputerName (Get-Content Computers.txt)).SetDNSServerSearchOrder($dnsservers)
I did it in a foreach loop rather than like this, and I need to double check the syntax for SetDNSServerSearchOrder.
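The foreach version would look roughly like this. Same caveat as above: untested sketch from memory, it assumes the same Computers.txt list of hostnames, and the ReturnValue check is based on SetDNSServerSearchOrder returning 0 on success:

```powershell
# Sketch: set DNS servers via WMI on a list of remote machines (works on
# PS v2, no remoting required). Untested; try one machine first.
$dnsservers = "129.129.30.7", "8.8.8.8"

foreach ($computer in (Get-Content Computers.txt)) {
    # Only adapters with IP enabled; skips loopback/disconnected ones.
    $adapters = Get-WmiObject Win32_NetworkAdapterConfiguration `
        -Filter "IPEnabled = 'true'" -ComputerName $computer

    foreach ($adapter in $adapters) {
        $result = $adapter.SetDNSServerSearchOrder($dnsservers)
        if ($result.ReturnValue -ne 0) {
            Write-Warning "$computer : SetDNSServerSearchOrder returned $($result.ReturnValue)"
        }
    }
}
```

The per-computer loop also means one unreachable machine throws an error you can catch instead of sinking the whole pipeline.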

unclenutzzy
Jun 6, 2007
This seems so simple but I'm just not getting it.

On Server 2008 R2, I need to consolidate a bunch of AD groups into one mega group. I'm not allowed to just add the existing groups as members, it needs to be user accounts as members. I've been trying to do this:

code:
 $UserAccounts = Get-ADGroup -filter { Name -like "Group Naming Scheme *" } | Get-ADGroupMember
This returns me all of the users I need and piping it to Get-Member tells me the type is Microsoft.ActiveDirectory.Management.ADPrincipal. Why, then, when I run this:

code:
 Get-ADGroup -Identity "Mega Group" | Add-ADGroupMember -Members $UserAccounts
does it error out telling me that "The specified account name is already a member of the group." I can confirm in both AD and PowerShell that the group is empty - what is it trying to add? What am I missing? I've tried this a bunch of different ways and the only other error I was getting was Cannot convert SystemObject into ADPrincipal whatever it is. Even when piping that variable to GM it would tell me it's an acceptable type for Add-ADGroupMember. I wrote those bits a few hours ago but I had to change gears so I don't have them on hand. I'll try and recreate it.

unclenutzzy fucked around with this message at 20:24 on Aug 27, 2015

AreWeDrunkYet
Jul 8, 2006

code:
$groups = Get-ADGroup -filter { Name -like "Group Naming Scheme *" }
foreach($group in $groups){
     $useraccounts = get-adgroupmember $group
     foreach($useraccount in $useraccounts){
              get-adgroup "megagroup" | add-adgroupmember -members $useraccount
     }
}
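One wrinkle with adding per-user like that: if the same user is a member of more than one of the source groups, you'll hit the "already a member" error again, and that's probably what was happening in the first place since Add-ADGroupMember chokes on duplicates in the -Members array too. De-duplicating first avoids it (sketch; group names are taken from the posts above):

```powershell
# Collect members of every matching group, de-duplicate on DN, then add
# them all in a single call. Group names follow the earlier examples.
$members = Get-ADGroup -Filter { Name -like "Group Naming Scheme *" } |
    Get-ADGroupMember |
    Sort-Object -Property DistinguishedName -Unique

# -Members accepts an array, so one call covers everybody.
Add-ADGroupMember -Identity "Mega Group" -Members $members
```

Doing it in one Add-ADGroupMember call is also much faster than one directory write per user.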

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

AreWeDrunkYet posted:

(get-adcomputer -filter * -properties operatingsystem | where-object {$_.operatingsystem -like "*7*"}).name | out-file win7comps.txt

As for the rest of it, have you disabled the local admin accounts on those computers, and do you know the passwords? It's not exactly powershell, but you could use net use to map, say, c:\temp on the remote machine with the local credentials. Then put some netsh commands in a batch file, copy over the batch file, and execute it with psexec (again using local credentials). It's dirty and old-fashioned, but it may get the job done.
Probably being pedantic here unless you're using this in a very large organization, but it's better to do the filtering within the cmdlet:

code:
Get-ADComputer -Filter { OperatingSystem -like "*7*" } | Select-Object -ExpandProperty Name | Out-File win7comps.txt
I also used Select-Object in the pipeline so that each name can be written one at a time, just because it seems more "PowerShell" to me, but it's likely that Get-ADComputer is getting all the results from the domain controller in one call anyway, so it doesn't really matter.

AreWeDrunkYet
Jul 8, 2006

Briantist posted:

Probably being pedantic here unless you're using this in a very large organization, but it's better to do the filtering within the cmdlet:
code:
Get-ADComputer -Filter { OperatingSystem -like "*7*" } | Select-Object -ExpandProperty Name | Out-File win7comps.txt

Doing the filtering inside the cmdlet is actually probably better in a larger organization, since using filter * and where-object returns all the computers then runs through them with where-object. Using the filter within the get-adcomputer cmdlet only returns the relevant computers.

The issue I've run into though is that -like is not an acceptable operator for some properties. I've found that it doesn't work on distinguishedname, and presumably there are others so I usually just default to where-object unless I know it's going to be a large search.

Annnnd this actually prompted me to look it up, turns out it's just limited to DNs:

quote:

The wildcard character "*" is allowed, except when the <AD Attribute> is a DN attribute. Examples of DN attributes are distinguishedName, manager, directReports, member, and memberOf. If the attribute is DN, then only the equality operator is allowed and you must specify the full distinguished name for the value (or the "*" character for all objects with any value for the attribute).

By the way, yours would still need to be

code:
Get-ADComputer -Filter {OperatingSystem -like "*7*"} -property operatingsystem | Select-Object -ExpandProperty Name | Out-File win7comps.txt

AreWeDrunkYet fucked around with this message at 20:54 on Aug 29, 2015

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

AreWeDrunkYet posted:

By the way, yours would still need to be

code:
Get-ADComputer -Filter {OperatingSystem -like "*7*"} -property operatingsystem | Select-Object -ExpandProperty Name | Out-File win7comps.txt
Good catch; thanks!

I did forget to include -Property OperatingSystem but in fact it's not necessary (in this case). This is another in a long list of weirdness in the AD cmdlets. Sometimes you have to reference the property explicitly to filter on it, sometimes you don't, so to be safe, I do always specify the property explicitly (except when I forget to).

One caveat to filtering on a property without specifying it is that even though the filter will work, it won't return that property unless it's one of the defaults (in this case it doesn't matter since we're only after the computer name), so yet another reason to explicitly reference it.

Hadlock
Nov 9, 2004

I just found out that Notepad++ has Sublime Text-style document map feature called "Document Map"

Also it looks like Powershell ISE has plugin support; does anyone know a way to get "Document Map" functionality into Powershell ISE? Or alternately the Powershell Console window into Notepad++? How Powershell ISE handles scripting is vaguely similar to Python so I suspect it's possible, I just haven't figured that out yet.

I guess what I'm trying to do here is shoehorn in my favorite parts of Sublime Text and Powershell ISE into Notepad++

12 rats tied together
Sep 7, 2006

The SublimeREPL addon for Sublime Text 3 has a Powershell repl/interpreter/ise/whatever that works pretty great right out of the box. Not really sure how it works in the back-end with respect to modules, permissions, inheritance etc but it appears to just piggyback off of your most current ISE session.

It doesn't have intellisense. It doesn't handle "clear" very well, and it's technically some imitation of an interactive prompt in a program that, to my knowledge, wasn't really designed to support interactive prompts. It's much better than the terminal/IRC client plugins, though, because at least you aren't able to "backspace" over the prompt and cause the frame to hang indefinitely and then crash the editor. I don't really see a ton of reason to use it over the ISE, though, unless you're doing a lot of work in other scripting languages as well.


e: I guess one nice thing is that it handles having multiple scripts/tabs open much better than the ISE does, and I also don't believe that the PS4 ISE has support for the working folder or "open project" options that sublime does. It's also much easier to scroll through your output history, you get a built-in regex checker, and the regex checker also parses your output history because technically it is just a text file. This is actually pretty cool, in general.

12 rats tied together fucked around with this message at 16:36 on Aug 31, 2015

Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin
Man, I worked really hard on a script that takes some uninitialized disks, partitions them and then mounts them to folders.

I found out today that it needs to work on Server 2008.

Anyone have any tips on how to manipulate disks with .NET or WMI or something?

I've been spoiled by Get-Disk.

Hadlock
Nov 9, 2004

I didn't know Sublime has the script output stuff that ise does, I'll have to look at that again. I looked at Atom and it looks like it might too, but I can't find a way to do Sublime style document map in it. Maybe I can just get my office to expense my copy of Sublime...

Re: powershell on 2008

2008 or 2008 R2?

R2 has support for at least psv4

Otherwise, try talking to diskpart.exe via its script mode or whatever API it talks. Diskpart is a pretty console friendly set of tools iirc.

AreWeDrunkYet
Jul 8, 2006

Dr. Arbitrary posted:

Man, I worked really hard on a script that takes some uninitialized disks, partitions them and then mounts them to folders.

I found out today that it needs to work on Server 2008.

Anyone have any tips on how to manipulate disks with .NET or WMI or something?

I've been spoiled by Get-Disk.

How about diskpart? Write a diskpart batch file with whatever variables you need in Powershell, then launch diskpart /s.
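As a sketch, that could look something like this. The disk number and mount folder are placeholders, and diskpart will happily destroy data, so test it on a scratch VM first:

```powershell
# Sketch: build a diskpart script from variables, then run it with /s.
# Disk number and mount path below are placeholders.
$diskNumber = 1
$mountPath  = 'C:\Mounts\Data1'

# The mount folder has to exist before diskpart can assign to it.
New-Item -ItemType Directory -Path $mountPath -Force | Out-Null

$script = @"
select disk $diskNumber
online disk noerr
attributes disk clear readonly
create partition primary
format fs=ntfs quick
assign mount=$mountPath
"@

# diskpart wants a plain-text script file; ASCII is the safe encoding.
$scriptFile = Join-Path $env:TEMP 'diskpart-mount.txt'
$script | Out-File -FilePath $scriptFile -Encoding ASCII
diskpart /s $scriptFile
```

You can build one script per disk in a loop and check $LASTEXITCODE after each diskpart run to catch failures.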


Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin
Diskpart is working great. Thanks for the suggestion!
