nielsm
Jun 1, 2009



A slightly philosophical question:

What is the verb to use, if you want to set some implicit state/defaults for subsequent commands to act on?
Is that even a "sanctioned" idiom in Powershell?

The specific case is that I'm working on a module to manage a set of application config files (XML), where you may need to add, remove, and modify items inside.

I currently read the files, convert the items from the XML representation to simple .NET objects (defined as C# classes), and have some tools to work on those, plus a few commands to update the config files with the new objects. The commands that read/write files take parameters indicating which files to work on, and I have defaults set up for the cmdlets to use the most common config file set.
The idea I have is writing a command that sets some implicit state to control which config file set the other commands will work on, if the file parameters aren't given on each command.


Command names I have considered:

Select-FooConfig - bad, since Select is supposed to be used to pick out elements from a collection, not change state

Enter-FooConfig - seems reasonable, but the only common command using the verb is Enter-PSSession, which effectively changes the entire environment; it also implies a stack of states

Push-FooConfig - seems reasonable; the main example is Push-Location, which affects the environment less wholesale than Enter-PSSession, and it also implies a stack of states

Use-FooConfig - maybe? not really any commands I know of that use it

Set-FooConfig - not really, since it doesn't change any permanent state by itself


nielsm
Jun 1, 2009



Zaepho posted:

Have you thought about New-FooConfig? You're creating a Configuration object of some sort to pass to all the other Foo cmdlets, like a DB Connection object.

Not quite... my intention is to have it be implicit state, similar to the current directory.

Edit: On the other hand, the command aliased to "cd" is Set-Location which doesn't really do permanent state changes either. So maybe Set-FooCurrentConfig would be it.

nielsm fucked around with this message at 15:31 on Dec 10, 2015

nielsm
Jun 1, 2009



Briantist posted:

I would recommend not storing state in your own variable or whatever. That's not really done in powershell for the most part. When you think about something like Push-Location, it's really modifying the state of the system (at least the current environment; by changing the working directory), so it's not limited to a script.

I think the way to go, as distasteful as it might sound, is to make every cmdlet take the parameter, and make it mandatory.

To set defaults, the caller can use $PSDefaultParameterValues. That really is the idiomatic way to do this in PowerShell.

If you implement a cmdlet, its behavior should probably be to set the relevant value in $PSDefaultParameterValues (this is actually useful, because the caller may not know or want to enumerate all of the commands it applies to). Just make sure you preserve any existing values in the hashtable.

For that purpose, I think Set-FooDefaultConfig is an appropriate name.

That's a really good point, and since the config filename parameters I'm already taking have quite unique names, they wouldn't clash with other things either, even if I use wildcard command names in the $PSDefaultParameterValues hash.

Thanks.
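For reference, such a cmdlet could be sketched like this (Set-FooDefaultConfig is Briantist's suggested name; this is an untested sketch):
code:
function Set-FooDefaultConfig {
    Param(
        [Parameter(Mandatory=$true)]
        [string]$FooConfigFile
    )
    # Wildcard key: applies to any command with a -FooConfigFile parameter.
    # Assigning to a single key overwrites only that entry, so the rest
    # of the caller's $PSDefaultParameterValues hashtable is preserved.
    $global:PSDefaultParameterValues["*:FooConfigFile"] = $FooConfigFile
}
The assignment is made at global scope so the default survives after the function returns.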

nielsm
Jun 1, 2009



Okay another question, regarding the same module. I'm having trouble making my Get-FooItems command take either name, id, or nothing, and also have a "comfortable" syntax.

code:
function Get-FooItems
{
    [CmdletBinding(DefaultParameterSetName="All", PositionalBinding=$false)]
    #[OutputType([FooItem])]
    Param
    (
        [Parameter(Mandatory=$true, ParameterSetName="Id", ValueFromRemainingArguments=$True, ValueFromPipeline=$true)]
        [int[]]
        $Id,

        [Parameter(Mandatory=$true, ParameterSetName="Name", Position=0, ValueFromRemainingArguments=$True, ValueFromPipeline=$true)]
        [string[]]
        $Name,

        [Parameter(Mandatory=$false, ValueFromPipelineByPropertyName=$false)]
        [Alias("File")]
        [string]
        $FooConfigFile = "\\server\with\file.xml"
    )

    Begin
    {
        #$xml = [xml](Get-Content $FooConfigFile)
    }
    Process
    {
        if ($psCmdlet.ParameterSetName -eq "Id") {
            $Id | % {"x$_"} #get object with id
        }
        elseif ($psCmdlet.ParameterSetName -eq "Name") {
            $Name | % {"y$_"} #get objects matching name
        }
        else {
            1,2,3 #get all objects
        }
    }
}
The Get-Help output looks reasonable:
pre:
SYNTAX
    Get-FooItems [-FooConfigFile <string>]  [<CommonParameters>]
    
    Get-FooItems -Id <int[]> [-FooConfigFile <string>]  [<CommonParameters>]
    
    Get-FooItems [-Name] <string[]> [-FooConfigFile <string>]  [<CommonParameters>]
But actually calling the command doesn't work as intended.

Works:
pre:
Get-FooItems                     # gets all
Get-FooItems -Id 1234            # gets a single id
Get-FooItems -Id 123,456         # gets multiple id's
Get-FooItems -Name abcd          # gets by a single name
Get-FooItems -Name abc,def       # gets by multiple names
123,456,789 | Get-FooItems       # gets multiple id's
"abc","def","ghi" | Get-FooItems # gets by multiple names
Fails:
pre:
Get-FooItems abc           # (A) expected to work, get items by single name
Get-FooItems 123           # (B) not expected to work, get item by id
Get-FooItems abc,def       # (C) expected to work, get items by multiple names
Get-FooItems 123,456       # (D) not expected to work, get items by multiple id's
Get-FooItems abc def       # (E) expected to work, get items by multiple names
Get-FooItems 123 456       # (F) not expected to work, get items by multiple id's
Get-FooItems -Name abc def # (G) expected to work, get items by multiple names
Get-FooItems -Id 123 456   # (H) expected to work, get items by multiple id's
Cases B, D, and F are supposed to fail, since the Id parameter shouldn't be positional.
However, cases A, C, and E should work, since the Name parameter is positional, the FooConfigFile parameter is not positional, and the -Name flag is supposed to be optional (per the generated help).
Cases G and H should work with values specified as further arguments, since both the Id and Name parameters are specified as ValueFromRemainingArguments. Shouldn't they then pick up any later positional arguments?

nielsm
Jun 1, 2009



Briantist posted:

This is a tough one. The parameter binding process sometimes seems to bind things in unexpected ways which leads to ambiguity where there appears to be none. This is compounded by pipeline support, ValueFromRemainingArguments, multiple parameter sets, and positional binding, and you've combined them all!

Some questions, since your intentions are a bit ambiguous to me:

Are those 3 parameter sets you see in the help intended? That is, do you really want a third parameter set where only $FooConfigFile is specified? Or did you actually just want that to be an optional parameter on the other two sets?

Do you really need ValueFromRemainingArguments? I find that this is usually a Bad Idea and it's really only used for script parameters when you have little control over how it's going to be called (some other pre-made thing is going to use spaces to separate an array of stuff and you can't change it). Getting rid of this would simplify it.

The 3 parameter sets the help shows are as intended, yes. The internal handling differs quite a lot depending on whether I need to get all items, filter by name, or fetch by ID, so being able to test on $PSCmdlet.ParameterSetName makes the implementation simpler and easier to follow.
The config file is always needed, since that's where it fetches the data from.

ValueFromRemainingArguments is not strictly necessary; I just thought it would be neat if you could filter by multiple names or fetch multiple IDs without worrying about commas. But it seems to be more trouble than it's worth, so I may as well scrap it.
Pipeline support also isn't strictly needed, at least not for the Name parameter. It seems most relevant for the Id parameter, although I'm not sure if there are any use cases for that either.

The most important case of the failed ones is A, and I don't understand why it fails. There should be exactly one possible call of the function that takes one positional parameter, and that's the "Name" parameter set.
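For comparison, here's a sketch of the same parameter block with ValueFromRemainingArguments and pipeline binding dropped (untested). With only one positional candidate left across all sets, Get-FooItems abc should then bind to -Name:
code:
function Get-FooItems
{
    [CmdletBinding(DefaultParameterSetName="All", PositionalBinding=$false)]
    Param
    (
        [Parameter(Mandatory=$true, ParameterSetName="Id")]
        [int[]]
        $Id,

        [Parameter(Mandatory=$true, ParameterSetName="Name", Position=0)]
        [string[]]
        $Name,

        [Parameter(Mandatory=$false)]
        [Alias("File")]
        [string]
        $FooConfigFile = "\\server\with\file.xml"
    )

    # Process logic unchanged: switch on $PSCmdlet.ParameterSetName
}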

nielsm
Jun 1, 2009



Swink posted:

I want to:

*Get a list of mailboxes
*If those mailboxes dont already have an Out of Office reply set,
*apply my generic out of office reply.

Parenthesize the command in the if statement and pull out the property from that:
code:
if ((Get-MailboxAutoReplyConfiguration -Identity $user.guid).AutoReplyState -eq Disabled) {
  ...
}

nielsm
Jun 1, 2009



If you're on PowerShell 3 or later, this part:
code:
Select -Property DisplayName | Format-Table -HideTableHeaders | Out-String
is done much simpler like this:
code:
ForEach-Object DisplayName
or, if you don't mind depending on default aliases:
code:
% DisplayName
In PS 3, the ForEach-Object command was extended with an alternate syntax that takes a single property or member function name and just pulls/calls that on each object.

The syntax compatible with older versions would be:
code:
ForEach-Object { $_.DisplayName }
That similarly pulls out the DisplayName property, and produces an array of string objects, one string for each input object.

Doing that allows you to simplify the following two lines to:
code:
$mailboxAlias = $oldMailboxes | ForEach-Object { (Get-Mailbox -Identity $_).Alias }
However, you can probably do the entire thing much more cleanly and safely, avoiding dependence on properties like DisplayName, which could theoretically cause multiple matches since it isn't guaranteed to be unique. (What if a new employee came on board with exactly the same display name as someone who quit 7 months ago?)

How about this?
code:
filter script:Where-MailboxIsOld {
    Where-Object { ($_ | Get-MailboxStatistics).LastLogoffTime -le (Get-Date).AddMonths(-6) }
}

$oldMailboxes = Get-Mailbox -OrganizationalUnit "Former Employees" | Where-MailboxIsOld

$exportJob = $oldMailboxes | ForEach-Object { New-MailboxExportRequest -Mailbox $_.Identity -Filepath ("\\server\PST\" + $_.Alias + ".pst") }

nielsm
Jun 1, 2009



beepsandboops posted:

Interesting! I didn't know about filter script. I can't find much about it in Microsoft's official documentation though. Is it an alias for something else?

It's a filter function, placed in the "script" namespace.

The "filter" keyword introduces a filter function. A filter function is essentially a shortcut for writing a full function that has only a Process block, implicitly takes a single pipeline input, and produces pipeline output.
They're a simple way to wrap sub-operations you want to name or re-use.

The "script:" part is a qualifier that ties to the "Where-MailboxIsOld" name. I'm honestly not entirely sure about it, but it's supposed to place the function in the script namespace, meaning it wouldn't get exported if you load the script as a module, or source it in an interactive session.

The rest of it is effectively a Process block inside a function, meaning the body of the filter function gets executed once for every item in the input pipeline.
Because of that, it should also be possible to write it with an if statement, that then only outputs the mailbox back out if it's sufficiently aged.
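A sketch of that if-statement variant (same six-month cutoff as above, untested):
code:
filter script:Where-MailboxIsOld {
    # $_ is the current pipeline item; only emit it back out if it's old enough
    if (($_ | Get-MailboxStatistics).LastLogoffTime -le (Get-Date).AddMonths(-6)) {
        $_
    }
}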


Something else to consider: have PS 4 or 5 on your workstation, and set up a remoting session to your Exchange server. It's much more convenient; it lets you work on Exchange through a local PS window, or even use the ISE, without having to open a remote desktop to another machine or even having the Exchange management tools installed.

nielsm fucked around with this message at 22:28 on Jan 13, 2016

nielsm
Jun 1, 2009



I don't think there is. Instead, you can try matching the entire filename into three match groups, a prefix, the number, and a suffix, with an if ($x -match "re"). Then use the match groups to reconstruct the new filename and perform the rename.
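As a sketch of that approach (the pattern, the file extension, and the +1 on the number are made up for illustration):
code:
Get-ChildItem *.txt | ForEach-Object {
    # Three capture groups: prefix, number, suffix
    if ($_.Name -match '^(report)(\d+)(\.txt)$') {
        $newNumber = [int]$Matches[2] + 1
        # Reconstruct the filename from the groups and rename
        Rename-Item $_.FullName -NewName ($Matches[1] + $newNumber + $Matches[3])
    }
}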

nielsm
Jun 1, 2009



nm do the above

nielsm
Jun 1, 2009



FISHMANPET posted:

I think the author of the cmdlet has to write in support for -WhatIf, so that command just does it poorly. Are you having actual troubles executing the command, and/or is there a reason you're not using Disable-NetAdapterVmq?

Do that if it's all you need to do.

Apart from that, the parameter is called "-enabled".

I think -WhatIf depends on the implementation having confirmation checks. In WhatIf mode it then just prints the step that would have been confirmed, and continues as if the user rejected it.
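For your own functions, -WhatIf support comes from SupportsShouldProcess; a minimal sketch (Remove-FooItem is a made-up name):
code:
function Remove-FooItem {
    [CmdletBinding(SupportsShouldProcess=$true)]
    Param([Parameter(Mandatory=$true)][string]$Name)

    # In -WhatIf mode, ShouldProcess prints the "What if:" message and
    # returns $false, so the destructive part is skipped.
    if ($PSCmdlet.ShouldProcess($Name, "Remove item")) {
        # ...actually remove the item here...
    }
}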

nielsm fucked around with this message at 22:35 on Feb 4, 2016

nielsm
Jun 1, 2009



Judge Schnoopy posted:

I've got an odd powershell question that I can't seem to find an answer on.

When I run this locally on HostName:
Get-ChildItem HKCU:\Software\Microsoft\Windows\CurrentVersion\Uninstall | foreach-object {Get-ItemProperty $_.PsPath}

I get the exact results I'm looking for. When I try the same command from my computer using "Invoke-Command -computer HostName {*above command*}" it says the path does not exist.

Except I know it exists because I can run the same thing from the target computer and get my result. Is there a limitation with remote powershell that can't access this registry path? Or do I need to edit the HKCU path to specifically aim it at that computer?

HKCU (HKEY_CURRENT_USER) only really exists for interactive logon sessions. When you remote to another machine, which user is "current"?

If you want to enumerate Uninstall entries for local user accounts on another machine, your best bet is really to force the system to load the user's profile. That should in turn cause the user hive to appear under HKEY_USERS (named after the account SID), where you might then be able to access it with remoting.

As for forcing the user account service to load a profile, remotely, no idea.

nielsm
Jun 1, 2009



Are you sure it's the same user hive you're looking at with both commands? Really sure?

nielsm
Jun 1, 2009



Judge Schnoopy posted:

...

gently caress

Thank you. Digging in to other subkeys I'm finding subtle differences in values.

So now I'm back to square 1 where I cannot check which of the 4 OneClick applications are installed on remote machines. I guess I could just shotgun the uninstall string and let it error out if the application isn't there.

Try, on the remote machine (I think it must be done on the remote, at least), setting up HKEY_USERS as a PSDrive and then checking what profiles are available there:

code:
New-PSDrive -Provider Registry -Root HKEY_USERS -Name HU
Get-ChildItem HU:\ -ErrorAction Ignore | where name -NotLike "*_Classes"
You'll want the Ignore error action since some of the profiles may just not be accessible for your user, and filter off the Classes hives too. Normal user accounts should then all have a SID of the form "S-1-5-21-*".

Just keep in mind this will only get you user hives for profiles that are loaded on that machine right now. That usually means you'll see them for system accounts, service accounts, whatever account you're remoting in with, and any users who have logged on locally to the machine since the last boot. (Profiles of logged-off users tend to stay loaded, at least for a while.)

nielsm
Jun 1, 2009



Alternative solution, perhaps: Use the Start-Job and Wait-Job cmdlets to create a job for each computer on your list, and have them run in parallel.
Look up the about_Jobs help file to get some examples.

If you go that way, you should probably not use "echo" or similar direct output, but rather have the jobs return one or more objects with the result, which you can then collate and process afterwards.
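A sketch of that pattern (the $computers list and the Test-Connection payload are just placeholders, untested):
code:
$jobs = $computers | ForEach-Object {
    Start-Job -ArgumentList $_ -ScriptBlock {
        Param($computer)
        # Emit an object rather than plain text, so results can be
        # collated and processed after all jobs finish
        [pscustomobject]@{
            Computer = $computer
            Online   = Test-Connection $computer -Count 1 -Quiet
        }
    }
}
$results = $jobs | Wait-Job | Receive-Job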

nielsm
Jun 1, 2009



Yes, you can just use string interpolation:
code:
$site = read-host "Site to check"
$result = invoke-webrequest "http://downforeveryoneorjustme.com/$site"

nielsm
Jun 1, 2009



Judge Schnoopy posted:

So after writing a bunch of poo poo today I learned Forms cannot be passed to remote computers regardless of pssession or invoke command. Is best practice to create a secondary powershell or batch script and open that with invoke command? Or is there a better way to get a remote computer to display a message box with an input text box that writes to a file?

Also, if a computer has a default security setting to not allow powershell execution, will invoke command fail when trying to execute it? Or will invoke command somehow use my admin credentials to bypass the security?

You want the remote computer to display a message to and request input from whatever user is currently logged in on the console?

I believe you will need to somehow launch a new process for the user under their login session for that to work. Otherwise you'll probably run into session 0 isolation.
Assuming the account you use for remoting to the computer has local administrator rights, I think you can create a scheduled task that runs one time, as the currently logged-in user, and point it at a script file you've either copied to the machine or placed on the network.

nielsm
Jun 1, 2009



The actual best solution, I'd say, would be to use the Task Scheduler to define a task that runs interactively as the user and have it trigger on the appropriate event. That or PSEXEC are your primary choices for getting a program to run on an interactive desktop from remote.
Or if you prefer to search through the event log with PowerShell (remotely?) and trigger it from that, use New-ScheduledTask to create a task that runs as his user, while he is logged on, right away or a few minutes later.

nielsm
Jun 1, 2009



Does anyone know of a convenient way to interactively pick an OU (to create some object in) from AD, in a script? I can probably write something myself to make a basic menu kind of thing, but maybe something already exists.

nielsm
Jun 1, 2009



Ended up writing a simple CLI menu function:
code:
function InteractivePickOU($ouBase, $favorites) {
    $currentList = $favorites | Get-ADOrganizationalUnit -Properties Name,CanonicalName,DistinguishedName
    $selected = $null

    while ($selected -eq $null) {
        Write-Host "Select an OU to place object in:"
        $i = 1
        $currentList | ForEach-Object {
            Write-Host "  $i. $($_.CanonicalName)"
            $i = $i + 1
        }
        $choice = Read-Host "Select an entry from the list, or type a search term"
        $choice = $choice.Trim();

        if ($choice -match "^\d+$") {
            $choice = [int] $choice
            if ($choice -ge 1 -and $choice -le $currentList.Length) {
                $selected = $currentList[$choice-1]
            } else {
                Write-Warning "Invalid choice"
            }
        } elseif ($choice -eq "") {
            Write-Warning "Invalid choice"
        } else {
            Write-Host
            Write-Host "Searching for OUs..."
            $searchTerm = ("*"+$choice+"*")
            $newList = Get-ADOrganizationalUnit -SearchBase $ouBase -Filter { Name -like $searchTerm } -Properties Name,CanonicalName,DistinguishedName
            $resultSize = $newList | Measure-Object | % Count

            if ($resultSize -eq 0) {
                Write-Warning "No results found for search"
            } elseif ($resultSize -ge 15) {
                Write-Warning "Found more than 15 results, truncating"
                $currentList = $newList | select -First 15
            } else {
                Write-Host "Found $resultSize results"
                $currentList = $newList
            }
        }

        Write-Host
    }

    return $selected
}

nielsm
Jun 1, 2009



slightpirate posted:

So good news is I've gotten my script to pull the testing OU into a CSV file, and I can edit it and push it back out to AD.

The bad news is, no matter what I change my $searchbase to, it only pulls from the Testing OU. Any ideas? I've bounced this off of different domain controllers, and ran the script off of a VM that's parked in our DC thinking maybe there was some VPN lag issues with running it from my laptop. I'm sort of at a loss here.

That code is just assigning some variables. Are you actually passing that to your Get-ADUser command as well?

nielsm
Jun 1, 2009



If you want to remove users from particular groups when disabling them, I'd do something like what sloshmonger suggests, but maybe turn it around a bit.

code:
$badgroups = @('CN=badgroup1,OU=groups,...', 'CN=badgroup2,OU=groups,...')
$users = import-csv users.csv
$users | foreach-object {
  $user = get-aduser $_.username -properties memberof
  $user.memberof | foreach-object {
    if ($badgroups -contains $_) {
      remove-adgroupmember -identity $_ -members $user.samaccountname
    }
  }
}
If you actually just want to remove every group from the user, just do the Remove-ADGroupMember unconditionally. Or you could have a list of "good" groups instead of "bad" ones.

There's really lots of ways to handle it.

nielsm
Jun 1, 2009



Get-ADUser is actually rather annoying regarding error handling. When you use the -Identity form (the default), it throws an exception rather than raising an error when no object is found. Passing -ErrorAction does nothing about that.

You have two options:
1. Catch the exception with try..catch
2. Search with -Filter instead
code:
try {
  $u = Get-ADUser doesnotexist
}
catch {
  Write-Host "not found"
}
code:
$u = Get-ADUser -Filter { samaccountname -eq "doesnotexist" }
if (-not $u) { Write-host "not found" }

nielsm
Jun 1, 2009



Dr. Arbitrary posted:


$Userinput.Split(",") | ForEach-Object {$_.ToUpper}


If you have PS 3.0 or newer, ForEach-Object can also take a member name directly, to extract a property or call a method. (Note that the original also needs parentheses, $_.ToUpper(), to actually call the method rather than just reference it.) Like this:

$Userinput.Split(",") | ForEach-Object ToUpper

nielsm
Jun 1, 2009



slightpirate posted:

I know this is going to sound goofy as all hell, but none of my scripts that were exporting to CSV are working now. The file gets created fine, but no headers or data get dumped and no errors are generated in ISE.

Any ideas on what one-off command I typed into my console weeks ago that I forgot about?

Have you checked that the variables or pipelines you send to CSV actually contain any data?

Also try piping your data to ConvertTo-CSV instead of Export-CSV, that will get you text output to view in the console instead.

nielsm
Jun 1, 2009



$AllADUsers is what? Nothing gets assigned to that in what you pasted.

nielsm
Jun 1, 2009



slightpirate posted:

oh goddamnit.

code:
$AllADUsers = Get-ADUser -server $ADServer -searchbase $SearchBase -Filter * -Properties *
I must have manually mashed that in and didn't include it in the script. It's working fine now!

Thanks goons, sometimes a second set of eyes doesn't hurt.

When I debug my PowerShell scripts, I usually do it by pasting each command in manually and examining the output. When something uses a variable I examine its current value, and if it looks wrong, double check how that value got there.

Just being methodical like that finds 99% of all mistakes.

nielsm
Jun 1, 2009



sloshmonger posted:

Is the destination share "C:"? Maybe it should be \\computer\c$\destinationpath

I don't even think : is a valid character in share names.

nielsm
Jun 1, 2009



22 Eargesplitten posted:

If I call a powershell script from within a powershell script, does the calling script keep going, or does it wait for the second to complete? I want to make mine keep going.

If you just call it, it runs synchronously. If you want it to be asynchronous, you need to put it in a job. See Get-Help Start-Job.
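A minimal sketch of that (the script filename is a placeholder):
code:
$job = Start-Job -FilePath .\second-script.ps1
# ...the calling script keeps going here...
Wait-Job $job | Receive-Job   # collect the output later, if you need it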

nielsm
Jun 1, 2009



SeaborneClink posted:

I mistakenly posted this in the Enterprise thread, and received a workable answer but am still hoping for something a little more... clean.


Anythonypants (a true hero) offered the below, which does work, but breaks tables. Any other ideas?

The right way is the Known Folders API.
https://msdn.microsoft.com/en-us/library/windows/desktop/bb776911.aspx

On mobile right now so can't type up an example.


Edit: As far as I can tell, the types for this aren't defined in .NET proper, so you have to declare the related interop signatures and more to make it work.

nielsm fucked around with this message at 08:00 on Jul 8, 2016

nielsm
Jun 1, 2009



Here's a "PowerShell" way of calling that function to get a known folder path:

code:
Add-Type  @"
using System;
using System.Runtime.InteropServices;

public class KnownFolders
{
    [DllImport("shell32.dll")]
    static extern int SHGetKnownFolderPath(
        [MarshalAs(UnmanagedType.LPStruct)] Guid rfid,
        uint dwFlags,
        IntPtr hToken,
        out IntPtr pszPath  // API uses CoTaskMemAlloc
        );


    public static readonly Guid Desktop = new Guid( "B4BFCC3A-DB2C-424C-B029-7FE99A87C641" );
    public static readonly Guid Documents = new Guid( "FDD39AD0-238F-46AF-ADB4-6C85480369C7" );
    public static readonly Guid Downloads = new Guid( "374DE290-123F-4565-9164-39C4925E467B" );


    public static string GetByGuid(Guid knownfolderid)
    {
        IntPtr pPath;
        int hr = SHGetKnownFolderPath(knownfolderid, 0, IntPtr.Zero, out pPath);
        if (hr == 0)
        {
            string s = Marshal.PtrToStringUni(pPath);
            Marshal.FreeCoTaskMem(pPath);
            return s;
        }
        else
        {
            Marshal.ThrowExceptionForHR(hr);
            return null;
        }
    }
}
"@

[KnownFolders]::GetByGuid([KnownFolders]::Downloads)
There's a more complete list of known folder GUIDs here: http://pinvoke.net/default.aspx/shell32.SHGetKnownFolderPath

nielsm fucked around with this message at 09:25 on Jul 8, 2016

nielsm
Jun 1, 2009



More like this:
code:
$jobObject = Start-Job {
    Copy-Item $src $dst
    whatever-script.ps1
}
Or if you don't have a particular reason to have that script as a separate file, you can just put its contents right in the job script block.

nielsm
Jun 1, 2009



Your code is an unreadable mess.
Indent your poo poo properly, and stop making gigantic pipelines in scripts you intend to re-use. Gigantic long pipes are mainly useful when experimenting on the command line, and even then, storing intermediate results in variables makes it much easier to figure out where something went wrong.

Here's a possibly fixed version:
code:
Import-Module activedirectory
#variable set
$ConfirmPreference = 'None'

#find users, move them to the disabled OU and disable the account. **SET DATE AND INITIALS IN DESCRIPTION**
$users = Import-Csv -path "C:\powershell\audit\auditprod2.csv"

$users = $users | Foreach {
  $u = Get-ADUser $_.SamAccountName
  $u = $u | Move-ADObject -TargetPath "OU=Disabled_Accounts Pending Deletion,OU=ACCOUNTS,DC=loldomain,DC=com" -PassThru
  $u = $u | Disable-ADAccount -passthru
  $u = $u | Set-ADUser -server "DCA" -Description "07/13/16 ABC" -passthru

  #removes users from AD groups. May warn about primary group (domain users) can ignore for now. Find way to exclude this group in v2.
  Get-ADPrincipalGroupMembership -Identity $u.distinguishedname | % {
    Remove-ADPrincipalGroupMembership -Identity $Users -MemberOf $_ -Confirm:$False
  }
}
Note that I do the group removal on each individual user, rather than all the users as one.

nielsm
Jun 1, 2009



Oh right, made a mistake editing it. The line should be:

Remove-ADPrincipalGroupMembership -Identity $u.distinguishedname -MemberOf $_ -Confirm:$False

nielsm
Jun 1, 2009



Which part of your script is actually supposed to invoke the commands to run on the remote machine?

That line starting a new Powershell.exe process just starts a process on your own machine.

Do your target machines have WinRM enabled? If so, use Invoke-Command (with the -AsJob flag) instead of Start-Job to run PowerShell code directly on each target machine.

If your target machines don't have WinRM enabled, or their execution policy doesn't allow remote scripting, you'll have to use some other trickery to get your code to run on them: either the old psexec utility, or setting up a task scheduler job.
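The WinRM route could look something like this (computer names and the payload are made up, untested):
code:
$computers = "pc1", "pc2", "pc3"   # hypothetical names
$job = Invoke-Command -ComputerName $computers -AsJob -ScriptBlock {
    # This block runs on each target machine over WinRM
    Get-Service -Name WinRM | Select-Object -ExpandProperty Status
}
Wait-Job $job | Receive-Job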

nielsm
Jun 1, 2009



You're missing my main point: Nothing in that code ever tells the remote computer to execute anything.

You execute PowerShell.exe on your local computer, reading a script from a remote location. The script may be read from a remote location, but it still executes on your local computer.
If you want to execute PowerShell code on a remote computer you need to use either Invoke-Command or Import-PSSession, or trickery with New-ScheduledTask or PSExec.

nielsm
Jun 1, 2009



Can you use classic Windows remote management with the target machines? E.g. open the Computer Management management console and connect to one of the machines, see its Device Manager etc.?

If so, I would suggest something like this:

InstallInstaller.ps1 (runs on your computer)
code:
$targets = import-csv "computers.csv"
$MyScriptsFolder = "C:\MyScripts\"
$PerformInstallScriptName = "PerformInstall.ps1"

$targets | foreach-object {
    $TargetShare = "\\$($_.computername)\C`$\Temp\"
    Copy-Item "$MyScriptsFolder$PerformInstallScriptName" "$TargetShare$PerformInstallScriptName"

    $cimsession = New-CimSession $_.computername
    $TaskAction = New-ScheduledTaskAction "PowerShell.exe" "-executionpolicy bypass -noprofile -noninteractive -file `"C:\Temp\$PerformInstallScriptName`"" -CimSession $cimsession
    $TaskTrigger = New-ScheduledTaskTrigger -at (Get-Date).AddMinutes(1) -randomdelay (New-TimeSpan -minutes 10) -once -CimSession $cimsession
    New-ScheduledTask -Action @($TaskAction) -Trigger @($TaskTrigger) -CimSession $cimsession
    Remove-CimSession $cimsession
}
PerformInstall.ps1 (runs on target computer)
code:
$InstallerSource = "\\server\share\MyApp"
$InstallerDest = "C:\Temp\MyAppInstaller\"
Copy-Item $InstallerSource $InstallerDest -Recurse

& "$($InstallerDest)setup.exe" -silent
This is completely untested, but it's how I would think it can be done, without relying on psexec. The PerformInstall script should run as your user on the target machine, since you created the scheduled task. You should be able to change that by passing extra credentials around, possibly also creating the CimSession as the other user.


Edit: Thinking about it, if you can't establish a CimSession like this, PSExec won't work either, since PSExec also depends on being able to connect to the service control manager of the remote computer to do its job.

nielsm fucked around with this message at 19:55 on Jul 15, 2016

nielsm
Jun 1, 2009



I'd guess it has to do with escaping there. Try with a simpler command first, like Write-Host, make sure you can get that working.

I think you must pass the -Command argument name to PowerShell.exe when giving it something to execute right away:
psexec $computer powershell.exe -command { write-host test }

There is also the -EncodedCommand option, which takes a Base64-encoded command and even has an example if you run PowerShell.exe -Help.

But really, it's probably safer to copy a script file or batch file to the remote and exec that, have the script figure out all its data on its own instead of messing around with escaping.

nielsm
Jun 1, 2009



That impersonation problem, can that perhaps be related to an issue I was having the other day?
I was trying to run a GUI program as another user, using Start-Process -Credential (Get-Credential), loading the program from a network location given by a full UNC path. I kept getting file-not-found errors loading the program. Passing -LoadUserProfile made no difference.
Doing the same thing with classic RunAs.exe works just fine.


nielsm
Jun 1, 2009



Why are you trying to PSExec something any more complicated than the name of a batch file?
