peak debt
Mar 11, 2001
b& :(
Nap Ghost

Erwin posted:

This:
code:
ForEach ($Group in $FilesToZip) 
{
	Write-Host $Group;
}
Outputs this:
code:
Microsoft.PowerShell.Commands.GroupInfo
...for each group that exists.

If you want to see what you can do with a certain object, use get-member. In your case

code:
ForEach ($Group in $FilesToZip) 
{
	$Group | get-member
}
You will see that the class has a member "Group" that's an array of Strings containing the group's members.
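If what you want is the actual file names inside each group, expand that Group property. A rough sketch, assuming $FilesToZip came out of a Group-Object call:
code:
ForEach ($Group in $FilesToZip)
{
	# Name is the grouping key; Group is the array of grouped items
	Write-Host $Group.Name
	$Group.Group | ForEach-Object { Write-Host $_ }
}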


Mierdaan
Sep 14, 2004

Pillbug
Why does this work
code:
get-aduser -filter * -properties name,passwordexpired | where-object {$_.passwordexpired -eq $true}
but this does not?
code:
get-aduser -properties name,passwordexpired -filter {passwordexpired -eq $true}
And why does this return all aduser objects, even ones whose passwords are legitimately expired?:
code:
get-aduser -properties name,passwordexpired -filter {passwordexpired -eq $false}

gbeck
Jul 15, 2005
I can RIS that

Mierdaan posted:

Why does this work
code:
get-aduser -filter * -properties name,passwordexpired | where-object {$_.passwordexpired -eq $true}
but this does not?
code:
get-aduser -properties name,passwordexpired -filter {passwordexpired -eq $true}
And why does this return all aduser objects, even ones whose passwords are legitimately expired?:
code:
get-aduser -properties name,passwordexpired -filter {passwordexpired -eq $false}

The -Filter parameter on Get-ADUser is translated into an LDAP query on the domain controller, and there isn't actually an LDAP attribute named 'passwordexpired'; it's a calculated property. Using LDAP alone, you have to look up the domain's max password age and then work out expiry from the 'pwdlastset' attribute.
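If you just want the expired accounts and don't feel like doing the pwdlastset math yourself, Search-ADAccount computes that state for you. A sketch, assuming the AD module is loaded:
code:
Search-ADAccount -PasswordExpired -UsersOnly | Select-Object Name, SamAccountName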

-Dethstryk-
Oct 20, 2000
Does anyone know the optimal (if any) way to store date/time information in Excel's date-time code, so that when I write out CSVs from log script utilities I can just open them up and Excel can easily know what they are? If I didn't need the time I could import the data pretty easily, but I can't figure out a way to import the time part of it.

Edit: Just so I'm clear, the format I'm referencing is Excel's serial format, which looks like this: 41284.7083333333 is January 10, 2013 5:00pm.

-Dethstryk- fucked around with this message at 00:23 on Jan 11, 2013

stubblyhead
Sep 13, 2007

That is treason, Johnny!

Fun Shoe

-Dethstryk- posted:

Does anyone know the optimal (if any) way to store date/time information in Excel's date-time code, so that when I write out CSVs from log script utilities I can just open them up and Excel can easily know what they are? If I didn't need the time I could import the data pretty easily, but I can't figure out a way to import the time part of it.

Edit: Just so I'm clear, the format I'm referencing is Excel's serial format, which looks like this: 41284.7083333333 is January 10, 2013 5:00pm.

If you have it as a DateTime object you can use the ToOADate() method to convert it.

code:
PS > $now = get-date

PS > $now.ToOADate()
41284.7292422801
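Going the other way, [DateTime]::FromOADate turns an Excel serial number back into a DateTime, which is handy for sanity-checking the round trip (display format will depend on your culture settings; this is what an en-US box shows):
code:
PS > [DateTime]::FromOADate(41284.7083333333)
Thursday, January 10, 2013 5:00:00 PM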

Powdered Toast Man
Jan 25, 2005

TOAST-A-RIFIC!!!
I have a series of commands that I'd like to run sequentially by scheduling a script file to run on a server at a specific time. Is there anything I need to do to make them run this way (first one runs, second one runs after the first is finished, etc) aside from just putting them in a PS1 file sequentially?

(it's PS1 because it's an Exchange 2007 server, I'm scheduling some mailbox moves after hours so I don't have to loving wake up at 2 AM just to start a Powershell command)

Mierdaan
Sep 14, 2004

Pillbug
No, that's pretty much how PowerShell scripts work. Just make sure you're accounting for your expected amount of baditems - it sucks to schedule a bunch of mailbox moves overnight and wake up to find they all failed due to hitting a single baditem (the default BadItemLimit in Exchange 2007's move-mailbox is 0).

edit: I know I keep harping on upgrading, but seriously, mailbox moves are so much nicer in 2010/2013.

Mierdaan fucked around with this message at 17:51 on Jan 17, 2013

AreWeDrunkYet
Jul 8, 2006

peak debt posted:

import-csv creates rows of columns. If you do a foreach over them, you still get an array of columns for each loop, even if it is a one-member array, and get-aduser expects a string.

You either have to:
$users = get-aduser -filter * -searchbase $variable.ColumnName -properties DisplayName,distinguishedname,description,mail

or use
foreach ($variable in (get-content variables.csv))
to loop over each line as string

That was the issue, thanks. Always the little things that catch you.

Powdered Toast Man
Jan 25, 2005

TOAST-A-RIFIC!!!

Mierdaan posted:

No, that's pretty much how PowerShell scripts work. Just make sure you're accounting for your expected amount of baditems - it sucks to schedule a bunch of mailbox moves overnight and wake up to find they all failed due to hitting a single baditem (the default BadItemLimit in Exchange 2007's move-mailbox is 0).

edit: I know I keep harping on upgrading, but seriously, mailbox moves are so much nicer in 2010/2013.

Upgrading is unfortunately not in the budget. I brought it up again and was told that the inconveniences (such as offline mailbox moves) of 2007 only affect me, therefore I could basically eat a dick. They don't give a gently caress if I had to do poo poo in the middle of the night.

adaz
Mar 7, 2009

Powdered Toast Man posted:

I have a series of commands that I'd like to run sequentially by scheduling a script file to run on a server at a specific time. Is there anything I need to do to make them run this way (first one runs, second one runs after the first is finished, etc) aside from just putting them in a PS1 file sequentially?

(it's PS1 because it's an Exchange 2007 server, I'm scheduling some mailbox moves after hours so I don't have to loving wake up at 2 AM just to start a Powershell command)

Powershell workflows (3.0) are an interesting concept which might match up to your needs. Decent Scripting Guy post on the basics of them: https://blogs.technet.com/b/heyscriptingguy/archive/2012/12/26/powershell-workflows-the-basics.aspx?Redirected=true

Jelmylicious
Dec 6, 2007
Buy Dr. Quack's miracle juice! Now with patented H-twenty!
If you just put c:\script.ps1 in the task scheduler, it will execute it with the default association for ps1 files. So, your server will be very happy to oblige and just open Notepad for you. Make sure you use powershell -file 'c:\script.ps1' to have it actually execute. Also, don't forget to put plenty of logging in the script, since you won't be monitoring it.

See this for an example task: http://blogs.technet.com/b/heyscrip...ell-script.aspx
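For the logging part, Start-Transcript is a low-effort way to capture everything the script writes; the log path here is just an example:
code:
# top of the scheduled script
Start-Transcript -Path 'C:\logs\nightly-moves.log' -Append

# ... script body goes here ...

Stop-Transcript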

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
Hello my powershell expert friends, I have a question:

I'm doing some deployment automation using TFS and Powershell. The process is as follows:
TFS builds the app, drops it somewhere, and then uses PS remoting to execute a script on the deployment target using the TFS build service account. The account has local admin access on the target server.

One of the steps is to install some prerequisites if they're not present. For example, .NET 4.0. There are some silent install options, but no matter what, it prompts for UAC elevation. I can't disable UAC on these machines. It works okay if I make the TFS build service account a domain admin, but that's obviously not a valid solution either. I've come up with nothing that works on Google so far.

Any ideas?

Note: I'm aware that this entire thing is stupid, and that a far more sane approach would be to build an image with all of the requirements preinstalled and then use that image in the event that a new environment is ever added. For incomprehensible reasons, that is not an acceptable solution. I've spent far more time trying to find a workaround than it would have taken to manually build out the client's 6 environments.

adaz
Mar 7, 2009

Domain admin likely works because you have a GPO that disables UAC for your domain admins (guessing, but that's what makes sense).

As far as I know the only way around UAC prompts is to disable UAC. Either temporarily or permanently, that's kind of the whole point of it. There are ways to temporarily disable it and re-enable it using various regkeys that won't prompt for elevation. They are essentially security holes but it's possible.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

adaz posted:

domain admin likely works because you have a GPO that disables UAC for your domain admins (guessing, what makes sense).

As far as I know the only way around UAC prompts is to disable UAC. Either temporarily or permanently, that's kind of the whole point of it. There are ways to temporarily disable it and re-enable it using various regkeys that won't prompt for elevation. They are essentially security holes but it's possible.

Any references on the registry hack path? I tried the LocalAccountTokenFilterPolicy one, with no success.

adaz
Mar 7, 2009

Ithaqua posted:

Any references on the registry hack path? I tried the LocalAccountTokenFilterPolicy one, with no success.

Let me check with our application packager folks tomorrow, I don't know which one they have used/use.

gbeck
Jul 15, 2005
I can RIS that
You could use the task scheduler to run the install with highest privileges. You can use schtasks to create the task from the command line, then schtasks /run to execute it. For some odd reason I am thinking you might have to import the job from an XML file to get "run with highest privileges" enabled, but that isn't really a big deal.
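Something along these lines from the command line (task name and installer path are made up for the example):
code:
schtasks /create /tn "InstallPrereq" /tr "C:\installers\dotNetFx40_Full_x86_x64.exe /q /norestart" /sc once /st 23:59 /ru SYSTEM /rl HIGHEST
schtasks /run /tn "InstallPrereq"
Running as SYSTEM sidesteps the UAC prompt entirely, since SYSTEM isn't subject to elevation.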

adaz
Mar 7, 2009

Checked with my buddy on the package team on this, apparently it's not an issue for our SCCM installs because we're doing what this article says to do:

http://csi-windows.com/blog/all/27-csi-news-general/335-how-to-silence-the-uac-prompt-for-msi-packages-for-non-admins

And then on certain rare occasions where that won't work one of our SCCM install accounts has UAC disabled via GPO. Probably not of much help, sorry man.

adaz fucked around with this message at 21:06 on Jan 18, 2013

Powdered Toast Man
Jan 25, 2005

TOAST-A-RIFIC!!!

Jelmylicious posted:

If you just put c:\script.ps1 in the task scheduler, it will execute it with the default association for ps1 files. So, your server will be very happy to oblige and just open Notepad for you. Make sure you use powershell -file 'c:\script.ps1' to have it actually execute. Also, don't forget to put plenty of logging in the script, since you won't be monitoring it.

See this for an example task: http://blogs.technet.com/b/heyscrip...ell-script.aspx

:ughh:

I didn't figure this out until the next day, when the script didn't run as scheduled. Sigh.

-Dethstryk-
Oct 20, 2000

stubblyhead posted:

If you have it as a DateTime object you can use the ToOADate() method to convert it.

I'll be damned, that was way easier than I expected. Thank you so much.

I've been learning more and more with PowerShell, and just the utilities/scripts I've been able to put into play already make me happy.

capitalcomma
Sep 9, 2001

A grim bloody fable, with an unhappy bloody end.
I feel like such a noob right now: just dipping my toes in to Powershell and I'm already stuck.

I'm trying to poll information about Java installs on a group of PC's, but I cannot get PS to contact the workstations with this script. When I run the code, I get "The RPC server is unavailable."

code:
$computerlist = Get-ADcomputer -Filter * -SearchBase "(OU full of computers)" | Select-Object Name
$computerlist | ForEach-Object { Get-WmiObject win32_product -ComputerName $_ -Filter "name like '%java%'" }
If I actually look at $computerlist it's full of computer names, all of them valid. I can manually run the bracketed command, with one of the computer names:


code:
Get-WmiObject win32_product -ComputerName %COMPUTER% -Filter "name like '%java%'"
...and it runs fine. The computer responds and produces a list of any Java products installed on the PC.

I can't for the life of me figure out what I'm doing wrong. I've tried storing the list of computers in a text file and cat'ing it, and I get the same error. Am I formatting the list wrong? What am I missing?

capitalcomma
Sep 9, 2001

A grim bloody fable, with an unhappy bloody end.

Welp, fixed it, moments after posting a help request.

But I'm confused on why I needed to do it the way I did it. I ended up needing to change how the names were stored:

code:
| Select-Object Name
Was removed from the first line, and I added in:

code:
$_.Name
In the -ComputerName parameter on the second line.

I'm confused. With my original script, wasn't I storing the names properly? With "Select-Object Name", wasn't the script selecting out the list of names as strings, that could be fed to Get-WmiObject?

capitalcomma fucked around with this message at 21:46 on Jan 25, 2013

Mierdaan
Sep 14, 2004

Pillbug

Sounder posted:

code:
| Select-Object Name

This should be your hint here. What you were doing was returning a list of objects, not strings. Those objects have a property named "name", which is a string, as well as some methods (e.g. ToString, GetHashCode). You can check that by

code:
Get-ADcomputer COMPUTERNAME | Select-Object Name | get-member
When you explicitly use $_.name, you're pulling out the string property you actually care about, rather than an object. In the second line, you were trying to pass objects to Get-WMIObject's computername parameter, rather than the string values it expected.
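Alternatively, -ExpandProperty hands you the bare strings up front, so the rest of the original pipeline works unchanged:
code:
$computerlist = Get-ADComputer -Filter * -SearchBase "(OU full of computers)" | Select-Object -ExpandProperty Name
$computerlist | ForEach-Object { Get-WmiObject win32_product -ComputerName $_ -Filter "name like '%java%'" }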

Mierdaan fucked around with this message at 21:44 on Jan 25, 2013

AreWeDrunkYet
Jul 8, 2006

How do I open an explorer window to a drive on a network path? That is, let's say I want a script to pop up an explorer window defaulting to \\COMPUTERNAME\c$.

I found this from searching around:
code:
$path = "\\COMPUTERNAME\c$"
invoke-expression "explorer '/select,$path'"
The problem is that no matter how I run this, I always just get an explorer window pointing to \\COMPUTERNAME.

I've tried:
code:
$path = "`\`\COMPUTERNAME`\c`$"
$path = "\\COMPUTERNAME\c`$"
$path = "\\COMPUTERNAME\c`$\"
And every other variation of that path I could think of in the hopes of making it parse correctly. Every time, it just goes to \\COMPUTERNAME. What am I doing wrong?

capitalcomma
Sep 9, 2001

A grim bloody fable, with an unhappy bloody end.

Mierdaan posted:

When you explicitly use $_.name, you're pulling out the string property you actually care about, rather than an object. In the second line, you were trying to pass objects to Get-WMIObject's computername parameter, rather than the string values it expected.

Aaaah, I had it rear end-backwards then. I thought my first method pulled strings and the other method pulled objects. Thank you for clarifying. I obviously have a lot more reading to do.

ZeitGeits
Jun 20, 2006
Too much time....

AreWeDrunkYet posted:

How do I open an explorer window to a drive on a network path? That is, let's say I want a script to pop up an explorer window defaulting to \\COMPUTERNAME\c$.

I found this from searching around:
code:
$path = "\\COMPUTERNAME\c$"
invoke-expression "explorer '/select,$path'"
The problem is that no matter how I run this, I always just get an explorer window pointing to \\COMPUTERNAME.

I've tried:
code:
$path = "`\`\COMPUTERNAME`\c`$"
$path = "\\COMPUTERNAME\c`$"
$path = "\\COMPUTERNAME\c`$\"
And every other variation of that path I could think of in the hopes of making it parse correctly. Every time, it just goes to \\COMPUTERNAME. What am I doing wrong?

code:
$path = "\\localhost\c$"
invoke-expression "explorer $path"
This code works for me. Am I misunderstanding your question?

AreWeDrunkYet
Jul 8, 2006

ZeitGeits posted:

code:
$path = "\\localhost\c$"
invoke-expression "explorer $path"
This code works for me. Am I misunderstanding your question?

Weird, I tried it your way, and (just using the command prompt for simplicity's sake here)

code:
explorer "\\localhost\c$"
works just fine, while

code:
explorer /select,"\\localhost\c$"
just brings up \\localhost.

No idea why, but at least the workaround is simple. Thanks.

e: On closer inspection, the command I was looking for was

code:
explorer /root,"\\localhost\c$"
which also works just fine. Misinterpreted what the /select switch does.

gbeck
Jul 15, 2005
I can RIS that
You can also use Invoke-Item to open a folder.
code:
Invoke-Item "\\localhost\c$"
ii "\\localhost\c$"

mattisacomputer
Jul 13, 2007

Philadelphia Sports: Classy and Sophisticated.

I'm trying to import output from a program into PowerShell as an array. Right now it spits information out into a colon deliminated list like this:

code:
Virtual Drive: 1 (Target Id: 1)
Name                :VD_1
RAID Level          : Primary-5, Secondary-0, RAID Level Qualifier-3
Size                : 1.086 TB
Parity Size         : 278.464 GB
State               : Optimal
Strip Size          : 128 KB
Number Of Drives    : 5
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy   : Disk's Default
Encryption Type     : None
PI type: No PI
What I'd like to do is read that in as an array with each line being two columns in the array, so that I could call the data as such: $virtualdisk1.State would return Optimal. I think a hash table or something would be ideal, but I just can't get my head around it. The other issue is that this is an excerpt from a much larger output. Here is my code where I locate it:

code:
$srvVDDBInfoRet = Select-String -path $srvDiskInfo -pattern $srvVDDBSrchStrng -context 0,18 
Where $srvDiskInfo is just c:\logs\disklog.txt. Anyone have any idea? :(

adaz
Mar 7, 2009

Text parsing is always a bitch, but for those lines something like this would probably work. Hard to tell without seeing the whole long glob of stuff you have!


code:
$formatTable = @{}
foreach($line in $strlines) {
    $splitLine = $line.split(":")

    # this returns us an array w/ two elements (probably, but we'd better code for more!) So our example line will be
    # Virtual Drive: 1 (Target Id: 1)
    #
    # we now have splitLine as an array of 4 elements:
    # 0 Virtual Drive
    # 1 1
    # 2  (Target ID
    # 3 1)
    #
    # a more typical line would be Name                :VD_1
    # which would give us this array:
    # 0  Name
    # 1  VD_1

    if($splitLine.count -gt 1) {
       # first element is always going to be our key, I assume you don't want the white space so we trim that out and add it.
       $key = $splitLine[0].Trim()
       # make sure we aren't trying to add an empty key which will error out on us...
       if($key -ne [string]::empty) {
           $value = $splitLine[1]
           # add back in any extra : we stripped (e.g. the Target Id line).
           for($I = 2; $I -lt $splitLine.count; $I++) {
               $value = $value + ":" + $splitLine[$I]
           }
           $formatTable.add($key, $value.Trim())
       }
    } else {
      # Line with no : ! Unable to process, probably better write out an error or something here.
      write-host "Unable to process the line $line - no : found in input string!"
    }
}

vanity slug
Jul 20, 2010

e: Identity is not Identify. Oops.

vanity slug fucked around with this message at 10:41 on Jan 29, 2013

Jethro
Jun 1, 2000

I was raised on the dairy, Bitch!

adaz posted:

first solution

String.Split has a count parameter which you can use to limit the number of elements in the array. Also, if you use the -split operator (PS 2.0+), you can use regexes.
code:
$formatTable = @{}
foreach($line in $strlines) {
    $splitLine = $line -split "\s*:\s*", 2
    # this returns an array with a maximum of 2 elements.
    # Also, whitespace on either side of the : is discarded. 

    if($splitLine -is [system.array] -and $splitLine.Count -eq 2) {
       # first element is always going to be our key, I assume you don't want the white space so we trim that out and add it.
       # Still trimming in case the key has leading whitespace
       $key = $splitLine[0].Trim()
       $value = $splitLine[1]
       # make sure we aren't trying to add an empty key which will error out on us...
       if($key -ne [string]::empty) {
           $formatTable.add($key,$value)
       }
    }else {
      # Line with no : ! Unable to process, probably better write out an error or something here.
      write-host "Unable to process the line $line no : found in input string!"
    }
}
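Once the hashtable is filled, you can get the $virtualdisk1.State style of access mattisacomputer was after by wrapping it in a PSObject. A quick sketch:
code:
$virtualdisk1 = New-Object PSObject -Property $formatTable
$virtualdisk1.State
# Optimal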

adaz
Mar 7, 2009

Jethro posted:

String.Split has a count parameter which you can use to limit the number of elements in the array. Also, if you use the -split operator (PS 2.0+), you can use regexes.


:psyduck: how did I never look at the overloads list for split. good lord, thanks!

mattisacomputer
Jul 13, 2007

Philadelphia Sports: Classy and Sophisticated.

That is beautiful. Here is the full log file I'm trying to parse: http://pastebin.com/cSLYgyk9 It's basically a dump of a raid controller's settings that I get by calling the raid controller's CLI utility.

So rather than the cumbersome way I was doing it, is there an easier way to just parse the log file for the virtual disks and make each one an array?

-Dethstryk-
Oct 20, 2000
This is probably simple, but I'm still new.

I added the ability to use an XML configuration for a script I'm working on to ease deployment. The problem is that when I run the script using powershell.exe -file, either through task scheduler or run, it sets the working directory to C:\ instead of the folder the script is in. If I run the script from the ISE or from the right-click context menu, it uses the script's directory.

Basically, when I run the script and pull the config file even with .\, it defaults to looking at it on the C: root.

I was able to get around this after some Googling by adding a WindowsPowerShell directory to the user profile documents folder, with a profile.ps1 file that included a Set-Location switch. That makes everything work, but is there something I am missing here?

The obvious solution to me would be to put the script directory in the config file, but now I'm just curious about why this is happening.

Edit: Oh yeah. I can't put the script directory in the config file when I can't read the config file in the first place. I'm dumb.

-Dethstryk- fucked around with this message at 04:05 on Feb 2, 2013

Titan Coeus
Jul 30, 2007

check out my horn

-Dethstryk- posted:

This is probably simple, but I'm still new.

I added the ability to use an XML configuration for a script I'm working on to ease deployment. The problem is that when I run the script using powershell.exe -file, either through task scheduler or run, it sets the working directory to C:\ instead of the folder the script is in. If I run the script from the ISE or from the right-click context menu, it uses the script's directory.

Basically, when I run the script and pull the config file even with .\, it defaults to looking at it on the C: root.

I was able to get around this after some Googling by adding a WindowsPowerShell directory to the user profile documents folder, with a profile.ps1 file that included a Set-Location switch. That makes everything work, but is there something I am missing here?

The obvious solution to me would be to put the script directory in the config file, but now I'm just curious about why this is happening.

Edit: Oh yeah. I can't put the script directory in the config file when I can't read the config file in the first place. I'm dumb.

Look into using the variable $MyInvocation and the property $MyInvocation.MyCommand.Path (type literally that). Long stackoverflow answer: http://stackoverflow.com/a/6985381/965648
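For the config-file case, that usually boils down to something like this (config.xml is just a placeholder name):
code:
# resolve the script's own folder, then load the config relative to it
$scriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
[xml]$config = Get-Content (Join-Path $scriptDir 'config.xml')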

Fruit Smoothies
Mar 28, 2004

The bat with a ZING
I'm playing with the PS 3 Scheduled Task functions and trying to get a task to start initially on startup, and then every 15 mins indefinitely after that. However, the New-ScheduledTaskTrigger is piss poor and this seems impossible.

Current code:

code:
$taskAction = New-ScheduledTaskAction -Execute "PowerShell.exe" -Argument ("-nologo -noprofile -noninteractive -file '" + $folder + "/worker1.ps1'");
$taskTrigger = New-ScheduledTaskTrigger -AtStartup;
$taskTrigger2 = New-ScheduledTaskTrigger -RepetitionInterval (New-TimeSpan -Minutes 15) -RepetitionDuration (New-TimeSpan -Days 100);
$task = Register-ScheduledTask -TaskName "TaskNameTest" -Trigger $taskTrigger -Action $taskAction -User "NT AUTHORITY\SYSTEM" -RunLevel Highest;
Set-ScheduledTask TaskNameTest -Trigger $taskTrigger2;
But I have also tried the methods outlined here: http://stackoverflow.com/questions/12768769/powershell-v3-new-jobtrigger-daily-with-repetition


Please don't make me use schtasks :suicide:

EDIT: gently caress this, I just made schtasks import the XML file. :munch:

Fruit Smoothies fucked around with this message at 21:27 on Feb 3, 2013

-Dethstryk-
Oct 20, 2000

Titan Coeus posted:

Look into using the variable $MyInvocation and the property $MyInvocation.MyCommand.Path (type literally that). Long stackoverflow answer: http://stackoverflow.com/a/6985381/965648

Thank you so much. That solved the problem exactly.

capitalcomma
Sep 9, 2001

A grim bloody fable, with an unhappy bloody end.
Apologies as this is more of a sysadmin question than a programming question, but...how do you guys implement/deploy/distribute your scripts and functions?

I'm writing up a script that will create user accounts. It basically just automates stuff like creating the user account (along with all of the group memberships, etc.), setting up profile folders, mailboxes, stuff like that. It's coming along, and I'd like to make it accessible to the rest of the IT department.

Do you recommend deploying it via Group Policy? Maybe put it in a default path on their machines? A DFS share? I don't have much experience distributing code like this.

capitalcomma fucked around with this message at 23:22 on Feb 5, 2013

stubblyhead
Sep 13, 2007

That is treason, Johnny!

Fun Shoe
How would you guys handle this kind of situation? I want to randomly arrange a collection. I can do it like this:
code:
$random = Get-Random -InputObject $myCollection -Count $myCollection.Length
but it doesn't work if $myCollection only has a single element. I'm not sure if this is how the PowerCLI cmdlets are written or if it's just how Powershell works, but the Get-Cluster cmdlet will return a single object if there's only one that matches the criteria, and a collection of them if there are multiple. The Cluster object doesn't have a Length property, so Get-Random throws an error because the parameter gets passed a null value. I'd prefer not to do a check on the length before doing this if possible. Is there any way I can make it handle a single object as a collection with length 1, or is there just a better way to approach this altogether?


adaz
Mar 7, 2009

stubblyhead posted:

How would you guys handle this kind of situation? I want to randomly arrange a collection. I can do it like this:
code:
$random = Get-Random -InputObject $myCollection -Count $myCollection.Length
but it doesn't work if $myCollection only has a single element. I'm not sure if this is how the PowerCLI cmdlets are written or if it's just how Powershell works, but the Get-Cluster cmdlet will return a single object if there's only one that matches the criteria, and a collection of them if there are multiple. The Cluster object doesn't have a Length property, so Get-Random throws an error because the parameter gets passed a null value. I'd prefer not to do a check on the length before doing this if possible. Is there any way I can make it handle a single object as a collection with length 1, or is there just a better way to approach this altogether?

I believe this will work; as long as whatever you are checking implements the collection interface for arrays (which 99.99999999% should), it'll be OK:

code:
if($random -is [system.array]) {

}else {


}
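Another option: skip the branch entirely and force a collection with the array subexpression operator, which wraps a lone object in a one-element array and leaves real arrays alone:
code:
$myCollection = @(Get-Cluster)
$random = Get-Random -InputObject $myCollection -Count $myCollection.Count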
