FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Also, don't use curl; use Invoke-RestMethod, or Invoke-WebRequest if you can't use Invoke-RestMethod.

And PowerShell can be weird with nested JSON: by default ConvertTo-Json only serializes 2 levels of objects, but the cmdlet has a -Depth flag if your JSON is more nested than that.
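To illustrate the -Depth behavior (a minimal sketch with made-up sample data):

```powershell
# A hashtable nested three levels deep (made-up sample data)
$data = @{ a = @{ b = @{ c = 1 } } }

# Default -Depth is 2, so deeper levels get truncated
$data | ConvertTo-Json

# Raising -Depth keeps the full structure intact
$data | ConvertTo-Json -Depth 5
```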

FISHMANPET
Assuming you're talking about this module: https://github.com/nblagoev/Gmail.ps
It actually includes some code to read a credential from the Credential manager. It looks like if you create a "generic" type credential under the "Windows Credential" store in Credential manager, and set the address to "mail.ps:default" it'll grab that credential by default every time if you don't specify a credential on the command line. So that's a safe way to save your credentials.

FISHMANPET
Trying to construct a function with some parameter sets; not sure if what I want to do is actually possible. This is a "get" function. By default I want it to get "all" the fields, but I want to allow using -AllFields:$false to only get "default" values. I ALSO want, in a separate parameter set, a -Fields parameter where the user can specify which fields they want. It wouldn't make sense to pass AllFields and Fields in the same command, so I'd like to use parameter sets to prevent that... So I wrote this, but it doesn't seem very elegant: checking if AllFields was specified and acting on it, else checking if Fields was specified and acting on that, else building the default URL. And using a DefaultParameterSetName that doesn't exist.
code:
function Get-GFilePermissions
{
    [CmdletBinding(DefaultParameterSetName = 'Default')]
    Param
    (
        [Parameter(Mandatory)]
        [string]$accessToken,

        #[Alias("spreadSheetID")]
        [Parameter(Mandatory)]
        [string]$fileID,

        # Parameter help description
        [Parameter(Mandatory,ParameterSetName="All")]
        [switch]$allFields,

        # Parameter help description
        [Parameter(Mandatory,ParameterSetName="Specify")]
        [string]
        $fields
    )

    Begin
    {
        if ($PSBoundParameters.ContainsKey("AllFields")) {
            if ($allFields) {
                $uri = "https://www.googleapis.com/drive/v3/files/$fileID/permissions/?fields=*"
            } else {
                $uri = "https://www.googleapis.com/drive/v3/files/$fileID/permissions/"
            }
        } elseif ($PSBoundParameters.ContainsKey("fields")) {
            $uri = "https://www.googleapis.com/drive/v3/files/$fileID/permissions/?fields=$fields"
        } else {
            $uri = "https://www.googleapis.com/drive/v3/files/$fileID/permissions/?fields=*"
        }
        $headers = @{"Authorization"="Bearer $accessToken"}
    }

    Process
    {
        Invoke-RestMethod -Method Get -Uri $uri -Headers $headers
    }
    End{}
}
I guess I could also do a switch statement on $PSCmdlet.ParameterSetName but that doesn't change the meat of the problem which is that I'm having to define my "default" twice. I also don't know which is better coding practice, what I've got above or switch on ParameterSetName.
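For comparison, here's roughly what the switch on $PSCmdlet.ParameterSetName would look like, pulled out into a standalone helper so the logic is testable (Resolve-PermissionUri is a hypothetical name, not part of the module):

```powershell
# Hypothetical helper mirroring the Begin block above; the set names
# 'All', 'Specify', and 'Default' match the parameter sets in the function
function Resolve-PermissionUri {
    param([string]$SetName, [string]$FileID, [bool]$AllFields, [string]$Fields)
    $base = "https://www.googleapis.com/drive/v3/files/$FileID/permissions/"
    switch ($SetName) {
        'All'     { if ($AllFields) { "$base?fields=*" } else { $base } }
        'Specify' { "$base?fields=$Fields" }
        'Default' { "$base?fields=*" }
    }
}
```

The "default defined twice" problem is still there ('All' with $true and 'Default' build the same URL); it's just concentrated in one switch.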

FISHMANPET

Mario posted:

$allFields is a mandatory switch, so it will never be $false when invoked with the "All" parameter set(e: nope, can be forced to false with -allFields:$false, but it seems awkward to do that. Still you can get this "switch always true" behavior with a validation set). Maybe have a third parameter set "Default"?

Maybe I'm not quite understanding, because if you set the ValidateSet to only $True then you can't do -param:$false. Now granted, setting switches to true by default is apparently against best practice so I guess it's kind of weird that I'm asking "what's the best way to follow best practices when I'm violating best practices."

The reason I'm doing this is my team has a publicly available PowerShell module to interact with data from Google Sheets: https://github.com/umn-microsoft-automation/UMN-Google but I'm sure very few, if any, people are using it. And the function I'm working on, Get-GFilePermissions, is currently so useless that even if you're using the module, you're probably not using that function. But it's there, and it does something, so it's possible that someone is using it. So I'm changing the function to be slightly smarter, and debating how much I want to preserve the current format of the data it returns vs improving it to be actually useful (while making it easy for someone to fix their code if they want to keep using it the old dumb way).

I'm most assuredly thinking way too hard about this dumb cmdlet because I could make it work and make it pull all the data by adding 9 characters but damnit I wanna do it right!

FISHMANPET
I decided I was probably overthinking this, and also not really keeping in line with the philosophy of the rest of the modules, so I went with this. Generally speaking, with the rest of the cmdlets, we'll grab everything and return it to you and let you filter it locally rather than trying to construct a REST call to get only what you specify. So if you only want some of the fields, just get them all and select-object those properties.
By default it will append /?fields=* to the URI, unless you specify -DefaultFields, in which case it leaves that out. It also lets you specify a PermissionID, which gets you one specific permission object rather than all permissions for a file.
code:
function Get-GFilePermissions
{
    [CmdletBinding()]
    Param
    (
        [Parameter(Mandatory)]
        [string]$accessToken,

        #[Alias("spreadSheetID")]
        [Parameter(Mandatory)]
        [string]$fileID,

        # Parameter help description
        [Parameter()]
        [string]
        $permissionID,

        # Parameter help description
        [Parameter()]
        [switch]
        $DefaultFields
    )

    Begin
    {
        $uri = "https://www.googleapis.com/drive/v3/files/$fileID/permissions"
        if ($permissionID) {
            $uri += "/$permissionID"
        }
        if (-not $DefaultFields) {
            $uri += "/?fields=*"
        }
        $headers = @{"Authorization"="Bearer $accessToken"}
    }

    Process
    {
        Invoke-RestMethod -Method Get -Uri $uri -Headers $headers
    }
    End{}
}

FISHMANPET
Yeah, there are some VMware cmdlets that have -Confirm be true by default, and to override that you need to specify -Confirm:$false, so I thought I'd be clever and do the same, but according to PSScriptAnalyzer setting switches to true by default is bad practice, so I decided to stop overthinking it.

FISHMANPET
$ContentType.GetType() will tell you what that object actually is. If you just "call" the variable in your shell or use write-host to print it to the console it will convert it to a string to display, but that error means it's probably not a string. It should have a .ToString() method if you really want to do string manipulation on it but you may be able to do more with the original object in its native type.

You can also pipe it into Get-Member ($ContentType | Get-Member) and it will show you the type as well as all properties and methods of the object.
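For example, with a stand-in object (a DateTime here, since I don't have your $ContentType to play with):

```powershell
# Stand-in object that is not a string, to show the inspection steps
$obj = Get-Date
$obj.GetType().FullName   # the actual .NET type name (System.DateTime)
$obj.ToString()           # the string representation you see when printing
$obj | Get-Member         # the type plus all properties and methods
```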

FISHMANPET
Hard to say without having an object of that type to play around with in the console, but everything is an object, so if it's got a Value property, you can do "$ContentType.Value | get-member" and see what's there. And so on and so on, it's objects all the way down.

Also, you wouldn't use () after Values, because Values is a property, not a method. GetType() is a method, so it needs the parentheses (because methods can potentially take parameters), but properties don't take parameters, so you'd get a failure if you tried Values().

FISHMANPET
You should figure out how to get that unblocked, it's like saying "work blocks Ubuntu update servers how do I apt-get install anything"

FISHMANPET

Warbird posted:

They also block Github and Chocolatey. I was explicitly hired to work on the latter and have to use a personal laptop/wayback machine to reference docs. So not gonna happen.

To be clear that's the stupidest loving thing in the world and if I were you I'd be looking for another job if they've hired you to do something explicitly and then explicitly prevented you from doing that thing. Because yeah as you've figured out there's no good way to reinvent packaging.

FISHMANPET
This works for us, and it looks like you're doing the same thing.
code:
Invoke-RestMethod -Method Get -Uri $uri -Headers @{"Authorization"="Bearer $accessToken"} -OutFile $outFilePath
https://github.com/umn-microsoft-automation/UMN-Google/blob/master/UMN-Google.psm1#L328

You may be able to get more details about the error with this: https://github.com/umn-microsoft-automation/UMN-Common/blob/master/UMN-Common.psm1#L380

But if it's a 404 that sounds like a URL issue not an auth issue.

FISHMANPET
From my formal computer science education where I learned C and Python etc., I've always hated doing for loops with ForEach-Object, though I warmed to the foreach ($i in $list) construct. But now I have more ammunition against ForEach-Object: you can't break out of the loop with the break statement the way you can in a foreach loop.

edit: Apparently return will do it in ForEach-Object, but I'm still very suspicious...

FISHMANPET fucked around with this message at 16:09 on Mar 25, 2019

FISHMANPET
My favorite part of coming across a ForEach-Object loop is when the first command in it is $usefulName = $_

I get that there are memory benefits to using ForEach-Object vs foreach, since it streams objects through the pipeline, but that only matters when your array is so huge that loading it into memory would be burdensome. For my 80 element array a foreach loop should be just fine.

FISHMANPET
In this particular case, what's actually returned doesn't matter as this isn't something that's passing into a pipeline, and continuing the loop is actually what I want (I think I'm trying to emulate continue, not break upon further reflection).

Also your demonstration is way nicer than what I puked into a single line.
code:
1..10 | ForEach-Object {$_; if ($_ -eq 5) {write-host "break";return;write-host "whoops"}}; write-host "i got here"
In my case I want to see "break" written out but not "whoops" and I still want to see all numbers 1 through 10, plus the final execution at the end.

FISHMANPET fucked around with this message at 16:31 on Mar 25, 2019

FISHMANPET
Yeah every time I use break or continue I have to spend 5 minutes re-learning what each of them mean.

I put break in my production code that's using a ForEach-Object loop and it didn't do what I wanted, so I tried to solve the problem of "replicate break in ForEach-Object" because I assumed past me had used the correct keyword but I should know better than to trust that jerk.

FISHMANPET
I'm writing an Azure runbook that's going to validate a spreadsheet of customer data to make sure the data meets various conditions (make sure every email address is actually an email address, make sure the data matches up with our inventory, etc). It's a series of for each loops that loop through testing every row for whatever the condition is. I'd really like to pesterize this but I can't figure out any way to do it. I can break each test into a function, and I can write a pester test against that function, but I have no idea what the "best" way to actually run the tests is. The functions aren't exported, they only exist within the script. I don't want to put them in a module or something like that, because they serve no purpose outside of this runbook. I don't really want to have the functions external to the runbook in anyway because I don't want to have extra runbooks hanging out that only exist to be a source of functions.

It seems like there's no way for Pester to extract the functions from a script to test them without also executing the rest of the script, so the question becomes what's the best hacky work around.
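One hacky workaround I've seen sketched (file and function names here are made up): keep the functions in the runbook but guard the "main" part, so a Pester file can dot-source the script without executing the real work.

```powershell
# MyRunbook.ps1 -- hypothetical layout, function and file names made up
function Test-EmailAddress {
    param([string]$Address)
    # crude format check, just for illustration
    $Address -match '^[^@\s]+@[^@\s]+\.[^@\s]+$'
}

# Only run the real work when the script is executed, not dot-sourced
if ($MyInvocation.InvocationName -ne '.') {
    # ... loop over the spreadsheet rows and run the validations ...
}
```

The Pester file then does `. $PSScriptRoot\MyRunbook.ps1` and tests the functions directly.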

FISHMANPET

New Yorp New Yorp posted:

The corrected answer is "module".


FISHMANPET posted:

I don't want to put them in a module or something like that, because they serve no purpose outside of this runbook.

FISHMANPET
Doesn't making a highly specific module just for a single script go against the principle of writing functions to be as modular as possible?

FISHMANPET
That's powershell telling you to use VS Code

FISHMANPET

PierreTheMime posted:

Is there a better way of parsing AWS S3 metadata values other than using the s3api and playing with the JSON? I wrote this as it works, but it's a bit clunky:

PowerShell code:
$Objects = Get-S3Object -BucketName <INSERTBUCKETHERE>
ForEach($Object in $Objects) {
	If ($Object.Key -Match '.+?/$') {
		Write-Host "Object: $($Object.Key)"
		(aws s3api head-object --bucket $Object.BucketName --key $Object.Key | Out-String | ConvertFrom-Json).metadata | Get-Member -Type NoteProperty | ForEach-Object { Write-Host ("   {0,-30}{1,-45}" -f "Key: $($_.Name)","Value: $($_.Definition.Split("=")[1])") }
	}
}
Edit: The above is specifically to get metadata from folder keys only, as I’m using that as a configuration space for various functions.

New lines and intermediary variables are free
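Something like this, maybe (a sketch only; Get-S3Object and the aws s3api call come from the quoted post, while the function wrapper is mine, so it's defined here but not invoked):

```powershell
# The quoted one-liner unpacked with line breaks and intermediate variables
function Show-S3FolderMetadata {
    param([string]$BucketName)

    $Objects = Get-S3Object -BucketName $BucketName
    foreach ($Object in $Objects) {
        if ($Object.Key -notmatch '.+?/$') { continue }   # folder keys only
        Write-Host "Object: $($Object.Key)"

        $json     = aws s3api head-object --bucket $Object.BucketName --key $Object.Key | Out-String
        $metadata = ($json | ConvertFrom-Json).metadata

        foreach ($prop in ($metadata | Get-Member -Type NoteProperty)) {
            $value = $metadata.($prop.Name)
            Write-Host ("   {0,-30}{1,-45}" -f "Key: $($prop.Name)", "Value: $value")
        }
    }
}
```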

FISHMANPET
I did contribute about 4 or 5 lines so technically yes.

FISHMANPET
Google "umn-Google powershell" and you'll find our module that includes OAuth stuff, authenticating either as you or as a service account token. It's only in a GitHub branch right now, a branch named something like gshit in the drop down (I made a typo in the branch name and never fixed it!), but it includes code that will do a lot of the work of creating the token for you. You still have to create the "application" or whatever in the Google developer console, but once you've got your client secret and app ID and set up your redirect URI properly, it'll do the rest of the work.

FISHMANPET

Djimi posted:

code:
$items=(mydir.Items.Restrict($sFilter) | sort ReceivedTime -descending)
$count=$items.count
Write-Host "Count is " $count

The other answers are probably where you should go with this, but something that trips me up sometimes is that when there's only one object it won't be in a container, so it may not have a Count property. If you're curious about making this code work, I'd see what happens when there's only one email returned by mydir.Items.Restrict($sFilter) and check whether it has a Count property or not.

FISHMANPET fucked around with this message at 06:46 on Aug 16, 2019

FISHMANPET
You can also just force it into an array:
code:
$items=@(mydir.Items.Restrict($sFilter) | sort ReceivedTime -descending)
In this case you've already got a set of parentheses around everything, so if you just add the at sign (@) it'll cast the result into an array. For example:
code:
$items = @("one")
That's super contrived, especially because some of the built-in types properly do get a Count property even when there's a single item.
code:
"one".count
1.count
Those will both return a count of 1 like you'd expect.
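You can see the difference right in the console (quick sketch):

```powershell
# Scalars get a Count of 1 via member enumeration (PowerShell 3+)
("one").Count        # 1
(1).Count            # 1
# Wrapping in @() makes the value genuinely an array
@("one").Count       # 1
@("a", "b").Count    # 2
```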

FISHMANPET

Rocko Bonaparte posted:

code:
while((Test-Path C:\vagrant\trampoline\in) -eq $false -and $retries -lt $trampoline_retries) {

I agree with everything Toast Museum said, but I'd want to modify this test as well. I'm not 100% sure what the order of operations is for evaluating boolean conditions, and I've found plenty of times where that kind of ambiguity leads to strange results. I'd also avoid checking if something equals false, so I'd write this:
code:
while ((-not (Test-Path "C:\vagrant\trampoline\in")) -and ($retries -lt $trampoline_retries)) {
It's more parens but it's explicit about what you're checking for: that Test-Path returns false, AND that $retries is less than $trampoline_retries. Both your way and my way seem to return the same value when I test locally, but it's still something I'd call out in a code review. Also put the path in quotes just to be safe, even though it doesn't actually have any spaces in it. But it's really strange that it sometimes works and sometimes doesn't.

Since this is running through some kind of automation, I'd take a close look at permissions. Make sure whatever user this script will run as can read that file. That's usually one of my troubleshooting steps if it works for me, is test and debug against as similar an environment as the real failure as possible.

Secx posted:

code:
$file = resolve-path("\\fileserver\NRInproMenuEN.xml")
Calling the command like this works, but not the way I suspect you think it does. Cmdlets aren't called like functions in other languages; you're lucky here that the parentheses just group the string into a single argument, so it still binds correctly, but you don't need them and it's a bad habit you'll want to get out of. Now there's also stuff like [System.IO.Directory]::Exists("C:\testpath") where you do need the parentheses, because you're passing arguments into a method of the [System.IO.Directory] class, or $menu.SelectNodes('//mitem') where you're calling a method on the $menu object.
There's also a curly double quote copy/pasted from Word or some blog that auto-formats; the interpreter will understand it, but it'd be better to make sure it's a standard ASCII double quote.

Also a few tricks for working with strings and properties.
Write-Host $root$path.path won't work, it will print out the string value of $root and then the string representation of the entire path object and then literally ".path". This is a way to use properties in strings:
code:
Write-Host "$root$($path.path)"
You can do pretty much whatever you want inside that $(). You can look at properties of objects, array indexes, even entire new commands:
code:
Write-Host "Path is $(join-path $root $path.path)"
Though in your case, sticking with things like Join-Path is another good habit to get into, because it means you don't need to worry about which way the slashes go (PowerShell is cross platform, you know!).

FISHMANPET

Jethro posted:

Right, FISHMANPET was talking about ambiguity in terms of how Powershell interprets a statement versus the ways a person could interpret it.

Yes thank you, this is what I meant by ambiguity, explained better than I could probably do myself.

FISHMANPET

The Fool posted:

I'm building a PowerShell wrapper for an API and am thinking about releasing it open source as a module. Does anyone have any links to best practices/standards for doing this so I don't embarrass myself?

I've tried to find things but haven't had much luck. We've written a number of modules like that and released them as open source: https://github.com/umn-microsoft-automation.
UMN-ActiveDirectory is the one where I think we've organized the files best and used Azure DevOps for testing and publishing. UMN-Google and UMN-VMWareRA are used a ton in our production workloads, but I'm really not happy with how they do error handling (they don't do any at all), so I'm not sure I'd model any code you write after those. But like I said, I haven't found good examples of big modules that are just REST API wrappers; most of the big wrapper modules I've seen wrap .NET libraries rather than using the REST API directly.

FISHMANPET
It is kind of shocking how much it sucks managing PowerShell modules, and especially how instead of one big obvious sucky part, it feels like death by a thousand cuts where every little bit sucks just a little bit.

FISHMANPET
You've got the case where ManagedBy (I assume that's what you mean by Owner?) has multiple values, and in that case you want each user to get their own row, so the end result for a group with 2 owners would be 2 rows with identical data except for ManagedBy?

This could be done on the pipeline but I think it needs nested for loops and that gets ugly fast.
code:
$groups = Get-Group -Identity * -ResultSize Unlimited | select DisplayName,SamAccountName,WindowsEmailAddress,ManagedBy
foreach ($group in $groups) {
    foreach ($owner in $group.ManagedBy) {
        [PSCustomObject]@{
            DisplayName = $group.DisplayName
            SamAccountName = $group.SamAccountName
            WindowsEmailAddress = $group.WindowsEmailAddress
            ManagedBy = $owner
        } | Export-Csv -Path "C:\Users\Documents\PSReporting\AllGroupsManagedBy.csv" -Append
    }
}
Totally written without running. I don't think a calculated property will work because essentially you want one item to come in from the pipeline and for 2 objects to come out, and I don't think calculated properties can do that?
E: There's no problem putting 4000 objects into a variable, especially if you're only selecting a few properties. I've got a regular job that runs where one of the variables takes up 12GB of memory, and other than needing to make sure the system has enough memory there's no problems.

FISHMANPET fucked around with this message at 22:52 on Feb 10, 2020

FISHMANPET
Hmm, that doesn't seem very useful for doing anything but handing to humans to read (and I don't think calculated properties can do a dynamic number of properties anyway) so Excel may be your best bet.

If you knew what the max number of owners you would have, you could do something like @{label="Owner 1";expression={$_.ManagedBy[0]}} and so on for the max number...

FISHMANPET
This is very bad and I feel bad for even writing it, and I'm not sure if it will actually work (note the ForEach-Object block has to emit $_ again so each group keeps flowing down the pipeline), but:
code:
Get-Group -Identity * -ResultSize Unlimited | ForEach-Object {$owners = foreach ($owner in 1..$($_.ManagedBy.Count)) {@{label="Owner $owner";expression={$_.ManagedBy[$owner-1]}}}; $_} | select (@("DisplayName","SamAccountName","WindowsEmailAddress") + $owners) | Export-Csv -Path "C:\Users\Documents\PSReporting\AllGroupsManagedBy.csv"
In theory $owners will be an array of calculated property objects Owner 1, Owner 2, etc. For every group it will set it to the number of owners in that group, then select the properties you wanted plus this dynamically calculated list of calculated properties.

FISHMANPET
The PowerShell Gallery isn't even system wide, it's just set up to be added to every profile (it can even be removed from your profile). There's a verification step when adding a repository, along with a flag to skip verification; does that save any time?

FISHMANPET

Antigravitas posted:

I can't find any flag like that. I can specify the provider, but hardcoding "NuGet" doesn't speed it up at all.

Unrelated, while digging I found this file: "C:\Windows\System32\WindowsPowerShell\v1.0\Examples\profile.ps1"

Here's the full file content.

code:
#  Copyright (c) Microsoft Corporation.  All rights reserved.
#  
# THIS SAMPLE CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND,
# WHETHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.
# IF THIS CODE AND INFORMATION IS MODIFIED, THE ENTIRE RISK OF USE OR RESULTS IN
# CONNECTION WITH THE USE OF THIS CODE AND INFORMATION REMAINS WITH THE USER.
Amazing sample. You fucks.

OK, I was thinking of Register-PackageSource, which has a SkipValidate flag. Somehow package sources are related to PS repositories in ways I don't understand, but I think behind the scenes adding a PS repository will first register a package source. But Pile of Garbage did the deep digging and I don't think that would save you anything.

FISHMANPET
I don't have any specific course recommendations, but udemy.com is an interesting resource. It's pre-recorded lectures and some activities, and you can usually find a sale where you'll get a course for under $15. It's also somewhat crowd sourced (I'm not exactly sure how they "vet" their instructors), which is why I can't recommend any particular PowerShell course: I haven't gone through any of them, so I don't know how good those instructors are. I did a course on Docker/Kubernetes which was good, but that was on the strength of the instructor. And if you like classroom instruction because of the ability to interact with classmates and the instructor in real time, you won't get that from something like Udemy.

FISHMANPET
The key here I think is what "large" means in this context. From experience it's not really an issue at thousands of objects; I don't know about tens of thousands or hundreds of thousands or millions of objects.

FISHMANPET
Man, you are getting the content of that file way too many times.

FISHMANPET

Crosby B. Alfred posted:

I'm still getting weird errors with this but the next script functions as expected or there's something up with my workstation.

New-Variable : Cannot create variable OutputArray4172 because variable capacity 4096 is exceeded for this scope.


I'm referencing a freaking string named $outputarray7 :psyduck:

Is it not possible to reference dynamic variables in PowerShell in way I'm attempting to do here? I was under the impression that it possible to merely return something like $OutputArray1, $OutputArray2, $OutputArray3, etc.


My intent here was to reference one of my earlier dynamically created variables and populate it with the contents of $InputArray. This answers my earlier question, I could just use the ugly syntax of ${OutputArray[$count]} on line 14 but what's weird is that I'm getting the same error messages as Zaepho's script but the one you created runs flawlessly.

Performance or best practice standards aside, this should still work... right?

code:
# Creating a new array
$InputArray = Get-Eventlog -LogName Application
# Defining the chunk size
$chunkSize = 3
# Defining parts
$parts = [math]::Ceiling($inputArray.Length / $chunkSize)
# Splitting the array to chunks of the same size
for($count=1; $count -le $chunksize; $count++)
{
    New-Variable -Name "OutputArray[$count]" -Value @()
    for($i=0; $i -le $parts; $i++){
        $start = $i*$chunkSize
        $end = (($i+1)*$chunkSize)-1
        ${OutputArray[$count]} += ,@($InputArray[$start..$end])
    }
}
code:
Cannot create variable parts because variable capacity 4096 is exceeded for this scope.
Anyhow, thanks for the help. I'm not familiar with the last two examples but I am going reverse engineer or study these tonight. :haw:

Edit - I had the wrong variable name earlier, I'm no longer getting the variable capacity error but none of my dynamic variables are being populated.

OK, I think there's some things you're missing about variables.
One thing you may have figured out now but to be clear, in PowerShell, something like $variable indicates you're referencing a variable with the name variable. The $ isn't part of the variable name, it just indicates to the interpreter that you're referencing a variable.
Also, something to know about variables in strings. When you put a string in single quotes ('my great variable $var') PowerShell does no interpolation; the value will literally be 'my great variable $var'. If you put the same string in double quotes, it will interpolate the variable, so if $var has a value of foo, then "my great variable $var" will be my great variable foo.
So others have mentioned but in your latest example let's look at this line:
code:
New-Variable -Name "OutputArray[$count]" -Value @()
I think what you're intending is to reference the $count'th element in an array named $OutputArray, but New/Get/Set-Variable is going to look for a variable literally of that name, so you're actually creating variables named $OutputArray[1], $OutputArray[2], etc.

So I suspect your variables are getting populated, but then you're trying to reference them by saying something like $tempvar = $OutputArray[1], and that's looking for the element in position 1 (by the way, array indexing starts at 0 in PowerShell) of a variable that doesn't exist called $OutputArray (because you never created $OutputArray, you created $OutputArray[1], $OutputArray[2], etc.), and you get a null reference error. You can actually just run the command Get-Variable with no parameters and it will show you all the variables you've created, and you'll probably see all these individual $OutputArrays.

And that's why in your example you've had to wrap your variable reference in curly braces: they tell the interpreter not to do any interpreting, just take the string between the braces exactly as is. That also means it's not even going to interpret $count, so you're setting a variable over and over that is literally named OutputArray[$count] (very rude of PowerShell to even allow you to put some of these characters in variable names).

Because of the way your code is written, it runs, but it just keeps stuffing all the values into the one variable named OutputArray[$count] rather than OutputArray[1], OutputArray[2], etc.

I felt generous, and also wanted to actually understand what you were trying to do, so I rewrote this in a couple of ways. First, actually creating an array of arrays. No need to mess around with dynamically defining variable names, though you're treating the array in a very non-PowerShell way, so this is kind of weird: you have to pre-initialize the output array so that you can actually insert into it.
code:
$outputarray = [Object[]]::new($chunksize+1)
for($count=1; $count -le $chunksize; $count++)
{
    $OutputArray[$count] = @()
    for($i=0; $i -le $parts; $i++){
        $start = $i*$chunkSize
        $end = (($i+1)*$chunkSize)-1
        $OutputArray[$count] += ,@($InputArray[$start..$end])
    }
}
This will produce a single variable OutputArray, which is an array of 3 elements, and each element is an array with $parts number of events.

If you're truly interested in dynamic variables, well, here you go. This is very messed up, because what you're trying to do is, again... very messed up.
code:
for($count=1; $count -le $chunksize; $count++)
{
    New-Variable -Name "OutputArray$count" -Value @()
    #$OutputArray[$count] = @()
    for($i=0; $i -le $parts; $i++){
        $start = $i*$chunkSize
        $end = (($i+1)*$chunkSize)-1
        Set-Variable -Name "OutputArray$count" -Value ((Get-Variable -Name "OutputArray$count").Value += ,@($InputArray[$start..$end]))
    }
}
That will produce 3 new variables, OutputArray1, OutputArray2, and OutputArray3, each variable having an array with $parts number of events.
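For what it's worth, if the underlying goal is just "split an array into chunks," the way to sidestep dynamic variable names entirely is to return an array of arrays (Split-Array is a made-up helper name, a sketch only):

```powershell
# Hypothetical chunk splitter: emits one sub-array per chunk,
# no dynamically named variables required
function Split-Array {
    param([object[]]$InputArray, [int]$ChunkSize)
    for ($i = 0; $i -lt $InputArray.Count; $i += $ChunkSize) {
        $end = [Math]::Min($i + $ChunkSize, $InputArray.Count) - 1
        ,($InputArray[$i..$end])   # leading comma keeps each chunk intact
    }
}

$chunks = Split-Array -InputArray (1..10) -ChunkSize 3
$chunks.Count   # 4 chunks: 1-3, 4-6, 7-9, 10
```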

FISHMANPET
Are your clients connecting directly to the printer's IP or to a print queue on a server? I think you've got a process problem, not a technology problem. There's no standard protocol for "printer names"; you can go into the properties of a printer and just set the "name" to whatever. So my guess is that in your environment the printer name means something, and the fact that it can change means there's some kind of process for installing/updating printers on clients, so you need to look into that business process to get an understanding of what a printer "name" means.

FISHMANPET
If by "CM" you mean MEMCM (formerly, but not actually, SCCM), aka ConfigMgr, then you can also log in a format that CMTrace will understand, because odds are good you'll be comfortable with it, and you know it will be there if you're troubleshooting deployments.

FISHMANPET
I'm not talking about returning data to MEMCM, I'm talking about writing logs in a format that CMTrace will understand:
https://janikvonrotz.ch/2017/10/26/powershell-logging-in-cmtrace-format/
If you're not familiar with/aware of CMTrace then look for it in C:\Windows\CCM\CMtrace.exe and use it to open up some logs in C:\Windows\CCM\Logs and be amazed at its capability.
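The format itself is just one structured line per entry; here's a minimal sketch of a writer (the function name, default path, and exact field layout are my reconstruction from the linked post, so double-check against real CCM logs):

```powershell
# Write one CMTrace-parseable log line (sketch; field layout reconstructed)
function Write-CMTraceLog {
    param(
        [string]$Message,
        [string]$LogFile   = "$env:TEMP\MyScript.log",  # made-up default
        [string]$Component = 'MyScript',
        [int]$Type = 1   # 1 = info, 2 = warning, 3 = error
    )
    $time = (Get-Date).ToString('HH:mm:ss.fff')
    $date = (Get-Date).ToString('MM-dd-yyyy')
    $line = '<![LOG[{0}]LOG]!><time="{1}+000" date="{2}" component="{3}" context="" type="{4}" thread="{5}" file="">' -f `
        $Message, $time, $date, $Component, $Type, $PID
    Add-Content -Path $LogFile -Value $line
}
```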
