Pile Of Garbage
May 28, 2007



Yeah you can definitely run a PowerShell script which deletes itself. If you create a script with the following and run it then it will delete itself without issues ($MyInvocation.MyCommand.Path is more reliable than $MyInvocation.InvocationName, which only resolves correctly if the working directory hasn't changed):

code:
Remove-Item -Path $MyInvocation.MyCommand.Path


Pile Of Garbage
May 28, 2007



For scripts that are designed to run in an unattended and non-interactive fashion what is the best way to handle logging? I've come up with a method but was wondering if anyone has anything better/can suggest improvements.

First I'll add two non-mandatory parameters to the script for specifying the log filename and path ($LogFilePath defaults to the current working directory and $LogFileName defaults to "<SCRIPT_NAME>_LogFile_<DATE>-<TIME>.csv"):

code:
Param(
    [Parameter()]
    [String]$LogFilePath = $PWD.Path,

    [Parameter()]
    [String]$LogFileName = "$($MyInvocation.MyCommand)_LogFile_$((Get-Date).ToString('yyMMdd-HHmm')).csv"
)
Then I'll create the log file and write the column headers (sometimes I'll wrap New-Item in a try/catch block just in case):

code:
$LogFile = New-Item -Name $LogFileName -Path $LogFilePath -ItemType File -Force
Out-File -InputObject '"Date","Time","Level","Message"' -FilePath $LogFile -Encoding UTF8 -Append
Last of all I add a script method to the FileInfo object returned by New-Item that can be used for writing an entry to the log file:

code:
Add-Member -InputObject $LogFile -MemberType ScriptMethod -Name 'WriteLogFile' -Value {
    Param(
        [Parameter(Mandatory=$true)]
        [String]$Message,

        [Parameter()]
        [ValidateSet('Info', 'Warning', 'Error', 'Critical')]
        [String]$Level = 'Info'
    )

    $Date = Get-Date
    Out-File -InputObject ("`"$($Date.ToString('yyyy-MM-dd'))`",`"$($Date.ToString('HH:mm:ss.fff'))`",`"$($Level.ToLower())`",`"$Message`"") -FilePath $this -Encoding UTF8 -Append
}
Then throughout the script I can easily write entries to the log by calling the method:

code:
$LogFile.WriteLogFile('Someone stole your trees', 'Error')
That is just for a standalone script; usually I'd have the routine in a module of common functions.

Does anyone do things differently? Is there a better way?

Pile Of Garbage
May 28, 2007



Yeah Start/Stop-Transcript is great for ad-hoc stuff or when doing debug tracing. However the majority of the stuff I write is for automation, so it runs unattended, sometimes against very large sets of objects (usually triggered by Scheduled Tasks). This means I need timestamped log entries and the ability to control exception handling, so that when item 7,845 of 10,000 fails I can write to the log, continue execution, and investigate the error later.

Pile Of Garbage
May 28, 2007



Pro-tip: always be Googling full type names; 99% of the time the first result is the relevant MSDN page: https://msdn.microsoft.com/en-us/library/system.security.accesscontrol.filesystemrights(v=vs.110).aspx. As GPF mentioned, that class is an enum so remove those double-quotes.

However I'd like to be a dick and question your motives: why do you need to apply full-control permissions on objects in AD and is there a reason why you can't just use inheritance? The primary reason I ask is that explicit object-level permissions rapidly become an administrative and security PITA.

Pile Of Garbage
May 28, 2007



Jowj posted:

Thanks GPF, cheese-cube. I do not use .net *ever* so apologies for the fundamental mistakes. I think I'm gonna buy a .net book once this quarter is over; it seems that there's a bunch of functionality in Powershell that I just can't get at well because I'm stuck not understanding .net poo poo very well.

If you're only working with PowerShell then there's very little that you have to learn specifically about .NET outside of understanding OOP fundamentals. When you understand types, classes, methods, etc. you'll be able to take advantage of pretty much any .NET class in PowerShell (Using MSDN doco of course).

Unfortunately I don't really have any recommendations regarding reading materials but others might.

Jowj posted:

Naw, you're not being a dick, its a good question.

For context, this is part of a set of scripts I'm making for DB cluster build automation. Security doesn't want to grant the DBAs/MSSQL account permission to create instance objects in this OU so each time a new cluster is built every instance has to be manually added and have the clusterobj associated with the instance granted fullcontrol. As to why we're not doing inheritance its because I don't have enough time to get approval to change our process and then implement the change before the project is due. Everything I've read makes it look so much easier if I have poo poo configured at the OU level instead of at the individual object level, just :\.

So, no business justification really, just timelines from management.

Hah, yeah I see what you're doing and I've been in that same situation. Good luck.

Pile Of Garbage
May 28, 2007



22 Eargesplitten posted:

I wrote a 1-liner yesterday that I had some trouble with.

code:
gci /path . -include "*.auc" /force /recurse 
I also tried without specifying the path. It worked on a few folders, but I got an error saying access was denied on most of them. I ran it as an administrator, so I don't see why that should be. I even got it on some of my own user folders.

Lmao yeah your syntax is all kinds of messed up (Funny how it still works though). This looks a bit nicer:

code:
Get-ChildItem -Path '.\' -Include '*.auc' -Force -Recurse
However, the access-denied errors you're seeing are most likely due to it trying to traverse NTFS junction points, of which there are many in user profile folders (e.g. "C:\Users\Default\Application Data", "C:\Users\Default\Start Menu", etc.). These junction points are hidden, so if you run Get-ChildItem without the Force switch it won't try to traverse them. However, if the files you're searching for are also hidden then you may just have to catch the exceptions.
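If you want a clean listing but still want to know what got skipped, one approach (just a sketch; $GciErrors is an arbitrary variable name) is to suppress the errors while capturing them in an error variable:

code:
# Suppress access-denied noise but keep the errors for later inspection
$GciErrors = @()
Get-ChildItem -Path '.\' -Include '*.auc' -Force -Recurse -ErrorAction SilentlyContinue -ErrorVariable +GciErrors
# The paths that couldn't be read:
$GciErrors | ForEach-Object { $_.TargetObject }
The + prefix on the error variable name makes each run append to the array instead of overwriting it.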

Pile Of Garbage
May 28, 2007



*bursts into thread, panting and out of breath*

I THINK you'll find that's more just the registry provider being terrible, not PowerShell itself.

Edit: PowerShell is the best because you can do this:

code:
(New-Object Media.SoundPlayer([Text.Encoding]::UTF8.GetString([Convert]::FromBase64String('aHR0cDovL2JpdC5seS8xQnJaQ3Rr')))).PlayLooping()

Pile Of Garbage fucked around with this message at 14:40 on Sep 13, 2016

Pile Of Garbage
May 28, 2007



Better yet: add it to their PowerShell profile so that it runs whenever they launch PowerShell.

Pile Of Garbage
May 28, 2007



Moundgarden posted:

It performs like absolute garbage, presumably because of the triple wildcard in the filepath and all the sorting I need to do. I couldn't find a way around that, and unfortunately I have no power to modify the folder structure. Any tips on optimizing something like this or am I pretty much SOL?

Get-ChildItem is notoriously slow, especially when working recursively with a large amount of files/folders: https://blogs.msdn.microsoft.com/powershell/2009/11/04/why-is-get-childitem-so-slow/

So yeah, Robocopy.

Pile Of Garbage
May 28, 2007



CLAM DOWN posted:

Uggggh COM objects. Yeah, I found something similar to what I want to do here: http://mickitblog.blogspot.ca/2016/07/powershell-retrieving-file-details.html

Ugggggggggggggh

You can use the FromFile(String) method of the System.Drawing.Image class to retrieve the metadata of an image file. Unfortunately it's not exactly easy to parse as the values are either integers or byte arrays. This article provides some info about how to parse it: https://msdn.microsoft.com/en-us/library/xddt0dz7(v=vs.110).aspx.

It's still possible though. Using the info in that article I wrote this snippet that retrieves the value of the Equipment Manufacturer property item (ID 271 or 0x010F) from an image and then converts it to a string (The property has a type of 2 which indicates that it's a byte array of ASCII encoded text):

code:
Add-Type -AssemblyName System.Drawing
$Encoding = New-Object -TypeName System.Text.ASCIIEncoding
$Manufacturer = ([System.Drawing.Image]::FromFile('C:\Temp\farts.jpg')).PropertyItems | Where-Object -FilterScript { $_.Id -eq 271 }
$Encoding.GetString($Manufacturer.Value)
This article lists the IDs of the metadata property tags: https://msdn.microsoft.com/en-us/library/system.drawing.imaging.propertyitem.id(v=vs.110).aspx. Using the IDs and the property types you should be able to decode the majority of the metadata but yeah, kinda annoying.

Pile Of Garbage fucked around with this message at 00:48 on Oct 5, 2016

Pile Of Garbage
May 28, 2007



Just want to note that the only reason I Base64 encoded the URL was so that I could post it on Twitter and avoid their auto URL parser.

Also if you really want to be an rear end in a top hat you can render their computer unusable with this:

code:
do{iex $env:SystemRoot\System32\Bubbles.scr}while($true)
All it does is continuously launch the Bubbles screensaver. The kicker is that the screensaver captures all mouse/keyboard input when running, so you can't even get Ctrl+Alt+Del in and will have to hard-reset. Should work on all versions of Windows but only tested on 7 and 10.

Edit: I guess they could kill the powershell.exe process remotely...

Pile Of Garbage
May 28, 2007



Good stuff. Minor optimisation: you could use the DateTime.DaysInMonth method and replace the ForEach-Object loop with a do/while loop (who knows, maybe there will be a month with more than 32 days lol):

code:
$DayCounter = 0
do {
    $evaldate = (Get-Date -Year $Year -Month $Month -Day 1).AddDays($DayCounter)
    if ($evaldate.Month -eq $Month) {
        if ($evaldate.DayOfWeek -eq $Day) {
            $alldays += $evaldate.Day
        }
    }
    $DayCounter++
} while ($DayCounter -lt [System.DateTime]::DaysInMonth($Year, $Month))
Speaking of scripts, I was trying to use this one from Microsoft recently however I found that it has no proxy server support: https://www.microsoft.com/en-us/download/details.aspx?id=46381. So I went and modified it so that a proxy server can be specified: http://pastebin.com/1JJXj7SP. Lines 64-66 and 493-505 have been added and line 176 has been modified. It uses the same strings.psd1 file and works like a charm! Oh, except for the fact that the modifications invalidate the signature so if your execution policy is set to RemoteSigned it probably won't run...

Pile Of Garbage
May 28, 2007



The Fool posted:

I made some modifications to this code that allows a powershell script to trigger a uac prompt. Specifically, I've modified it so it can run as a function, and can run from a mapped network drive.

Call this function in a script that needs admin privileges, it will trigger a uac prompt, and the re-launch the script with the credentials you entered in the UAC prompt.

I don't know how useful this is to other people, but I'm getting some mileage out of it.

Nice, this is useful. I was recently banging my head against the wall trying to get a script to elevate properly when run interactively from a batch file (I created it for helldesk people and wanted to pass it some set parameters). The bodge I came up with was using Start-Process with the RunAs verb:

code:
%SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe -Command "& {
 Start-Process -FilePath $env:SystemRoot\system32\WindowsPowerShell\v1.0\powershell.exe -ArgumentList '-File \\example.com\dfs_root\Script\Script.ps1 -Action foo -OtherAction bar' -Verb RunAs
}"

Walked posted:

Hey with PowerShell gallery modules- what does the x and c prefix denote? I can't find a definitive answer on Google and it's driving me mad

Not sure what you're referring to, can you provide an example?

Pile Of Garbage
May 28, 2007



Hughmoris posted:

Vague question but does anyone here use Powershell for things outside of sysadmin type work? For web scraping, text parsing, console applications etc...? I like exploring new languages but I'm not going to need it for any sort of administrator duties.

I once wrote a script to scrape and harvest documents from exposed Lotus NSF indexes on the internet but that was just something casual.

Pile Of Garbage
May 28, 2007



The Fool posted:

I've kinda felt that you shouldn't be doing anything with powershell that would require a GUI anyway. Your scripts should be written to be able to be ran headless, and if you need a GUI launcher type thing, do that in a different language.

One thousand times this. If you're building GUIs with PowerShell then you're doing it wrong. Conversely if you're writing scripts to run unattended and don't implement logging then you should be taken round back and shot.

Pile Of Garbage
May 28, 2007



Yeah I hardly ever see people implement proper exception handling in scripts. Also input validation, especially checking for $null, which can have hilarious consequences if not done properly. For example, the Get-Mailbox command in Exchange PowerShell accepts $null for the Identity parameter and will just return all mailboxes. So if you run the following all your mailboxes will be disabled:

code:
$null | Get-Mailbox | Disable-Mailbox -Confirm:$false
You can see how this can backfire in a script if your logic is off.
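A cheap guard against this sort of thing (a sketch, not Exchange-specific; $MailboxIdentity is a made-up name for whatever feeds the pipeline) is to validate for $null before anything destructive runs:

code:
# Refuse to continue if the input ever ends up null or empty
if ([String]::IsNullOrEmpty($MailboxIdentity)) {
    throw 'MailboxIdentity is null or empty, refusing to continue'
}
In a param block you can get a similar effect with the [ValidateNotNullOrEmpty()] attribute.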

Pile Of Garbage
May 28, 2007



nielsm posted:

On the other hand, I think if you need to make pipelines, things become much more complex. Which is why you definitely want to pack all your logic into a module beforehand, so you only need a thin layer of glue in the GUI.

You can also set up "runspaces" in the programming model to do remoting directly. Unfortunately, it seems the security aspects of remoting can be difficult, at least if you want to target Exchange. I experimented with setting that up once, and never found something that worked.

What security aspects of Exchange remoting did you find difficult?

Pile Of Garbage
May 28, 2007



Sounds like you'd possibly have to debug TLS with Wireshark, which can be painful, so yeah.

Pile Of Garbage
May 28, 2007



CLAM DOWN posted:

Nope it's variable, and yeah can't run it in a loop :( I know this isn't possible I'm just venting.

Out of interest what's the command you're running which is taking so long?

Pile Of Garbage
May 28, 2007



Avenging_Mikon posted:

Holy gently caress I love powershell. Turns out my supervisor had made a script to grab users from an AD group and output it to a csv file. It took a couple tries to configure the script to what I needed, but the errors were useful and helped me tune it, and now I'm in a terminal server playing around with Powershell ISE to see how it behaves (Only 32-bit Win 7 on the actual desktop, server 2016 on the terminal server), and I'm in god-damned love. I'm debating setting up a vpn to my home computer with Win 10 so I can gently caress around while learning at work without nuking a server accidentally.

As you're mentioning different OS versions, be mindful of the PowerShell/WMF version that you're working with. Windows 10 and Server 2016 have WMF 5.1 out-of-the-box, which is nice; however, it is incompatible with a lot of products: https://msdn.microsoft.com/en-us/powershell/wmf/5.0/productincompat. I'd recommend targeting your scripts for WMF 4.0 unless your environment is bleeding-edge. You can check the PowerShell version in a session using the $PSVersionTable automatic variable.
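If you do decide to target WMF 4.0, you can have the script check for itself (a rough sketch):

code:
# Bail out early on older hosts
if ($PSVersionTable.PSVersion.Major -lt 4) {
    throw 'This script requires PowerShell 4.0 or later'
}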

Avenging_Mikon posted:

I tried to update the help on my new AWS Server 2016 instance, and gooooooot... this:
code:
Update-Help : Failed to update Help for the module(s) 'BitsTransfer, Whea' with UI culture(s) 
{en-US} : The value of the HelpInfoUri key in the module manifest must resolve to a container or 
root URL on a website where the help files are stored. The HelpInfoUri 
'https://technet.microsoft.com/en-us/library/dd819413.aspx' does not resolve to a container.
At line:1 char:1
+ Update-Help
+ ~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (:) [Update-Help], Exception
    + FullyQualifiedErrorId : InvalidHelpInfoUri,Microsoft.PowerShell.Commands.UpdateHelpCommand
That's just two modules that didn't get updated help, right? Anything I'm going to need to know about those any time soon? Or can I safely disregard this?

Correct, that's just two modules. I'd disregard it. All of the documentation is online as well so you can just Google cmdlet names to get the deets.

Pile Of Garbage
May 28, 2007



Avenging_Mikon posted:

If I'm learning in 5.1, is there a way to check for 4.x compatibility? Should just say in the help in 5.1, right? If I recall, looking at common variables it said in the help some were added in 5.0. Other functions and cmdlets should say that too?

To be honest I'm not sure of the easiest way to check which PowerShell version a cmdlet is supported in. It used to be easy when Microsoft hosted the help on TechNet, but about 6 months ago they ported it across to MSDN and now everything is all over the joint. I doubt you'll run into many issues though, as the number of new cmdlets introduced in 5.1 isn't as large as in, say, 3.0 (that was a huge leap).

Feel free to post any questions, I love PowerShell and love spreading wisdom.

Pile Of Garbage
May 28, 2007



Get-ADGroupMember is slow as balls and starts to go to poo poo if you've got groups with lots of members. It's quicker to pull the member attribute of the group object and then pass that to Get-ADObject:

code:
(Get-ADGroup -Identity 'Users' -Properties Member).Member | Get-ADObject
That is usually 3-5x faster than Get-ADGroupMember (you can check for yourself with Measure-Command). With some logic and a loop you can do it recursively, although that may end up being costlier: Get-ADGroupMember returns unique objects, so with the above you'd probably have to pipe the output to Select-Object -Unique, which is expensive.
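The 3-5x figure is from my environment; timing the two approaches in yours is straightforward (group name here is just an example):

code:
# Compare wall-clock time of the two approaches
Measure-Command { Get-ADGroupMember -Identity 'Users' }
Measure-Command { (Get-ADGroup -Identity 'Users' -Properties Member).Member | Get-ADObject }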

Pile Of Garbage
May 28, 2007



Walked posted:

Well worth learning.

Seconded, modules are awesome and well worth learning if you use PowerShell frequently.

Inspector_666 posted:

Apparently all I had to do was add two lines to the existing script, rename it something that fits with the proper verb-noun style, save it as a .psm1 in the right place, and then import it the usual way.

I know there's the whole manifest file and I need to actually write proper help documentation for stuff, but that was a lot easier than I was expecting.

If you put your module into one of the locations referenced by the PSModulePath environment variable then you can take advantage of automatic module loading which negates the need to manually import the module: https://msdn.microsoft.com/en-us/library/dd878284(v=vs.85).aspx.

Pile Of Garbage fucked around with this message at 17:07 on Feb 16, 2017

Pile Of Garbage
May 28, 2007



Briantist posted:

Yeah modules are the way to go. Ensure they are well-formed modules with proper paths and manifests and then put them in a path that's included in variable cheese-cube mentioned, that way you can import them by name only.

Sure, being able to import modules by name instead of path is nice but I was more talking about the implicit importing feature introduced in PowerShell 3.0 which loads modules automatically meaning that you don't have to call Import-Module. As long as your module shows up in the output of Get-Module -ListAvailable it can be automatically imported. Outside of scripting this feature is especially useful if you do a lot of administration and whatnot via the CLI.

Pile Of Garbage
May 28, 2007



Briantist posted:

It's definitely useful in the CLI; I never rely on it in scripts.

Agreed 100%. I always explicitly load modules in scripts and include exception handling to ensure that they do actually load.

Pile Of Garbage
May 28, 2007



Briantist posted:

Do you do a whole try {} catch {} around it?

I either use #Requires -Module or Import-Module Module -ErrorAction Stop and leave it at that.

Wow OK so I'll sheepishly admit that I've never known about #Requires statements in PS, however I'll definitely be using them from now on. It's true, you learn something new every day!

Previously I've just been using try {} catch {} around Import-Module, primarily because I write scripts which are meant to run unattended so I need to catch exceptions and write them out to a log file before throwing them.
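That try/catch pattern looks roughly like this (a sketch; ActiveDirectory is just an example module, and $LogFile stands in for whatever log file the script uses):

code:
try {
    Import-Module -Name ActiveDirectory -ErrorAction Stop
}
catch {
    # Write the failure to the log before re-throwing so the scheduler sees a failure
    Out-File -InputObject "Failed to import module: $($Error[0])" -FilePath $LogFile -Encoding UTF8 -Append
    throw
}
The -ErrorAction Stop is the important bit: it turns a non-terminating import error into a terminating one so the catch block actually fires.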

Pile Of Garbage
May 28, 2007



Briantist posted:

I guess the log file scenario is one reason, though these days I've set up automatic transcription through group policy so I tend to avoid writing logs manually (that really needs v5 to work well though).

The majority of exceptions don't really provide much info to go on so I like to write $Error[0] plus some context to the log file.

Regarding WMF 5.0 as I mentioned earlier in the thread there's a lot of stuff that has been deemed incompatible with WMF 5.0 (https://msdn.microsoft.com/en-us/powershell/wmf/5.0/productincompat) so you're pretty much stuck with 4.0 unless you're in a green-fields latest and greatest environment.

Pile Of Garbage
May 28, 2007



anthonypants posted:

I like being clear, and I think it's important, which is one of the reasons why I try to match capitalization and type out all the cmdlet names instead of using aliases.

:same:

Using full cmdlet names and parameter names will cost you nothing and often save you debugging time.

Edit: re this SMTP proxy address malarkey, pretty sure there's a native EMS cmdlet for managing addresses.

Pile Of Garbage
May 28, 2007



I had to Google it but I presume AHK = AutoHotKey? I wouldn't recommend using PowerShell to do simulated transactional stuff like you're already doing with AHK. I've never used Salesforce personally but it seems strange that you're using AHK with it for the purpose of automation. Surely they have an API that you can hook into?

Regarding your last question: PowerShell is a fully-featured scripting language, so control structures like if/else are fully supported.

Pile Of Garbage
May 28, 2007



AAAAA! Real Muenster posted:

Hah, yeah, I should have spelled that out, it is AutoHotKey, sorry about that. I use it because many of the cases I work in Salesforce are repetitive and 1) I'm lazy 2) I was getting carpel tunnel manually entering up to two dozen clicks/interactions I do on each of the 50+ cases I have to work in a day. It is mostly just data entry into fields that either require input in each case, or the fields default to the wrong thing because we have not had a Salesforce admin in over 2 years. I taught myself AHK one weekend and now I am the highest case closer on my team and spend all this newfound free time doing Useful Things, like becoming the leading expert in the company about our really broken and lovely product (despite being a level 1 :haw: ), playing Ping Pong, and reading threads on SA.

I dont have much tech background or training before I got this job (I dont have a college degree), so working with an API or more advanced scripts is beyond my knowledge at this point, but I am obviously trying to learn and do more.

From the sound of it you're already smashing things by taking advantage of AHK so I reckon the next step is to learn a programming language which will allow you to interface with Salesforce directly using their APIs. As I said before I have zero experience with Salesforce however it looks like they have multiple APIs available. This means you could probably learn any language and then use that effectively with the product. PowerShell is an option here however it's not exactly designed for this kind of work and I suspect that other languages have libraries available which make things much easier. As to choosing a language I can't really comment as I pretty much only work with PS and .NET. Maybe someone else can point you in the right direction.

Pile Of Garbage
May 28, 2007



Eschatos posted:

I wrote a script! Inspired by a script someone else posted that inventories PCs by subnet, I wrote my own take on it that uses AD information instead. My first real script more than a few dozen lines, and definitely made for a great learning experience.

Link

It works by taking a list of all computers from Active Directory, then filtering out all non-desktop OSes. It then iterates through the list, running a bunch of WMI queries on every computer and kludging the results together into one big array, which is then saved to a .csv on disk. The bit that I'm particularly happy with is now it can save a master .csv and compare new results with it, overwriting old information or failed queries with more up to date results. In the future I plan to improve it by having it query more potentially useful information from PCs, figure out how to autorun it daily from one of our servers, and run the queries as jobs so that it doesn't take two hours to scan everything.

To expand on what PBS has said, each PS job is spawned in a separate powershell.exe process which consumes 30-50MB of memory. You can very quickly consume all available memory on a system which will cause the calling PS instance to throw an exception.

If you have a task which involves executing commands against a large number of remote systems and you want to run it in parallel then it is better to use remoting to run the commands on the remote systems themselves.

If you do want to run the jobs locally then you'll have to implement a throttling routine which backs-off on spawning jobs until execution concurrency is below a certain threshold.
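A crude throttling routine along those lines (just a sketch; the 10-job cap and 5-second back-off are arbitrary, and $Computers is assumed to hold the list of target hostnames):

code:
foreach ($Computer in $Computers) {
    # Back off while too many jobs are still running
    while ((Get-Job -State Running).Count -ge 10) {
        Start-Sleep -Seconds 5
    }
    Start-Job -ScriptBlock { param($Name) Get-WmiObject -Class Win32_OperatingSystem -ComputerName $Name } -ArgumentList $Computer
}
# Wait for the stragglers and collect everything
Get-Job | Wait-Job | Receive-Job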

Pile Of Garbage
May 28, 2007



Your assumptions are naive and I think you should rethink things and aim towards scalability if you want your script to be anything other than a pet project.

Pile Of Garbage
May 28, 2007



Irritated Goat posted:

Ok. Help me believe I'm not insane.

I'm using the NTFSSecurity module to assign Domain Users permissions to a folder.

code:
Import-Module NTFSSecurity
New-Item -type directory -path <local location for app>
Copy-Item <network location of app> <local location of app>
Get-item .\App | Add-NTFSAccess -Account "Domain\Domain Users" -AccessRights Modify
When it hits the Add-NTFSAccess portion, I get an access denied. I'm local admin on this PC so I'm confused as to why I'd even run into this issue. :confused:

The script basically just
    creates a folder
    copies the network install of it to the newly created folder
    Gives domain\domain users modify access on it as the application auto-updates itself

I'm not 100% familiar with that module but I suspect that the Add-NTFSAccess cmdlet doesn't accept pipeline input (Which is lovely design...). You'll probably just have to use the -Path parameter with either the full-path as a string or use Get-Item:

code:
Add-NTFSAccess -Path (Get-item .\App).FullName -Account "Domain\Domain Users" -AccessRights Modify
Edit: Just noticed that you are referencing the path with the relative .\ notation. That will resolve relative to $PWD (the current working directory), and it's best not to use relative paths in a script as the result will be inconsistent. I'd go with something similar to this:

code:
Import-Module NTFSSecurity
$Folder = New-Item -type directory -path <local location for app>
Copy-Item <network location of app> $Folder.FullName
Add-NTFSAccess -Path $Folder.FullName -Account "Domain\Domain Users" -AccessRights Modify

Pile Of Garbage fucked around with this message at 16:38 on Aug 29, 2017

Pile Of Garbage
May 28, 2007



The Claptain posted:

Are you running an elevated Powershell? Also you may not have appropriate permissions on objects, even as an administrator on the machine, so you may need to first take ownership and grant yourself appropriate permissions.

This is a good point. Are you running the script in the context of a local user or a domain user? A local user would not be able to resolve domain objects like the Domain Users group.

Also what version of PS are you using? Can you just use the native Get-Acl and Set-Acl cmdlets?

Pile Of Garbage
May 28, 2007



To come back to the original question, if you need to allow Domain Users permission to modify the location then why don't you just copy the relevant files to somewhere like $env:PUBLIC or a similar location where users can write to by default? Relying on permission inheritance is always simpler than explicitly defining permissions. I'm assuming that you're attempting to copy something to local machines, correct?

Pile Of Garbage
May 28, 2007



Just to expand on interactive confirmation prompts: a do/while loop is handy to ensure you get a desired response:

code:
do { $Continue = Read-Host -Prompt "Are you sure you want to disable $userID? (Y/N)" } while ($Continue -ne 'y' -and $Continue -ne 'n')

Pile Of Garbage
May 28, 2007



Anyone done XML validation against a schema in PowerShell? It looks somewhat straightforward using the .NET classes, but if anyone has a working sample that would be great!

Pile Of Garbage
May 28, 2007



nielsm posted:

I'm wondering if there are some idioms or syntaxes I'm missing, working with the regular MS ActiveDirectory module.

Say I have a list of user names, and a list of group names. I want to know what users are not in each group, and I want to simply add the missing users. Using Add-ADGroupMember with -Members @("user1", "user2") fails if even a single of the users is already member of the group, and making a loop (ForEach-Object) over the users is awkward and seems backward. Is there a better way?

Is there a good way to present a table (matrix) of AD user objects (with MemberOf property fetched) with some select groups as columns, containing member true/false flags? One that doesn't involve a loop and Add-Member (or creating PSCustomObjects).

I find it's easiest to compare arrays of common attribute value types with the -in / -notin comparison operators. Assuming you have two text-files with lists of users and groups something like this should work (The member AD attribute contains an array of all group members referenced by their distinguished name):

code:
$Users = Get-Content -Path 'C:\Temp\UserList.txt' | Get-ADUser
$Groups = Get-Content -Path 'C:\Temp\GroupList.txt' | Get-ADGroup -Properties member

foreach ($Group in $Groups) {
    Add-ADGroupMember -Identity $Group -Members ($Users | Where-Object { $_.DistinguishedName -notin $Group.Member })
}

The Claptain posted:

I'm phone posting, so I can't check this, but you could probably use Compare-Object cmdlet on outputs of Get-ADUser and Get-ADGroupMember, which should return you a list of users that are not in a specified group.

I've found that using Compare-Object can be difficult as differences in object member sets can throw it off. It's usually easier to just compare arrays of strings.
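To illustrate comparing plain strings rather than rich objects (a sketch; 'SomeGroup' is a placeholder and $Users is assumed to be output from Get-ADUser):

code:
# DNs are just strings, so the comparison is unambiguous
$MemberDNs = (Get-ADGroup -Identity 'SomeGroup' -Properties member).member
$MissingUsers = $Users | Where-Object { $_.DistinguishedName -notin $MemberDNs }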

Pile Of Garbage
May 28, 2007



Inspector_666 posted:

Do this, it's the preferred method to ` from everything I found.

As long as you do it with hash tables and not arrays. IMO you should always be using named parameters for cmdlets. It's far less ambiguous and less likely to break compared to using positional parameters.
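For reference, splatting with a hash table looks like this (the parameters here are made up for illustration):

code:
# Build the parameters as a hash table, then splat with @
$GciParams = @{
    Path    = 'C:\Temp'
    Include = '*.log'
    Recurse = $true
}
Get-ChildItem @GciParams
This is equivalent to calling Get-ChildItem with those named parameters directly, minus the backtick line-continuations.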


Pile Of Garbage
May 28, 2007



The Iron Rose posted:

Question guys, that's hopefully basic.

I have a script that grabs a CSV of all detected AV threats from the web, saves it to $output, sorts it, selects certain columns, than bundles those off into HTML and emails it.

There's a certain column in the CSV, which is called "Last Found." In that column, it simply lists the date that the threat was last found, in the following format: MM/DD/YYYY HH/MM. An example would be "1/12/2018 19:16"


What I want to do is compare that against the current date, and remove all rows where "Last Found" is over a week from the current date. I think the way I want to do this is something like what I have below:

code:
Get-Content $output | Where{$_ -notmatch $date} | Out-File $output
But for the life of me I can't figure out how to compare the current date against the date under "Last Found" and strip out rows older than 7 days as a result.


Once that's done, I just call the rest of my script like normal. Relevant bits below:

The relevant command:
code:
$csv = Import-Csv $output | sort "Last Found" | Select "File Name",DeviceName,"Last Found", "File Path" | ConvertTo-Html -head $a -Body @"
"@
One other thing to note - I don't know poo poo about powershell :v: and I'm mostly just learning by doing, well, things like this at work. Any pointers would be greatly appreciated.

Assuming the date in the "Last Found" column is well-formed you can use a filter like this:

code:
[System.DateTime]::Parse($_.'Last Found') -gt (Get-Date).AddDays(-7)
Also if you're pulling the data and then processing it in the same script you should just pass it around as an object instead of exporting and re-importing it.
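Putting both suggestions together, something like this (a sketch based on the variable names in your post; $output and $a come from your original script):

code:
# Import once, filter out rows older than 7 days, and keep the result in memory
$Recent = Import-Csv $output | Where-Object { [System.DateTime]::Parse($_.'Last Found') -gt (Get-Date).AddDays(-7) }
$csv = $Recent | Sort-Object 'Last Found' | Select-Object 'File Name', DeviceName, 'Last Found', 'File Path' | ConvertTo-Html -Head $a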
