|
Yeah, you can definitely have a PowerShell script delete itself. If you create a script with the following and run it, it will just delete itself without issues: code:
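A minimal sketch of such a self-deleting script (assuming PowerShell 3.0+ for the $PSCommandPath automatic variable):

```powershell
# Do whatever work the script needs to do first...
Write-Output "Running from $PSCommandPath"

# ...then remove the script file itself; $PSCommandPath holds the full path
# of the currently-executing script.
Remove-Item -LiteralPath $PSCommandPath -Force
```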
|
# ¿ Aug 10, 2016 01:37 |
|
|
For scripts that are designed to run unattended and non-interactively, what is the best way to handle logging? I've come up with a method but was wondering if anyone has anything better or can suggest improvements. First I add two optional parameters to the script for specifying the log filename and path ($LogFilePath defaults to the current working directory and $LogFileName defaults to "<SCRIPT_NAME>_LogFile_<DATE>-<TIME>.csv"): code:
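A sketch of the approach described above (parameter names as described; the Write-Log helper name and CSV layout are just for illustration):

```powershell
param (
    [Parameter(Mandatory = $false)]
    [string]$LogFilePath = (Get-Location).Path,

    [Parameter(Mandatory = $false)]
    [string]$LogFileName = ('{0}_LogFile_{1}.csv' -f $MyInvocation.MyCommand.Name, (Get-Date -Format 'yyyyMMdd-HHmmss'))
)

# Hypothetical helper: appends a timestamped CSV row to the log file.
function Write-Log {
    param (
        [string]$Severity = 'INFO',
        [string]$Message
    )
    $line = '"{0}","{1}","{2}"' -f (Get-Date -Format 'o'), $Severity, $Message
    Add-Content -Path (Join-Path -Path $LogFilePath -ChildPath $LogFileName) -Value $line
}

Write-Log -Message 'Script started'
```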
Does anyone do things differently? Is there a better way?
|
# ¿ Aug 16, 2016 10:25 |
|
Yeah, Start/Stop-Transcript is great for ad-hoc stuff or debug tracing. However, the majority of the stuff I write is for automation so it runs unattended, sometimes against very large sets of objects (usually triggered by Scheduled Tasks). This means I need timestamped log entries and the ability to control exception handling, so that when item 7,845 of 10,000 fails I can log the error, continue execution and then investigate it later.
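A sketch of that log-and-continue pattern (Do-Something and the log path are placeholders):

```powershell
$logFile = 'C:\Logs\MyScript.log'  # placeholder path
$counter = 0
foreach ($item in $items) {
    $counter++
    try {
        # -ErrorAction Stop turns non-terminating errors into catchable exceptions
        Do-Something -InputObject $item -ErrorAction Stop
    }
    catch {
        Add-Content -Path $logFile -Value ('{0} ERROR item {1}: {2}' -f (Get-Date -Format 'o'), $counter, $_.Exception.Message)
        continue  # keep going; investigate the log afterwards
    }
}
```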
|
# ¿ Aug 16, 2016 15:21 |
|
Pro-tip: always be Googling full type names, 99% of the time the first result is the relevant MSDN page: https://msdn.microsoft.com/en-us/library/system.security.accesscontrol.filesystemrights(v=vs.110).aspx. As GPF mentioned, that class is an enum so remove those double-quotes. However I'd like to be a dick and question your motives: why do you need to apply full-control permissions on objects in AD and is there a reason why you can't just use inheritance? The primary reason I ask is that explicit object-level permissions rapidly become an administrative and security PITA.
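To illustrate the enum point, you can cast the bare value name (no inner double-quotes) and PowerShell coerces it for you:

```powershell
# PowerShell coerces the bare string to the enum value...
$rights = [System.Security.AccessControl.FileSystemRights]'FullControl'

# ...and flag values can be combined in one string:
$rights = [System.Security.AccessControl.FileSystemRights]'ReadAndExecute, Write'
```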
|
# ¿ Sep 6, 2016 18:48 |
|
Jowj posted:Thanks GPF, cheese-cube. I do not use .net *ever* so apologies for the fundamental mistakes. I think I'm gonna buy a .net book once this quarter is over; it seems that there's a bunch of functionality in Powershell that I just can't get at well because I'm stuck not understanding .net poo poo very well. If you're only working with PowerShell then there's very little that you have to learn specifically about .NET outside of understanding OOP fundamentals. When you understand types, classes, methods, etc. you'll be able to take advantage of pretty much any .NET class in PowerShell (using MSDN doco of course). Unfortunately I don't really have any recommendations regarding reading materials but others might. Jowj posted:Naw, you're not being a dick, its a good question. Hah, yeah I see what you're doing and I've been in that same situation. Good luck.
|
# ¿ Sep 6, 2016 19:38 |
|
22 Eargesplitten posted:I wrote a 1-liner yesterday that I had some trouble with. Lmao yeah your syntax is all kinds of messed up (Funny how it still works though). This looks a bit nicer: code:
|
# ¿ Sep 9, 2016 00:40 |
|
*bursts into thread, panting and out of breath* I think you'll find that's more just the registry provider being terrible, not PowerShell itself. Edit: PowerShell is the best because you can do this: code:
Pile Of Garbage fucked around with this message at 14:40 on Sep 13, 2016 |
# ¿ Sep 13, 2016 14:34 |
|
Better yet: add it to their PowerShell profile so that it runs whenever they launch PowerShell.
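For example (the command being added is just a placeholder):

```powershell
# $PROFILE resolves to the current user's profile script; create it if absent
if (-not (Test-Path -Path $PROFILE)) {
    New-Item -Path $PROFILE -ItemType File -Force | Out-Null
}
Add-Content -Path $PROFILE -Value 'Set-Location -Path C:\Scripts'
```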
|
# ¿ Sep 25, 2016 07:37 |
|
Moundgarden posted:It performs like absolute garbage, presumably because of the triple wildcard in the filepath and all the sorting I need to do. I couldn't find a way around that, and unfortunately I have no power to modify the folder structure. Any tips on optimizing something like this or am I pretty much SOL? Get-ChildItem is notoriously slow, especially when working recursively with a large amount of files/folders: https://blogs.msdn.microsoft.com/powershell/2009/11/04/why-is-get-childitem-so-slow/ So yeah, Robocopy.
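For example, Robocopy in list-only mode can enumerate a huge tree far faster than Get-ChildItem (paths are placeholders; a destination argument is required even though /L means nothing gets copied):

```powershell
# /L = list only (no copying), /S = recurse, /NJH /NJS = no job header/summary,
# /FP = full paths, /NC /NS = suppress file class and size columns
robocopy 'C:\HugeShare' 'C:\Dummy' *.log /L /S /NJH /NJS /FP /NC /NS
```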
|
# ¿ Oct 1, 2016 09:46 |
|
CLAM DOWN posted:Uggggh COM objects. Yeah, I found something similar to what I want to do here: http://mickitblog.blogspot.ca/2016/07/powershell-retrieving-file-details.html You can use the FromFile(String) method of the System.Drawing.Image class to retrieve the metadata of an image file. Unfortunately it's not exactly easy to parse as the values are either integers or byte arrays. This article provides some info about how to parse it: https://msdn.microsoft.com/en-us/library/xddt0dz7(v=vs.110).aspx. It's still possible though. Using the info in that article I wrote this snippet that retrieves the value of the Equipment Manufacturer property item (ID 271 or 0x010F) from an image and then converts it to a string (The property has a type of 2 which indicates that it's a byte array of ASCII encoded text): code:
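A reconstruction of that sort of snippet based on the description above (property ID 0x010F, ASCII byte array; the file path is a placeholder):

```powershell
Add-Type -AssemblyName System.Drawing

$image = [System.Drawing.Image]::FromFile('C:\Photos\example.jpg')
try {
    # 0x010F = Equipment Manufacturer; type 2 = ASCII-encoded byte array
    $property = $image.GetPropertyItem(0x010F)
    $maker = [System.Text.Encoding]::ASCII.GetString($property.Value).TrimEnd([char]0)
    Write-Output $maker
}
finally {
    $image.Dispose()
}
```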
Pile Of Garbage fucked around with this message at 00:48 on Oct 5, 2016 |
# ¿ Oct 5, 2016 00:43 |
|
Just want to note that the only reason I Base64 encoded the URL was so that I could post it on Twitter and avoid their auto URL parser. Also if you really want to be an rear end in a top hat you can render their computer unusable with this: code:
Edit: I guess they could kill the powershell.exe process remotely...
|
# ¿ Oct 15, 2016 03:25 |
|
Good stuff. Minor optimisation, you could use the DateTime.DaysInMonth method and replace the ForEach-Object loop with a while loop (Who knows, maybe there will be a month with more than 32 days lol):code:
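Something along these lines (the year/month values are examples):

```powershell
$year = 2016
$month = 10
$day = 1

# DaysInMonth handles leap years etc. for us
while ($day -le [DateTime]::DaysInMonth($year, $month)) {
    Get-Date -Year $year -Month $month -Day $day  # placeholder per-day work
    $day++
}
```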
|
# ¿ Oct 19, 2016 03:12 |
|
The Fool posted:I made some modifications to this code that allows a powershell script to trigger a uac prompt. Specifically, I've modified it so it can run as a function, and can run from a mapped network drive. Nice, this is useful. I was recently banging my head against the wall trying to get a script to elevate properly when run interactively from a batch file (I created it for helldesk people and wanted to pass it some set parameters). The bodge I came up with was using Start-Process with the RunAs verb: code:
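The bodge presumably looked something like this (script path and parameters are placeholders):

```powershell
# -Verb RunAs triggers the UAC elevation prompt for the new process
Start-Process -FilePath 'powershell.exe' -Verb RunAs -ArgumentList @(
    '-ExecutionPolicy', 'Bypass',
    '-File', 'C:\Scripts\HelpdeskTool.ps1',
    '-SomeParameter', 'SomeValue'
)
```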
Walked posted:Hey with PowerShell gallery modules- what does the x and c prefix denote? I can't find a definitive answer on Google and it's driving me mad Not sure what you're referring to, can you provide an example?
|
# ¿ Oct 31, 2016 00:47 |
|
Hughmoris posted:Vague question but does anyone here use Powershell for things outside of sysadmin type work? For web scraping, text parsing, console applications etc...? I like exploring new languages but I'm not going to need it for any sort of administrator duties. I once wrote a script to scrape and harvest documents from exposed Lotus NSF indexes on the internet but that was just something casual.
|
# ¿ Nov 3, 2016 02:13 |
|
The Fool posted:I've kinda felt that you shouldn't be doing anything with powershell that would require a GUI anyway. Your scripts should be written to be able to run headless, and if you need a GUI launcher type thing, do that in a different language. One thousand times this. If you're building GUIs with PowerShell then you're doing it wrong. Conversely if you're writing scripts to run unattended and don't implement logging then you should be taken round back and shot.
|
# ¿ Dec 21, 2016 02:34 |
|
Yeah I hardly ever see people implement proper exception handling in scripts. Also input validation, especially checking for $null, which can have hilarious consequences if not done properly. For example, the Get-Mailbox command in Exchange PowerShell accepts $null for the Identity parameter and will just return all mailboxes. So if you run the following all your mailboxes will be disabled:code:
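i.e. something like this, where an unvalidated variable happens to be $null:

```powershell
# $user never got assigned, so it's $null...
$user = $null

# ...and Get-Mailbox -Identity $null returns *every* mailbox,
# which then all get piped into Disable-Mailbox. Whoops.
Get-Mailbox -Identity $user | Disable-Mailbox -Confirm:$false
```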
|
# ¿ Dec 21, 2016 06:47 |
|
nielsm posted:On the other hand, I think if you need to make pipelines, things become much more complex. Which is why you definitely want to pack all your logic into a module beforehand, so you only need a thin layer of glue in the GUI. What security aspects of Exchange remoting did you find difficult?
|
# ¿ Dec 24, 2016 07:50 |
|
Sounds like you'd possibly have to debug TLS with Wireshark which can be painful so yeah
|
# ¿ Dec 24, 2016 09:40 |
|
CLAM DOWN posted:Nope it's variable, and yeah can't run it in a loop I know this isn't possible I'm just venting. Out of interest what's the command you're running which is taking so long?
|
# ¿ Jan 26, 2017 07:58 |
|
Avenging_Mikon posted:Holy gently caress I love powershell. Turns out my supervisor had made a script to grab users from an AD group and output it to a csv file. It took a couple tries to configure the script to what I needed, but the errors were useful and helped me tune it, and now I'm in a terminal server playing around with Powershell ISE to see how it behaves (Only 32-bit Win 7 on the actual desktop, server 2016 on the terminal server), and I'm in god-damned love. I'm debating setting up a vpn to my home computer with Win 10 so I can gently caress around while learning at work without nuking a server accidentally. As you're mentioning different OS versions, be mindful of the PowerShell/WMF version that you're working with. Windows 10 and Server 2016 have WMF 5.1 out-of-the-box, which is nice; however, it is incompatible with a lot of products: https://msdn.microsoft.com/en-us/powershell/wmf/5.0/productincompat. I'd recommend targeting your scripts at WMF 4.0 unless your environment is bleeding-edge. You can check the PowerShell version in a session using the $PSVersionTable automatic variable. Avenging_Mikon posted:I tried to update the help on my new AWS Server 2016 instance, and gooooooot... this: Correct, that's just two modules. I'd disregard it. All of the documentation is online as well so you can just Google cmdlet names to get the deets.
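For example, a quick version guard at the top of a script (targeting WMF 4.0 as suggested):

```powershell
if ($PSVersionTable.PSVersion.Major -lt 4) {
    throw ('This script requires PowerShell 4.0 or later; found {0}.' -f $PSVersionTable.PSVersion)
}
```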
|
# ¿ Feb 4, 2017 09:27 |
|
Avenging_Mikon posted:If I'm learning in 5.1, is there a way to check for 4.x compatibility? Should just say in the help in 5.1, right? If I recall, looking at common variables it said in the help some were added in 5.0. Other functions and cmdlets should say that too? To be honest I'm not sure what is the easiest way to check what PowerShell version a cmdlet is supported in. It used to be easy when Microsoft hosted the help on TechNet but about 6 months ago they ported it across to MSDN and now everything is all over the joint. I doubt you'll really run into many issues as the number of new cmdlets introduced in 5.1 isn't as many as say 3.0 (That was a huge leap). Feel free to post any questions, I love PowerShell and love spreading wisdom.
|
# ¿ Feb 7, 2017 11:16 |
|
Get-ADGroupMember is slow as balls and starts to go to poo poo if you've got groups with lots of members. It's quicker to pull the member attribute of the group object and then pass that to Get-ADObject:code:
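A sketch of that approach (the group name is a placeholder; the member attribute holds distinguished names, which Get-ADObject should accept from the pipeline):

```powershell
Import-Module -Name ActiveDirectory

(Get-ADGroup -Identity 'My Big Group' -Properties member).member |
    Get-ADObject -Properties sAMAccountName |
    Select-Object -Property Name, ObjectClass, sAMAccountName
```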
|
# ¿ Feb 9, 2017 03:43 |
|
Walked posted:Well worth learning. Seconded, modules are awesome and well worth learning if you use PowerShell frequently. Inspector_666 posted:Apparently all I had to do was add two lines to the existing script, rename it something that fits with the proper verb-noun style, save it as a .psm1 in the right place, and then import it the usual way. If you put your module into one of the locations referenced by the PSModulePath environment variable then you can take advantage of automatic module loading which negates the need to manually import the module: https://msdn.microsoft.com/en-us/library/dd878284(v=vs.85).aspx. Pile Of Garbage fucked around with this message at 17:07 on Feb 16, 2017 |
# ¿ Feb 16, 2017 17:03 |
|
Briantist posted:Yeah modules are the way to go. Ensure they are well-formed modules with proper paths and manifests and then put them in a path that's included in variable cheese-cube mentioned, that way you can import them by name only. Sure, being able to import modules by name instead of path is nice but I was more talking about the implicit importing feature introduced in PowerShell 3.0 which loads modules automatically meaning that you don't have to call Import-Module. As long as your module shows up in the output of Get-Module -ListAvailable it can be automatically imported. Outside of scripting this feature is especially useful if you do a lot of administration and whatnot via the CLI.
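For example, assuming a module sitting under one of the PSModulePath locations:

```powershell
# Check what's discoverable (and therefore auto-loadable):
Get-Module -ListAvailable

# Calling an exported command loads its module implicitly,
# no Import-Module required (PowerShell 3.0+):
Get-WidgetReport   # hypothetical cmdlet exported by a hypothetical MyTools module
```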
|
# ¿ Feb 16, 2017 19:30 |
|
Briantist posted:It's definitely useful in the CLI; I never rely on it in scripts. Agreed 100%. I always explicitly load modules in scripts and include exception handling to ensure that they do actually load.
|
# ¿ Feb 16, 2017 20:04 |
|
Briantist posted:Do you do a whole try {} catch {} around it? Wow OK so I'll sheepishly admit that I've never known about #Requires statements in PS however I'll definitely be using them from now on. It's true, you learn something new every day! Previously I've just been using try {} catch {} around Import-Module, primarily because I write scripts which are meant to run unattended so I need to catch exceptions and write them out to a log file before throwing them.
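Putting both approaches together (module name and log path are placeholders):

```powershell
#Requires -Version 4.0
#Requires -Modules ActiveDirectory

try {
    Import-Module -Name ActiveDirectory -ErrorAction Stop
}
catch {
    # Log the failure before re-throwing so unattended runs leave a trail
    Add-Content -Path 'C:\Logs\MyScript.log' -Value ('{0} FATAL: {1}' -f (Get-Date -Format 'o'), $Error[0])
    throw
}
```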
|
# ¿ Feb 16, 2017 20:21 |
|
Briantist posted:I guess the log file scenario is one reason, though these days I've set up automatic transcription through group policy so I tend to avoid writing logs manually (that really needs v5 to work well though). The majority of exceptions don't really provide much info to go on so I like to write $Error[0] plus some context to the log file. Regarding WMF 5.0 as I mentioned earlier in the thread there's a lot of stuff that has been deemed incompatible with WMF 5.0 (https://msdn.microsoft.com/en-us/powershell/wmf/5.0/productincompat) so you're pretty much stuck with 4.0 unless you're in a green-fields latest and greatest environment.
|
# ¿ Feb 16, 2017 21:11 |
|
anthonypants posted:I like being clear, and I think it's important, which is one of the reasons why I try to match capitalization and type out all the cmdlet names instead of using aliases. Using full cmdlet and parameter names will cost you nothing and will often save you debugging time. Edit: re this SMTP proxy address malarkey, pretty sure there's a native EMS cmdlet for managing addresses.
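For instance, compare the terse version with the spelled-out one; both do the same thing:

```powershell
# Alias soup: works, but opaque
gci . -r | ? Length -gt 1mb | % FullName

# Full cmdlet and parameter names: self-documenting
Get-ChildItem -Path . -Recurse |
    Where-Object -Property Length -GT -Value 1MB |
    Select-Object -ExpandProperty FullName
```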
|
# ¿ Mar 13, 2017 13:18 |
|
I had to Google it but I presume AHK = AutoHotKey? I wouldn't recommend using PowerShell to do simulated transactional stuff like you're already doing with AHK. I've never used Salesforce personally but it seems strange that you're using AHK with it for the purpose of automation. Surely they have an API that you can hook into? Regarding your last question, PowerShell is a fully-fledged scripting language so control structures like if/else are fully supported.
|
# ¿ Mar 23, 2017 14:24 |
|
AAAAA! Real Muenster posted:Hah, yeah, I should have spelled that out, it is AutoHotKey, sorry about that. I use it because many of the cases I work in Salesforce are repetitive and 1) I'm lazy 2) I was getting carpal tunnel manually entering up to two dozen clicks/interactions I do on each of the 50+ cases I have to work in a day. It is mostly just data entry into fields that either require input in each case, or the fields default to the wrong thing because we have not had a Salesforce admin in over 2 years. I taught myself AHK one weekend and now I am the highest case closer on my team and spend all this newfound free time doing Useful Things, like becoming the leading expert in the company about our really broken and lovely product (despite being a level 1 ), playing Ping Pong, and reading threads on SA. From the sound of it you're already smashing things by taking advantage of AHK so I reckon the next step is to learn a programming language which will allow you to interface with Salesforce directly using their APIs. As I said before I have zero experience with Salesforce however it looks like they have multiple APIs available. This means you could probably learn any language and then use that effectively with the product. PowerShell is an option here however it's not exactly designed for this kind of work and I suspect that other languages have libraries available which make things much easier. As to choosing a language I can't really comment as I pretty much only work with PS and .NET. Maybe someone else can point you in the right direction.
|
# ¿ Mar 23, 2017 14:46 |
|
Eschatos posted:I wrote a script! Inspired by a script someone else posted that inventories PCs by subnet, I wrote my own take on it that uses AD information instead. My first real script more than a few dozen lines, and definitely made for a great learning experience. To expand on what PBS has said, each PS job is spawned in a separate powershell.exe process which consumes 30-50MB of memory. You can very quickly consume all available memory on a system which will cause the calling PS instance to throw an exception. If you have a task which involves executing commands against a large number of remote systems and you want to run it in parallel then it is better to use remoting to run the commands on the remote systems themselves. If you do want to run the jobs locally then you'll have to implement a throttling routine which backs-off on spawning jobs until execution concurrency is below a certain threshold.
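A sketch of such a throttling routine (the threshold and the per-computer payload are illustrative):

```powershell
$maxConcurrent = 8

foreach ($computer in $computers) {
    # Back off until we're under the concurrency threshold
    while (@(Get-Job -State Running).Count -ge $maxConcurrent) {
        Start-Sleep -Milliseconds 500
    }
    Start-Job -ScriptBlock {
        param ($Name)
        Test-Connection -ComputerName $Name -Count 1 -Quiet  # placeholder workload
    } -ArgumentList $computer
}

Get-Job | Wait-Job | Receive-Job
```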
|
# ¿ Apr 8, 2017 11:19 |
|
Your assumptions are naive and I think you should rethink things and aim towards scalability if you want your script to be anything other than a pet project.
|
# ¿ Apr 8, 2017 14:05 |
|
Irritated Goat posted:Ok. Help me believe I'm not insane. I'm not 100% familiar with that module but I suspect that the Add-NTFSAccess cmdlet doesn't accept pipeline input (Which is lovely design...). You'll probably just have to use the -Path parameter with either the full-path as a string or use Get-Item: code:
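i.e. along these lines (parameter names per the NTFSSecurity module as I understand them; path and account are placeholders):

```powershell
# Full path as a string:
Add-NTFSAccess -Path 'D:\Shares\Stuff' -Account 'CONTOSO\Domain Users' -AccessRights Modify

# Or via Get-Item:
Add-NTFSAccess -Path (Get-Item -Path 'D:\Shares\Stuff').FullName -Account 'CONTOSO\Domain Users' -AccessRights Modify
```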
Pile Of Garbage fucked around with this message at 16:38 on Aug 29, 2017 |
# ¿ Aug 29, 2017 16:34 |
|
The Claptain posted:Are you running an elevated Powershell? Also you may not have appropriate permissions on objects, even as an administrator on the machine, so you may need to first take ownership and grant yourself appropriate permissions. This is a good point. Are you running the script in the context of a local user or a domain user? A local user would not be able to resolve domain objects like the Domain Users group. Also what version of PS are you using? Can you just use the native Get-Acl and Set-Acl cmdlets?
|
# ¿ Aug 29, 2017 18:00 |
|
To come back to the original question, if you need to allow Domain Users permission to modify the location then why don't you just copy the relevant files to somewhere like $env:PUBLIC or a similar location where users can write to by default? Relying on permission inheritance is always simpler than explicitly defining permissions. I'm assuming that you're attempting to copy something to local machines, correct?
|
# ¿ Aug 29, 2017 18:54 |
|
Just to expand on interactive confirmation prompts: a do/while loop is handy to ensure you get a desired response: code:
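e.g. re-prompt until one of the accepted answers comes back (requires PS 3.0+ for -notin):

```powershell
do {
    $response = Read-Host -Prompt 'Proceed? (Y/N)'
} while ($response -notin @('Y', 'N'))

if ($response -eq 'Y') {
    Write-Output 'Proceeding...'  # placeholder action
}
```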
|
# ¿ Aug 30, 2017 06:15 |
|
Anyone done XML validation against a schema in PowerShell? It looks somewhat straightforward using the .NET classes but if anyone has a working sample that would be great!
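From the MSDN doco it looks roughly like this (untested sketch; file paths are placeholders):

```powershell
$settings = New-Object -TypeName System.Xml.XmlReaderSettings
$settings.ValidationType = [System.Xml.ValidationType]::Schema
$null = $settings.Schemas.Add($null, 'C:\Schemas\example.xsd')
$settings.add_ValidationEventHandler({
    param ($sender, $eventArgs)
    Write-Warning -Message $eventArgs.Message
})

$reader = [System.Xml.XmlReader]::Create('C:\Data\example.xml', $settings)
try {
    while ($reader.Read()) { }  # reading the document triggers validation
}
finally {
    $reader.Close()
}
```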
|
# ¿ Sep 30, 2017 06:47 |
|
nielsm posted:I'm wondering if there are some idioms or syntaxes I'm missing, working with the regular MS ActiveDirectory module. I find it's easiest to compare arrays of common attribute value types with the -in / -notin comparison operators. Assuming you have two text-files with lists of users and groups something like this should work (The member AD attribute contains an array of all group members referenced by their distinguished name): code:
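A sketch of the sort of thing described (file and group names are placeholders):

```powershell
$userNames = Get-Content -Path '.\users.txt'
$memberDNs = (Get-ADGroup -Identity 'Target Group' -Properties member).member

# Users from the list that are not yet members of the group:
$notInGroup = Get-ADUser -Filter * |
    Where-Object { ($_.SamAccountName -in $userNames) -and ($_.DistinguishedName -notin $memberDNs) }
```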
The Claptain posted:I'm phone posting, so I can't check this, but you could probably use Compare-Object cmdlet on outputs of Get-ADUser and Get-ADGroupMember, which should return you a list of users that are not in a specified group. I've found that using Compare-Object can be difficult as differences in object member sets can throw it off. It's usually easier to just compare arrays of strings.
|
# ¿ Nov 15, 2017 03:24 |
|
Inspector_666 posted:Do this, it's the preferred method to ` from everything I found. As long as you do it with hash tables and not arrays. IMO you should always be using named parameters for cmdlets. It's far less ambiguous and less likely to break compared to using positional parameters.
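i.e. splatting with a hash table, which also keeps every parameter named:

```powershell
$params = @{
    Path        = 'C:\Temp'
    Filter      = '*.log'
    Recurse     = $true
    ErrorAction = 'Stop'
}
Get-ChildItem @params
```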
|
# ¿ Dec 30, 2017 07:27 |
|
|
The Iron Rose posted:Question guys, that's hopefully basic. Assuming the date in the "Last Found" column is well-formed you can use a filter like this: code:
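i.e. something like this, assuming the rows came in via Import-Csv (file name, column name and cutoff are examples):

```powershell
$rows = Import-Csv -Path '.\report.csv'
$cutoff = (Get-Date).AddDays(-30)

$rows | Where-Object { [DateTime]::Parse($_.'Last Found') -lt $cutoff }
```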
|
# ¿ Jan 15, 2018 18:19 |