A popular request the PowerShell team has received is to use Secure Shell protocol and Shell session (aka SSH) to interoperate between Windows and Linux – both Linux connecting to and managing Windows via SSH and, vice versa, Windows connecting to and managing Linux via SSH. Thus, the combination of PowerShell and SSH will deliver a robust and secure solution to automate and to remotely manage Linux and Windows systems. http://blogs.msdn.com/b/looking_for...-shell-ssh.aspx That's pretty sweet, actually.
|
|
# ? Jun 2, 2015 20:11 |
|
Hughmoris posted:You're the man! That solved my problem. Trying to fumble my way through automating some tasks at work and it's amazing how easy PowerShell makes it. The curly braces in Where-Object are a scriptblock, so you can actually put any expression in there that returns, or can be coerced to, a boolean (true/false) value. You can even make it multiline. Anyway, what I'm getting at is that what you learned there about using regex is not limited to Where-Object. -match is an operator, like -eq (equals), and you can use it anywhere an operator makes sense, like in an if statement: code:
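Since -match works anywhere an operator does, here's a quick sketch of the if-statement idea (the string and pattern are made up for illustration):

```powershell
$computerName = 'FILESRV-042'   # hypothetical value

# -match returns $true/$false and fills the automatic $Matches table on success
if ($computerName -match '(\d+)$') {
    "Trailing number is $($Matches[1])"
}
```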
high six posted:So I just finished Powershell in a Month of Lunches a few weeks ago and I think I have the basics of PS down pretty well. Where would someone move next book-wise? For background, I am a helpdesk monkey with some occasional admin stuff that I do. ConfusedUs posted:A popular request the PowerShell team has received is to use Secure Shell protocol and Shell session (aka SSH) to interoperate between Windows and Linux – both Linux connecting to and managing Windows via SSH and, vice versa, Windows connecting to and managing Linux via SSH. Thus, the combination of PowerShell and SSH will deliver a robust and secure solution to automate and to remotely manage Linux and Windows systems.
|
# ? Jun 2, 2015 23:17 |
|
Hughmoris posted:You're the man! That solved my problem. Trying to fumble my way through automating some tasks at work and it's amazing how easy PowerShell makes it. Just a heads up: remember that the Format-* cmdlets are for formatting and should only be used at the end of your command.
|
# ? Jun 3, 2015 01:41 |
|
Briantist posted:
Agreed. I didn't start getting better with PowerShell until I started using it for tasks at work. Oh hey, I need to get the amount of RAM installed on this list of 30 servers; how can I use PowerShell to get that information? Over the last year I've built up a decent collection of starter scripts and one-liners that I can modify for whatever I need. I mostly manage AD and users, so most of my scripts are for those types of tasks.
|
# ? Jun 3, 2015 18:31 |
|
skipdogg posted:Agreed. I didn't start getting better with PowerShell until I started using it for tasks at work. The Win32_OperatingSystem class in the Root\CIMv2 namespace has a TotalVisibleMemorySize property that looks like a good source of data for your need. Simply loop through each server with something like the pseudocode below and then do something clever/useful with the output of the WMI objects. code:
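As a hedged sketch of that loop (server names are placeholders); note that WMI reports TotalVisibleMemorySize in kilobytes:

```powershell
$servers = 'server01', 'server02', 'server03'   # hypothetical list of servers

foreach ($server in $servers) {
    $os = Get-WmiObject -Class Win32_OperatingSystem -Namespace 'Root\CIMv2' -ComputerName $server
    # TotalVisibleMemorySize is in KB, so dividing by 1MB yields GB
    [PSCustomObject]@{
        Server = $server
        RamGB  = [math]::Round($os.TotalVisibleMemorySize / 1MB, 2)
    }
}
```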
|
# ? Jun 3, 2015 19:51 |
|
Zaepho posted:The Win32_OperatingSystem class in the Root\CIMv2 namespace has a TotalVisibleMemorySize property that looks like a good source of data for your need.
|
# ? Jun 4, 2015 00:08 |
|
Briantist posted:I think that was a rhetorical question, like he was showing an example of a problem he had that he then solved with PowerShell. But I could definitely see someone saying "hey yeah, how would I do that?" so kudos! Not responding while instructing customers how to build a public cloud (aka public butt) is probably a better plan. In any case, it was a welcome diversion from watching cluster validations run.
|
# ? Jun 4, 2015 00:24 |
|
Zaepho posted:Not responding while instructing customers how to build a public cloud (aka public butt) is probably a better plan. In any case, it was a welcome diversion from watching cluster validations run. http://cloud-2-butt.tumblr.com/
|
# ? Jun 4, 2015 21:59 |
|
Over the last two years we started off with scripts to check if a service was running and restart it if it was stopped; nowadays we're installing .NET code directly to the GAC using Enterprise Services remotely across the network for some custom code products. The great part about PowerShell is that if you keep it all in the same general area where everyone can look at it, you just build and build on top of each other's work. PSM1s allow for a pretty impressive level of function documentation which you can pull up directly from the ISE and is fully searchable, which is really handy if you know someone wrote a function to do a complex task but can't remember what it's named. It also helps prevent you from writing potentially buggy functions, or troubleshooting something that someone else has already figured out. Microsoft lets you make PS modules, but we've found using functions directly from a psm1 file is a lot better.
|
# ? Jun 6, 2015 14:11 |
|
Hadlock posted:Over the last two years we started off with scripts to check if a service was running and restart it if it was stopped; nowadays we're installing .NET code directly to the GAC using Enterprise Services remotely across the network for some custom code products. The great part about PowerShell is that if you keep it all in the same general area where everyone can look at it, you just build and build on top of each other's work. PSM1s allow for a pretty impressive level of function documentation which you can pull up directly from the ISE and is fully searchable, which is really handy if you know someone wrote a function to do a complex task but can't remember what it's named. It also helps prevent you from writing potentially buggy functions, or troubleshooting something that someone else has already figured out. Microsoft lets you make PS modules, but we've found using functions directly from a psm1 file is a lot better. Also, there is a unit testing framework for PowerShell called Pester, if you're into that kind of thing. I've also been trying to get my co-workers to start putting their scripts in Git. I make sure all of my scripts going forward are in a repo and I push them to our GitLab; it's a great thing to do. If you don't have a local Git server, Bitbucket offers free private repos for a team of up to 5 people. They also give unlimited everything to educational institutions (you add a .edu email address to your account and it happens automatically). One question, though: what is the distinction you're making between a "ps module" vs. "executing functions directly from a psm1 file"? A psm1 is a script module, which you would import into another script. The ISE doesn't even let you directly execute a psm1. Just curious about the terminology and your workflow.
|
# ? Jun 6, 2015 18:17 |
|
IIRC, I maintained all of the scripts for my last Windows-centric job inside the module which had the manifest (.psd1), which contained pointers to a bunch of script files (.psm1). Also had an Update-Manifest function which, well, updated the manifest. I think you had to Copy-Item .\yourscript.ps1 -Destination /remote/path/to/module and then just run Update-Manifest -Module "module name". If you're struggling to get your coworkers to use something like Git, I've found it's usually a lot easier to get people to use something a little simpler like CVS or SVN. Getting people to use version control in general is kind of a hassle, but having to explain working dir vs. index vs. HEAD to people who don't historically use version control makes it even worse. But with just PowerShell stuff, I had absolutely no problems with a remote directory with the modules+manifests, subdirectories for the content of each module, and a script that updates the manifests dynamically. Instead of running version control we had automatic VM snapshots for the hosted machine. Definitely not best practice, but it seemed a little unnecessary to configure version control for a bunch of sysadmin scripts. The only gotchas were that I needed to modify the new-user script (for admins only, anyway) to add the remote module path to $env:PSModulePath and, naturally, all the scripts and tools you write need to be written with the understanding that they are to be run against any target, from any machine, with any user (that has the appropriate permissions).
|
# ? Jun 8, 2015 22:51 |
|
Briantist posted:
We execute about 95%+ of our PowerShell via Tidal scheduling software (cmd.exe /c powershell path/to/script.ps1) or a BMC product I'm not particularly fond of. One of the opening lines of each script says code:
code:
code:
|
# ? Jun 9, 2015 00:23 |
|
I've got a little script whose job is to locate and open an Excel file. The problem is that my computer is somewhat arsed, and opening Excel files directly (e.g., via a double click) often triggers an error to the effect of "An error occurred in sending the command to the application." This results in Excel opening, but failing to open the file. When I double click the file again, it opens in the already-opened Excel. My PS script is hitting the same error: code:
|
# ? Jun 9, 2015 00:46 |
|
Try wrapping each major block of code in a try/catch statement: code:
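A minimal sketch of that pattern around the Excel-opening step (the path is hypothetical):

```powershell
try {
    # Invoke-Item hands the file to the default application, like a double click
    Invoke-Item 'C:\reports\data.xlsx' -ErrorAction Stop
}
catch {
    # $_ is the ErrorRecord; Format-List -Force dumps the full exception detail
    Write-Warning "Failed to open the file: $($_.Exception.Message)"
    $_ | Format-List -Force
}
```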
http://blogs.technet.com/b/heyscriptingguy/archive/2010/03/11/hey-scripting-guy-march-11-2010.aspx http://www.leaseweblabs.com/2014/01/print-full-exception-powershell-trycatch-block-using-format-list/ Hadlock fucked around with this message at 01:06 on Jun 9, 2015 |
# ? Jun 9, 2015 01:02 |
|
I think if you're interfacing with Excel you need to create a COM object for the Excel instance and then interact with the object. PowerShell has deep hooks into Office using COM objects; you can parse Outlook inboxes that way. http://www.lazywinadmin.com/2014/03/powershell-read-excel-file-using-com.html
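A rough sketch of that COM approach (the path is hypothetical); the cleanup at the end matters, because orphaned Excel COM instances tend to linger as background EXCEL.EXE processes:

```powershell
$excel = New-Object -ComObject Excel.Application
$excel.Visible = $true

$workbook = $excel.Workbooks.Open('C:\reports\data.xlsx')   # hypothetical path
# ...read worksheets, ranges, etc. via $workbook here...

# Tear down the COM instance so a zombie EXCEL.EXE doesn't hang around
$workbook.Close($false)
$excel.Quit()
[void][System.Runtime.InteropServices.Marshal]::ReleaseComObject($excel)
```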
|
# ? Jun 9, 2015 01:09 |
|
Hey, thanks. I'm not really trying to do anything with the file other than opening it, so the COM object probably isn't required. But the try/catch block looks like the right track - will give it a shot tomorrow.
|
# ? Jun 9, 2015 02:18 |
|
Hadlock posted:
I believe the benefit of creating and maintaining all the components of an actual 'module', instead of pointing the Import-Module cmdlet towards a bunch of script files, is that you could do "Import-Module Hadlock" and gain access to all the content in all of your psm1s. Additionally (IIRC), provided that the module manifest is in your $env:PSModulePath, you don't even need to have the Import-Module call in your scripts, because PowerShell will auto-search any appropriate modules when you attempt to call a cmdlet it isn't familiar with. You could just call saladtime -param -param or whatever, sort of like how you can use Get-ADUser and similar without loading the Active Directory module every single time.
|
# ? Jun 9, 2015 17:28 |
|
Reiz posted:I believe the benefit for creating and maintaining all the components of an actual 'module' instead of pointing the import-module cmdlet towards a bunch of script files is that you could do "Import-Module Hadlock" and gain access to all the content in all of your psm1s. Additionally (IIRC), providing that the module manifest is in your $env:PsModulePath, you don't even need to have the 'import module' in your scripts because it will auto-search any appropriate modules when you attempt to call a cmdlet it isn't familiar with. You could just call saladtime -param -param or whatever, sort of like how you can use Get-ADUser and similar without loading the active directory module every single time. There are some quirks to module auto-loading. If you're using PowerShell interactively (as a shell), then sure just try to use the cmdlet, but if you're writing a script you should explicitly import the module. It doesn't hurt, and you can also error out using -ErrorAction Stop which is helpful.
|
# ? Jun 9, 2015 20:52 |
|
We maintain about 200 servers on a daily basis; managing which modules are loaded on which machine would be a nightmare. Doing Import-Module \\networkpath\module.psm1 at the top of every script (and then giving that path wide read rights) is a lot easier. This has worked really well for us over the last year as we've been expanding our scripting libraries, which measure about 3000 lines of code all told. You can just run the script on the remote machine using Invoke-Command and know it will always work and is always up to date. Our BMC and Tidal products allow us to pass in special environment variables or just pass in variables as PowerShell script parameters, which is great. We have two main psm1 function libraries: one for installing loose code our programmers dream up, and another for administrative functions like password changes, service manipulation, the F5 load balancer, etc. For error handling we also have at the top of every script code:
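A guess at what such an opening block might look like (the UNC paths and module names are made up):

```powershell
# Fail fast instead of limping along after an error
$ErrorActionPreference = 'Stop'

# Shared function libraries pulled straight from the network share
Import-Module '\\fileserver\scripts\InstallFunctions.psm1'   # hypothetical paths
Import-Module '\\fileserver\scripts\AdminFunctions.psm1'
```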
Hadlock fucked around with this message at 00:42 on Jun 10, 2015 |
# ? Jun 10, 2015 00:39 |
|
Briantist posted:There are some quirks to module auto-loading. If you're using PowerShell interactively (as a shell), then sure, just try to use the cmdlet, but if you're writing a script you should explicitly import the module. It doesn't hurt, and you can also error out using -ErrorAction Stop, which is helpful. Yeah, I read about this quirk a lot while we were coming up with the whole situation but, to be honest, I never actually ran into any problems with it. But you are right in that generally most of our tools were for interactive shell usage by an admin; we didn't have a ton of scripts running in the background all the time, and the ones we did have were generally hard-coded. I.e., instead of having a scheduled task for a local script containing "Import-Module validate-configfile.psm1; Validate-ConfigFile -path/to/file", we have a script file sitting on a server and the scheduled task is just powershell.exe /unc/path/to/script/file. quote:We maintain about 200 servers on a daily basis, managing which modules are loaded on which machine would be a nightmare; doing Import-Module \\networkpath\module.psm1 at the top of every script (and then giving that path wide read-rights) is a lot easier. Instead of pulling a psm1 file individually with Import-Module, you are pulling a psd1 file, which is essentially a wrapper for multiple .psm1 files. You're still, in effect, "pulling" from the UNC path -- you aren't actually physically installing the module on each server.
You get a bunch of added functionality with the manifest, like the ability to list all of the cmdlets/functions in your module and the ability to set a module-scope $ErrorActionPreference (and other variables), so you don't need to set $ErrorActionPreference at the top of all of your scripts. You can require specific versions of the .NET Framework, require that specific .dlls are registered before importing the module, or require (and, with some effort, dynamically adjust your scripts based on) the version of PowerShell that the machine running the script has installed. Granted, most of these features are better geared towards interactive tool writing than they are for background scripts, and you clearly have a setup that works for you, which is awesome. However, you might want to take a look at the MSDN page for module manifests anyway, just so you are aware that they are a thing, because they might solve some problems for you in the future: link.
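For reference, a manifest along those lines could be generated with New-ModuleManifest; every name and version below is hypothetical:

```powershell
New-ModuleManifest -Path '\\fileserver\scripts\OpsTools\OpsTools.psd1' `
    -NestedModules 'InstallFunctions.psm1', 'AdminFunctions.psm1' `
    -PowerShellVersion '3.0' `
    -DotNetFrameworkVersion '4.0' `
    -FunctionsToExport '*'
```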
|
# ? Jun 10, 2015 15:23 |
|
Ok, I posted earlier but didn't get around to posting more code. That's ok; today is another day and I have another thing I'm trying to do with PowerShell. So: parsing security logs, the Windows event log. We start with an export of the log, which looks like this: code:
A line such as code:
code:
This is where things get sticky. code:
code:
How would you go about doing this? Tony Montana fucked around with this message at 07:06 on Jun 11, 2015 |
# ? Jun 11, 2015 07:03 |
|
Is parsing that data from the exported log a hard requirement, or could you pull the audit logs on the fly (this is easy to do with powershell)?
|
# ? Jun 11, 2015 10:50 |
|
I'm trying to do 2 things; both aren't working for me. 1st, I want to do something simple: I want to enter a sAMAccountName and get the extensionAttribute1 code:
The 2nd problem I'm having: I want to automate creating an AD user as much as possible. I plan on using the job title as the field for copying the rights it needs from a coworker, and the department to place it in the right OU. I think they are pretty easy to solve, yet I'm stumbling with this. Any ideas?
|
# ? Jun 11, 2015 13:45 |
|
Bonfire Lit posted:Is parsing that data from the exported log a hard requirement, or could you pull the audit logs on the fly (this is easy to do with powershell)? Using Get-WinEvent or something? You can, sure. You don't have to use the export.
|
# ? Jun 11, 2015 13:46 |
|
Sefal posted:i'm trying to do 2 things, both aren't working for me. Why are you using the Quest AD cmdlets instead of Microsoft's? I would do it with the standard AD cmdlets like this: code:
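Presumably something along these lines; note that extensionAttribute1 isn't returned by default, so it has to be requested via -Properties (the identity is a placeholder):

```powershell
# Extended attributes must be asked for explicitly with -Properties
Get-ADUser -Identity 'jdoe' -Properties extensionAttribute1 |
    Select-Object SamAccountName, extensionAttribute1
```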
|
# ? Jun 11, 2015 16:16 |
|
Tony Montana posted:Using Get-WinEvent or something? You can, sure. You don't have to use the export. I'd use an XPath filter for Get-WinEvent: code:
code:
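A sketch of what such a filter might look like for logon events (event ID 4624 in the Security log); the property index assumes the standard 4624 event layout, where index 5 is TargetUserName:

```powershell
Get-WinEvent -LogName Security -FilterXPath '*[System[EventID=4624]]' -MaxEvents 50 |
    ForEach-Object {
        [PSCustomObject]@{
            Time = $_.TimeCreated
            User = $_.Properties[5].Value   # TargetUserName in a 4624 event
        }
    }
```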
|
# ? Jun 12, 2015 00:14 |
|
You can export the Windows event logs as XML, right? PowerShell is great at parsing XML, JSON, etc. Don't lose all your hair trying to pull the data out of an unformatted text file.
|
# ? Jun 12, 2015 01:22 |
|
Bonfire Lit posted:XPath filters Ah yes, XPath filters. I was looking at those before; the syntax looked janky and it scared me off. I'm an AD guy, I know LDAP query syntax really well and the SQL equivalents, so I thought 'how hard can XPath be?'. Yeah, well, if you're going to put queries straight into the Event Viewer there are apparently hidden characters (like carriage returns) that need to be found and included and blah blah, gently caress that. BUT, it does look like this is really the way to do it with the native tools, not only using it in PowerShell but in the Event Viewer directly. Thanks for your examples, I'm going to give them a try. Hadlock posted:You can export the windows event logs as XML right? Powershell is great at parsing XML, json, etc. Don't lose all your hair trying to pull the data out of an unformatted text file. Yeah sure, but you can see the parsing I'm trying to do; how would it be different if the format was XML? You've still gotta do a find based on a string, capture lines around it, and then (hopefully) delete them from the source. We can't go the other way, because we don't know the names of the users in the log, so we can't specify them and leave the rest.
|
# ? Jun 12, 2015 03:14 |
|
Bonfire Lit posted:I'd use an XPath filter for Get-WinEvent: No, wait.. this is amazing. Thank you kindly! Wow.. been Powershelling pretty hard for a while now but this blew my mind. Thanks again! edit: I called my manager over and told him 'this is going to blast your tits off' and ran the script on a production server. His tits took flight. edit: How would you modify this to show you a single instance of each user? Assume we only want to see the most recent logon for each user.. how would you do it? Tony Montana fucked around with this message at 06:51 on Jun 12, 2015 |
# ? Jun 12, 2015 05:02 |
|
Briantist posted:Why are you using the Quest AD cmdlets instead of Microsoft's? I'm using the Quest AD cmdlets because Get-ADUser gives me an error: "there are no active directory web services running..." code:
but I don't know what commands to use to make the created account flow into the right OU based on department and copy the rights through job title. If someone could point me in the right direction... This is the script I used to generate a SAM account name; now I want to extend it to create the user and set the correct rights.
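A rough, hedged sketch of that flow with the Microsoft AD cmdlets (the OU paths, account names, and department map are all invented for illustration):

```powershell
# Hypothetical department-to-OU map
$deptToOU = @{
    'Sales' = 'OU=Sales,DC=contoso,DC=com'
    'IT'    = 'OU=IT,DC=contoso,DC=com'
}

# Use a coworker with the same job title as the group-membership template
$template = Get-ADUser -Identity 'coworker1' -Properties MemberOf

# Create the account disabled, in the OU matching its department
New-ADUser -Name 'New User' -SamAccountName 'nuser' `
    -Path $deptToOU['Sales'] -Enabled $false

# Copy the template user's group memberships over
$template.MemberOf | ForEach-Object {
    Add-ADGroupMember -Identity $_ -Members 'nuser'
}
```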
|
# ? Jun 12, 2015 11:25 |
|
Sefal posted:I'm using the Quest AD cmdlets because the get-aduser gives me an error "there are no active directory webservices running...." You can also install Active Directory Web Services on any of the downlevel DCs so that the cmdlets will work against any of them: http://blogs.technet.com/b/ashleymc...ontrollers.aspx
|
# ? Jun 12, 2015 15:52 |
|
Solved a problem in about 10 minutes yesterday. We've been doing an AD restructure at work, which has meant a lot of the old computer objects have been deleted, and we're starting over with laptop numbering, but the numbers aren't consecutive and the laptops are in different OUs. I had been running Get-ADComputer -Filter {Name -like "<CompanyName>-MOB-LT*"} | select Name | sort Name to grab the list and manually find the first one to assign out. I had been avoiding trying to get that automatically, as I thought it'd be too complex, mainly because I was thinking the wrong way round: thinking about getting the list of active computers and filtering that down, rather than checking all potential computer names within a range of numbers (001 to 999) and using a break after an error is encountered.
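One way to sketch that "walk the number range and break at the first free name" idea (the company prefix is a placeholder):

```powershell
# Names already in AD under this hypothetical prefix
$existing = (Get-ADComputer -Filter 'Name -like "CONTOSO-MOB-LT*"').Name

foreach ($i in 1..999) {
    $candidate = 'CONTOSO-MOB-LT{0:D3}' -f $i   # zero-pads to LT001, LT002, ...
    if ($candidate -notin $existing) {
        "First free name: $candidate"
        break
    }
}
```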
|
# ? Jun 13, 2015 09:10 |
|
Looking for a bit of help with a script here. I want to output a repadmin /replsummary into a HTML file and e-mail it to a group. I found a great script which outputs the command into a HTM file with colour coding for the errors, however I cannot get the HTM file to send as the body of an e-mail. Currently the script looks like this: code:
code:
Can anyone suggest a way to get this HTM file into the body of an e-mail? I just want to set it up as a scheduled task to e-mail a team here with AD replication status once a week. Edit: Bolded the parts I am having trouble with. Dravs fucked around with this message at 16:33 on Jun 16, 2015 |
# ? Jun 15, 2015 12:51 |
|
It's been a while since I've messed around with Send-MailMessage, but your problem looks to be that $body isn't what you think it is. Try doing $body | gm. You might need to do something like $body.Content, or change it to $body = Get-Content .\file (with no parentheses).
|
# ? Jun 15, 2015 14:28 |
|
Thanks, that put me on the right track. I have fixed it now. The line: code:
code:
Anyone feel free to copy this, it is pretty useful if you are lacking any monitoring on your AD environment. Dravs fucked around with this message at 10:20 on Jun 16, 2015 |
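The general shape of that fix, for both older and newer PowerShell (the file name is a placeholder):

```powershell
# PowerShell 2.0: Get-Content returns an array of lines; Out-String joins them
$body = Get-Content '.\replsummary.htm' | Out-String

# PowerShell 3.0+: read the whole file as a single string directly
$body = Get-Content '.\replsummary.htm' -Raw
```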
# ? Jun 15, 2015 14:58 |
|
Additionally, putting this script into a scheduled task does not work, because Scheduled Tasks is apparently stupid. So I had to also make a batch file to launch PowerShell and run the script, rather than letting Scheduled Tasks try and do it (which would return a 0x1 error every time). The batch file looks a bit like this: code:
Dravs fucked around with this message at 10:20 on Jun 16, 2015 |
# ? Jun 15, 2015 16:36 |
|
Dravs posted:Thanks, that put me on the right track. Dravs posted:Additionally, putting this script into a scheduled task does not work because scheduled tasks is apparently stupid. So I had to also make a batch file to launch powershell and run the script rather than letting scheduled tasks try and do it (which would return a 0x1 error everytime) Also, please use code tags instead of quote tags for your code. If we try to quote your post, the quote tags disappear, but the code tags would be included (I had to copy paste to quote them).
|
# ? Jun 15, 2015 20:20 |
|
Briantist posted:Get-Content reads a file line by line. In PowerShell 3+, you can use Get-Content -Raw to read the whole file as a string as well. Thanks for that link, I will look at the other switches for execution policy. I didn't even know code tags existed, I have gone back and changed them all.
|
# ? Jun 16, 2015 10:22 |
|
We're seeing a very strange problem with using winforms in PowerShell. Basically, it works fine in the console host. If we run it in ISE, it also runs fine, but after some random amount of time, ISE will completely lock up. See our StackOverflow post for details (I've added a bounty to it if any of you have ideas and want to answer): http://stackoverflow.com/q/30808084/3905079
|
# ? Jun 18, 2015 19:01 |
|
I need to do some remote admin on an AWS instance using PowerShell, but the machine I need to do it from can only access the web through a proxy server. Assuming the AWS server is configured correctly, am I correct in thinking that all I need to do is specify the proxy details with New-PSSessionOption and feed that into New-PSSession?
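That's the right shape, as far as I can tell. A sketch, with a placeholder hostname; the proxy flag depends on how your proxy is configured:

```powershell
# Use the machine's IE proxy settings; WinHttpConfig or AutoDetect are alternatives
$opt = New-PSSessionOption -ProxyAccessType IEConfig

$session = New-PSSession -ComputerName 'ec2-host.example.com' `
    -UseSSL -SessionOption $opt -Credential (Get-Credential)
```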
|
# ? Jun 30, 2015 22:59 |