mystes
May 31, 2006

nielsm posted:

(Also, fun fact: A base installation of the .NET framework includes a fully functioning C# compiler, C:\Windows\Microsoft.NET\Framework\version\csc.exe. Anyone who can create files and run arbitrary programs can use that to compile their own code and do anything PS could be used for.)
Lol, I edited my post to mention csc right before you posted this :).

I think it's pretty easy to just delete csc.exe, though, so maybe don't mention that if you think they might do that.


anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
Realistically, unless you hold a decision-making position in your organization there's not going to be any argument you can make that will allow you to counteract the security team, and if you had any power you wouldn't be asking about it in here. They're probably not blocking PowerShell Core, so that might work for you.

mystes
May 31, 2006

Realistically just work around it and be annoyed at how inconvenient it makes things for no reason.

Zaepho
Oct 31, 2013

nielsm posted:

This. If PowerShell lets someone do a thing they should not have permissions to do, it's not PowerShell that's at fault. The permissions on the affected thing were set up wrong to begin with.

(Also, fun fact: A base installation of the .NET framework includes a fully functioning C# compiler, C:\Windows\Microsoft.NET\Framework\version\csc.exe. Anyone who can create files and run arbitrary programs can use that to compile their own code and do anything PS could be used for.)

The bonus being that you can compile a C# executable that just contains and executes some PowerShell code.
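To make the point concrete: PowerShell's own Add-Type hands inline C# source to the same .NET compiler infrastructure, so "blocking PowerShell" does nothing to stop arbitrary compiled code. This is just a sketch; the class and method names are made up for the demo.

```powershell
# Compile a trivial C# type on the fly and call it from PowerShell.
Add-Type -TypeDefinition @"
public static class Demo
{
    public static int AddOne(int x) { return x + 1; }
}
"@
[Demo]::AddOne(41)   # returns 42
```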

Collateral Damage
Jun 13, 2009

Zaepho posted:

they're only making life more difficult rather than more secure.
To be fair this is like 99% of all "IT Security" people.

Dirt Road Junglist
Oct 8, 2010

We will be cruel
And through our cruelty
They will know who we are

anthonypants posted:

Realistically, unless you hold a decision-making position in your organization there's not going to be any argument you can make that will allow you to counteract the security team, and if you had any power you wouldn't be asking about it in here. They're probably not blocking PowerShell Core, so that might work for you.

I do hold a decision making role, actually, which is why I'm gathering info. I'm not trying to unblock Powershell for myself, I'm trying to get the whole organization to pull their head out of their rear end so my team can continue patching Windows without users bitching that they're getting weird errors anytime I push poo poo out.

I think their major damage is that they can't track/log when arbitrary code gets run, but in my experience, most things called by Powershell end up in the Event Log anyway. We have proactive security and heuristic scanners. Risk can be mitigated.

mystes
May 31, 2006

Dirt Road Junglist posted:

I do hold a decision making role, actually, which is why I'm gathering info. I'm not trying to unblock Powershell for myself, I'm trying to get the whole organization to pull their head out of their rear end so my team can continue patching Windows without users bitching that they're getting weird errors anytime I push poo poo out.

I think their major damage is that they can't track/log when arbitrary code gets run, but in my experience, most things called by Powershell end up in the Event Log anyway. We have proactive security and heuristic scanners. Risk can be mitigated.
But what's the threat model? Individual users who are knowledgeable enough to set their own executionpolicy to allow execution of powershell scripts are going to accidentally open ps1 files attached to emails? Malware is going to call powershell from the command line without bothering to add -executionpolicy bypass? You want to prevent users from executing powershell code but only in noninteractive sessions and only when they don't use -executionpolicy bypass?

Whatever your company is afraid of, "blocking" powershell through group policies is probably not actually preventing it, so your options are either 1) go further to actually prevent whatever you're trying to prevent, 2) give up and unblock powershell, or 3) pat yourselves on the back for pointlessly "blocking" powershell like you're doing now.

mystes fucked around with this message at 16:56 on May 22, 2018

Dirt Road Junglist
Oct 8, 2010

We will be cruel
And through our cruelty
They will know who we are

mystes posted:

But what's the threat model? Individual users who are knowledgeable enough to set their own executionpolicy to allow execution of powershell scripts are going to accidentally open ps1 files attached to emails? Malware is going to call powershell from the command line without bothering to add -executionpolicy bypass? You want to prevent users from executing powershell code but only in noninteractive sessions and only when they don't use -executionpolicy bypass?

Whatever your company is afraid of, "blocking" powershell through group policies is probably not actually preventing it, so your options are either 1) go further to actually prevent whatever you're trying to prevent, 2) give up and unblock powershell, or 3) pat yourselves on the back for pointlessly "blocking" powershell like you're doing now.

That's exactly my argument. They can't point to any specific threat vector, no matter how hard we hammer on them to do so. And then there's poo poo like Powershell Empire that can run without using the internal executable...like, what are you protecting us from here?

The worst part is that we're using an application whitelisting solution that hard-blocks powershell.exe from running. It's a total shitshow.

mystes
May 31, 2006

Dirt Road Junglist posted:

The worst part is that we're using an application whitelisting solution that hard-blocks powershell.exe from running. It's a total shitshow.
Oh, I had jumped to the conclusion that you were just setting the executionpolicy; in that case it might theoretically have some effect as long as the whitelist is also blocking other means of executing powershell and other means of executing arbitrary code. That might make it harder to argue that it's pointless.

PierreTheMime
Dec 9, 2004

Hero of hormagaunts everywhere!
Buglord
Is there any method to replace the first character of the first line of a large file (2GB+) without reading in all the content? In a Linux shell, sed could do it, but I don't know of an equivalent here.

mystes
May 31, 2006

PierreTheMime posted:

Is there any method to replace the first character of the first line of a large file (2GB+) without reading in all the content? In a Linux shell, sed could do it, but I don't know of an equivalent here.
Yes but you're probably going to have to use .net stuff.

mystes
May 31, 2006

Assuming it's just ASCII or binary or something you should just be able to do something like this:

code:
$filename = 'file.txt'
$newfirstcharacter = 'x'
$w = New-Object System.IO.BinaryWriter ([System.IO.File]::Open($filename, 'Open'))
$w.Write([char]$newfirstcharacter)
$w.Close()
If it's unicode or something it's going to be slightly more annoying.
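For the "unicode" case, one of the annoyances is a UTF-8 byte order mark: the first character is no longer the first byte. A hypothetical variant that seeks past a BOM before overwriting (the file name and contents here are invented for the demo):

```powershell
# Create a small sample file so the demo is self-contained.
$filename = [System.IO.Path]::GetTempFileName()
[System.IO.File]::WriteAllBytes($filename, [System.Text.Encoding]::ASCII.GetBytes('abc'))

$fs = [System.IO.File]::Open($filename, 'Open')
$bom = New-Object byte[] 3
$null = $fs.Read($bom, 0, 3)
if (-not ($bom[0] -eq 0xEF -and $bom[1] -eq 0xBB -and $bom[2] -eq 0xBF)) {
    $fs.Position = 0   # no BOM, so the first character really is the first byte
}
$fs.WriteByte([byte][char]'x')   # overwrite one byte in place
$fs.Close()

[System.IO.File]::ReadAllText($filename)   # xbc
```

This still assumes the replacement character fits in one byte; multi-byte replacements change the file length and force a rewrite.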

PierreTheMime
Dec 9, 2004

Hero of hormagaunts everywhere!
Buglord

mystes posted:

Assuming it's just ASCII or binary or something you should just be able to do something like this:

code:
$filename = 'file.txt'
$newfirstcharacter = 'x'
$w = New-Object System.IO.BinaryWriter ([System.IO.File]::Open($filename, 'Open'))
$w.Write([char]$newfirstcharacter)
$w.Close()
If it's unicode or something it's going to be slightly more annoying.

Perfect, I’ll try that. It’s csv data that someone apparently decided needed extra special characters*. I’m getting them to clean up their sloppy code but this is a good interim fix.

*Not byte order marks, just random $s and #s.

Potato Salad
Oct 23, 2014

nobody cares


:five: this thread. Every time someone posts a solution I learn something helpful.

Python doesn't open an entire file when preappending text, does it?

nielsm
Jun 1, 2009



I'm not sure any filesystem supports prepending data to a file without rewriting the entire file.

What the above code does is turn this:
1234567890
into this:
x234567890

Same length, just the first byte replaced.

When you say "preappending" I think of getting this result instead:
x1234567890

That always requires reading the entire file. You won't need to read it all into memory at once, but you will need to read it all off disk and write it all back. (In Unix you can open() the original file, unlink() the name from the inode while keeping the file open, open() the filename again creating a new file, write the data to prepend, then read blocks of the original file and write those to the new file until EOF. When done, close both original files, and the original file really disappears because there's no more links to the inode in form of either names or file handles. I'm not sure if Windows supports something exactly equivalent.)
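A PowerShell sketch of that prepend-by-rewrite: stream the original file into a temp file behind the new bytes, then swap the temp file into place, so the whole file never has to fit in memory. The paths and contents are invented for the demo.

```powershell
# Sample input so the sketch is self-contained.
$path = Join-Path ([System.IO.Path]::GetTempPath()) 'big.txt'
Set-Content -Path $path -Value '1234567890' -NoNewline

$tmp = "$path.tmp"
$in  = [System.IO.File]::OpenRead($path)
$out = [System.IO.File]::Create($tmp)
$prefix = [System.Text.Encoding]::ASCII.GetBytes('x')
$out.Write($prefix, 0, $prefix.Length)   # write the new leading byte(s) first
$in.CopyTo($out)                         # then stream the entire original content
$in.Close(); $out.Close()
Move-Item -Path $tmp -Destination $path -Force

Get-Content -Path $path -Raw   # x1234567890
```

Not as atomic as the Unix unlink() trick, but the Move-Item at the end at least keeps the original intact until the copy has finished.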

scuz
Aug 29, 2003

You can't be angry ALL the time!




Fun Shoe
Real simple one for y'all cuz I'm super new to PowerShell.

I need a script that I can have a scheduled task run that will check to see if a service is running (windows firewall in this case) and send an email if it isn't. I've tested the "send-email" portion which works just fine, I know how to check a service, but I have no idea how to put the two together.

Inspector_666
Oct 7, 2003

benny with the good hair

scuz posted:

Real simple one for y'all cuz I'm super new to PowerShell.

I need a script that I can have a scheduled task run that will check to see if a service is running (windows firewall in this case) and send an email if it isn't. I've tested the "send-email" portion which works just fine, I know how to check a service, but I have no idea how to put the two together.

I assume you're using Get-Service to check the status? If so, you'd just need an if block that checked (Get-Service MpsSvc).status to see if it was "Running" or something else.

code:
$FirewallStatus = (Get-Service MpsSvc).status

if ($FirewallStatus -ne "Running") {
     <SEND EMAIL CODE GOES HERE>
}
I would probably put that all in a try catch block too, just in case the service got real messed up and it caused Get-Service to throw an error.

code:
$FirewallStatus = (Get-Service MpsSvc).status

try {
    if ($FirewallStatus -ne "Running") {
        <SEND EMAIL CODE GOES HERE>
    }
}
catch {
    <SEND EMAIL GOES HERE TOO MAYBE WITH EXTRA INFO THAT THE SERVICE CAN'T BE FOUND>
}

Inspector_666 fucked around with this message at 17:22 on Jun 20, 2018

scuz
Aug 29, 2003

You can't be angry ALL the time!




Fun Shoe
Awesome! Thanks, friend. I sure am using "get-service" so this looks great, I'll report back :glomp:

PBS
Sep 21, 2015

Inspector_666 posted:

I assume you're using Get-Service to check the status? If so, you'd just need an if block that checked (Get-Service MpsSvc).status to see if it was "Running" or something else.

code:
$FirewallStatus = (Get-Service MpsSvc).status

if ($FirewallStatus -ne "Running) {
     <SEND EMAIL CODE GOES HERE>
}
I would probably put that all in a try catch block too, just in case the service got real messed up and it caused Get-Service to throw an error.

code:
$FirewallStatus = (Get-Service MpsSvc).status

try {
    if ($FirewallStatus -ne "Running) {
        <SEND EMAIL CODE GOES HERE>
    }
}
catch {
    <SEND EMAIL GOES HERE TOO MAYBE WITH EXTRA INFO THAT THE SERVICE CAN'T BE FOUND>
}

You're missing a closing " on line 4.

Inspector_666
Oct 7, 2003

benny with the good hair

PBS posted:

You're missing a closing " on line 4.

Aha, I see my clever check to make sure people aren't just blindly copy-pasting code into their terminals worked!


Whoops, fixed it.

sloshmonger
Mar 21, 2013
Has anyone started using Powershell Core? I'm wondering which use cases it handles better than Windows Powershell at this stage.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

sloshmonger posted:

Has anyone started using Powershell Core? I'm wondering which use cases it handles better than Windows Powershell at this stage.
It's the one you can install on Linux and the one you can submit bugs for on their GitHub but I think for the most part it's supposed to be functionally identical. Since it's PowerShell 6, it'll probably replace the PowerShell from WMF 5.1 at some point.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
Does anyone know of a way to define and validate a schema for associative arrays? I've been defining application settings as associative arrays for configuration-as-code-ness during application deployments, then loading them at runtime.

Ex:
DevSettings.ps1
code:
@{
    WebServer = @{
        AppPoolName = 'FooAppDEV'
        InstallationPath = 'C:\inetpub\wwwroot\FooAppDEV'
        WebConfigValues = @{
             '{PlaceHolder}' = 'value'
        }
        #etc
    }
}
However, as it grows, it's becoming easy to define something incorrectly or miss a required element, and you don't find out until runtime. I could write a whole validation module (with accompanying Pester tests for correctness), but it seems like I'm reinventing the wheel here.

I'm considering just moving the entire thing over to JSON and doing a ConvertFrom-Json. That makes schema creation and validation easy with Newtonsoft's JSON libraries, and the scripts that actually use the values wouldn't have to change at all.
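A minimal sketch of that ConvertFrom-Json route, with a hand-rolled check for required keys rather than a full schema validator; the key names mirror the example above but are purely illustrative.

```powershell
# Parse the settings JSON into objects (WebConfigValues deliberately omitted).
$settings = @'
{ "WebServer": { "AppPoolName": "FooAppDEV", "InstallationPath": "C:\\inetpub\\wwwroot\\FooAppDEV" } }
'@ | ConvertFrom-Json

# Report any required elements missing from the WebServer block.
$required = 'AppPoolName', 'InstallationPath', 'WebConfigValues'
$missing  = $required | Where-Object { -not $settings.WebServer.PSObject.Properties[$_] }
if ($missing) { "Missing required element(s): $($missing -join ', ')" }
```

A real JSON Schema (via Newtonsoft, as mentioned) also catches wrong types and nesting, which this kind of key check doesn't.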

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

New Yorp New Yorp posted:

Does anyone know of a way to define and validate a schema for associative arrays? I've been defining application settings as associative arrays for configuration-as-code-ness during application deployments, then loading them at runtime.

Ex:
DevSettings.ps1
code:
@{
    WebServer = @{
        AppPoolName = 'FooAppDEV'
        InstallationPath = 'C:\inetpub\wwwroot\FooAppDEV'
        WebConfigValues = @{
             '{PlaceHolder}' = 'value'
        }
        #etc
    }
}
However, as it grows, it's becoming easy to define something incorrectly or miss a required element, and you don't find out until runtime. I could write a whole validation module (with accompanying Pester tests for correctness), but it seems like I'm reinventing the wheel here.

I'm considering just moving the entire thing over to JSON and doing a ConvertFrom-Json. That makes schema creation and validation easy with Newtonsoft's JSON libraries, and the scripts that actually use the values wouldn't have to change at all.

Powershell has parameter validation:

e.g. https://blogs.technet.microsoft.com/heyscriptingguy/2011/05/15/simplify-your-powershell-script-with-parameter-validation/
code:
Function Foo 
{ 
    Param( 
        [ValidateSet("Tom","Dick","Jane")] 
        [String] 
        $Name 
    , 
        [ValidateRange(21,65)] 
        [Int] 
        $Age 
    , 
        [ValidateScript({Test-Path $_ -PathType 'Container'})] 
        [string] 
        $Path 
    ) 
    Process 
    { 
        write-host "New-Foo" 
    } 
}
If you're scripting the creation of the settings files using powershell, that might be a good way to solve your problem.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Bruegels Fuckbooks posted:

Powershell has parameter validation:

e.g. https://blogs.technet.microsoft.com/heyscriptingguy/2011/05/15/simplify-your-powershell-script-with-parameter-validation/
code:
Function Foo 
{ 
    Param( 
        [ValidateSet("Tom","Dick","Jane")] 
        [String] 
        $Name 
    , 
        [ValidateRange(21,65)] 
        [Int] 
        $Age 
    , 
        [ValidateScript({Test-Path $_ -PathType 'Container'})] 
        [string] 
        $Path 
    ) 
    Process 
    { 
        write-host "New-Foo" 
    } 
}
If you're scripting the creation of the settings files using powershell, that might be a good way to solve your problem.

I don't think I explained it well enough. I want to validate the settings files, not the creation of them. Just like you'd get when pushing a JSON or XML file through a schema validator. "Element X isn't defined", "Element Y should be an array, not a string", etc.

[edit]
I was able to proof of concept doing the whole thing as JSON in about 30 minutes, so it's not a big problem... just rolling it out and convincing them to change formats is going to be annoying.

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

New Yorp New Yorp posted:

I was able to proof of concept doing the whole thing as JSON in about 30 minutes, so it's not a big problem... just rolling it out and convincing them to change formats is going to be annoying.

Yeah, using JSON for the settings files is the right thing to do in this situation instead of writing a powershell script that contains the settings(!?)

Triyah
Apr 19, 2005

Dirt Road Junglist posted:

The worst part is that we're using an application whitelisting solution that hard-blocks powershell.exe from running. It's a total shitshow.

That sounds horrible. Who's selling that snake oil to the money men?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
I'm doing some DSC stuff with Azure Automation. I have the basics worked out -- I can upload a configuration, compile it, and onboard a machine.

Now I get to certificates. If I get a PFX file on the machine, I can import it with xPfxImport. Cool, that works. However, Azure Automation has a certificate store built in, which I'd like to use. I can upload a certificate. The certificate is there. I can't figure out how to do anything with it in DSC-land.

How do I get a certificate out of the AA certificate store and imported on a DSC node?

[edit]
Also, if I generate a self-signed cert and export it as a PFX, I get an "Access denied" error when trying to import it to Azure Automation Certificates. No clue why, Google isn't helping.

New Yorp New Yorp fucked around with this message at 17:47 on Jul 19, 2018

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
code:
        $cert = Get-AutomationCertificate -Name 'fooCert' 
        
        $exportedBytes = $cert.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Pkcs12, $null)
        $certPath = 'C:\temp\fooCert.pfx'

 Script InstallCertificateFromAzure 
        {
            GetScript = { @{ Result = '' } }
            TestScript = {                
                return Test-Path Cert:\LocalMachine\my\$($using:cert.Thumbprint)
            }
            SetScript = {
                if (-not (Test-Path $using:certPath)) {
                    New-Item $using:certPath -ItemType File -Force
                }
                $using:exportedBytes | Set-Content -Encoding Byte -Path $using:certPath
                
                Import-PfxCertificate -FilePath $using:certPath -CertStoreLocation Cert:\LocalMachine\my

                Remove-Item $using:certPath -Force
            }
        }
This solved it. As much as I'd have liked to have used xPfxImport, it didn't quite do what I needed and there just isn't a good way to make a file be temporarily present across multiple resources.

New Yorp New Yorp fucked around with this message at 15:21 on Jul 20, 2018

Dirt Road Junglist
Oct 8, 2010

We will be cruel
And through our cruelty
They will know who we are

Triyah posted:

That sounds horrible. Who's selling that snake oil to the money men?

It's...complicated. There are multiple EntSec teams who all think they know best, and no amount of data seems to sway them from their misguided goal.

The Fool
Oct 16, 2003


Anyone have any suggestions on handling XML templating?

I need to build xml requests to interact with an API and don't really feel like
code:
$request = $start + $userid + $middle + $phoneNumber + $end
is the best way to handle it.

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

The Fool posted:

Anyone have any suggestions on handling XML templating?

I need to build xml requests to interact with an API and don't really feel like
code:
$request = $start + $userid + $middle + $phoneNumber + $end
is the best way to handle it.

Two ways of approaching it are:

a) Use string interpolation... e.g. https://kevinmarquette.github.io/2017-01-13-powershell-variable-substitution-in-strings/

code:
@"
<?xml version="1.0" encoding="UTF-8" ?>
<?dsd href="recipes.dsd"?>
<collection xmlns="http://recipes.org" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://recipes.org recipes.xsd">

<description>$($ENV:USERNAME)'s favorite recipes.</description>

<recipe>
<title>Beef Parmesan with Garlic Angel Hair Pasta</title>
<ingredient name="beef cube steak" amount="1.5" unit="pound" />
<ingredient name="onion, sliced into thin rings" amount="1" />
<ingredient name="spaghetti sauce" amount="1" unit="jar" />
<ingredient name="shredded mozzarella cheese" amount="0.5" unit="cup" />
<ingredient name="angel hair pasta" amount="12" unit="ounce" />
<nutrition calories="1167" fat="23" carbohydrates="45" protein="32" />
</recipe>
</collection>
"@
or b)

Use ConvertTo-XML. That'll serialize any .net object as XML, so if you have a powershell object that looks like your request already, you could just use ConvertTo-XML for the actual serialization.
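A quick illustration of option (b): ConvertTo-Xml serializes an object's properties, so a request-shaped object becomes XML without any string pasting. The property names here are invented for the example, and note the output uses ConvertTo-Xml's generic Objects/Object/Property layout, not whatever element names the API expects.

```powershell
# Serialize a request-shaped object to an XML string.
$request = [pscustomobject]@{ UserId = '42'; PhoneNumber = '555-0100' }
$xml = ($request | ConvertTo-Xml -As String -NoTypeInformation)
$xml   # <Objects><Object><Property Name="UserId">42</Property>...
```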

Eggnogium
Jun 1, 2010

Never give an inch! Hnnnghhhhhh!
This is probably Powershell 101 but as always with PS I'm having trouble getting fruitful google results. I have a PS script that calls a lot of exe's and cmd's in a sequence, so there's lots of this:

code:
& $cmdPath\foo.cmd here are some $maybedynamic params
if($LASTEXITCODE -ne 0)
{
    throw "foo.cmd failed, bailing out of the script now"
}
What's the best way to abstract this into a one-liner? Storing the whole command into a string and then &'ing it inside a function isn't working, it complains "'C:\scripts\foo.cmd here are some bar params' is not the name of a script, executable blah blah blah". Same thing if I pass the path to the cmd and the params as separate strings. I don't want to deal with Start-Process shenanigans if I can avoid it. So far the best I've done is a two-liner:

code:
function Assert-ExitCode
{
    param (
        $errorMessage
    )

    if($LASTEXITCODE -ne 0)
    {
        throw $errorMessage
    }
}
...
& $cmdPath\foo.cmd here are some $maybedynamic params
Assert-ExitCode "foo.cmd failed, bailing out of the script now"

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Eggnogium posted:

This is probably Powershell 101 but as always with PS I'm having trouble getting fruitful google results. I have a PS script that calls a lot of exe's and cmd's in a sequence, so there's lots of this:

code:
& $cmdPath\foo.cmd here are some $maybedynamic params
if($LASTEXITCODE -ne 0)
{
    throw "foo.cmd failed, bailing out of the script now"
}
What's the best way to abstract this into a one-liner? Storing the whole command into a string and then &'ing it inside a function isn't working, it complains "'C:\scripts\foo.cmd here are some bar params' is not the name of a script, executable blah blah blah". Same thing if I pass the path to the cmd and the params as separate strings. I don't want to deal with Start-Process shenanigans if I can avoid it. So far the best I've done is a two-liner:

code:
function Assert-ExitCode
{
    param (
        $errorMessage
    )

    if($LASTEXITCODE -ne 0)
    {
        throw $errorMessage
    }
}
...
& $cmdPath\foo.cmd here are some $maybedynamic params
Assert-ExitCode "foo.cmd failed, bailing out of the script now"

This is off the top of my head, but you can make a function pipeline-aware like this:
code:
function Assert-ExitCode
{
    [CmdletBinding()]
    param (
        [parameter(ValueFromPipeline)]
        $errorMessage
    )

    if($LASTEXITCODE -ne 0)
    {
        throw $errorMessage
    }
}

Then do & $cmdPath\foo.cmd | Assert-ExitCode

caveat: may not work, haven't tested

The Fool
Oct 16, 2003


Sanity check.

I am using VSTS to automate a bunch of stuff. I have some scripts that require having credentials to service accounts. I have this as a preliminary solution, but want to make sure I'm not being totally terrible.

I use this function to generate an encrypted string and key:
code:
function New-EnvCredentials {
    # Get credentials to save
    $sourceCredentials = Get-Credential -Message "Enter credentials you wish to save."

    # generate key
    $key = New-Object Byte[] 32
    [Security.Cryptography.RNGCryptoServiceProvider]::Create().GetBytes($key)

    # generate encrypted string using key
    $password = $sourceCredentials.Password | ConvertFrom-SecureString -Key $key

    # write credentials to screen
    Clear-Host
    Write-Host "Username: $($sourceCredentials.UserName)"
    Write-Host "Password:"
    Write-Host "---"
    Write-Host $password
    Write-Host "---"
    Write-Host "Key: $key"
    Write-Host "---"
    Read-Host -Prompt "Press Enter to clear screen after saving password and key"
    Clear-Host
}
Then I take the username, password, and key and save them as environment variables in VSTS.
Then I have them available as environment variables in the script that needs the credentials, and can use this function to build a credential object.

code:
function Get-EnvCredentials($username, $password, $key) {
    $securePassword = $password | ConvertTo-SecureString -Key $key
    $credentials = New-Object System.Management.Automation.PSCredential -ArgumentList $username, $securePassword
    return $credentials
}

Pile Of Garbage
May 28, 2007



Unless you implement a secrets management solution (e.g. Hashicorp Vault), that is as good as you're going to get, and it's certainly far from perfect.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

The Fool posted:

Sanity check.

I am using VSTS to automate a bunch of stuff. I have some scripts that require having credentials to service accounts. I have this as a preliminary solution, but want to make sure I'm not being totally terrible.

I use this function to generate an encrypted string and key:
code:
function New-EnvCredentials {
    # Get credentials to save
    $sourceCredentials = Get-Credential -Message "Enter credentials you wish to save."

    # generate key
    $key = New-Object Byte[] 32
    [Security.Cryptography.RNGCryptoServiceProvider]::Create().GetBytes($key)

    # generate encrypted string using key
    $password = $sourceCredentials.Password | ConvertFrom-SecureString -Key $key

    # write credentials to screen
    Clear-Host
    Write-Host "Username: $($sourceCredentials.UserName)"
    Write-Host "Password:"
    Write-Host "---"
    Write-Host $password
    Write-Host "---"
    Write-Host "Key: $key"
    Write-Host "---"
    Read-Host -Prompt "Press Enter to clear screen after saving password and key"
    Clear-Host
}
Then I take the username, password, and key and save them as environment variables in VSTS.
Then I have them available as environment variables in the script that needs the credentials, and can use this function to build a credential object.

code:
function Get-EnvCredentials($username, $password, $key) {
    $securePassword = $password | ConvertTo-SecureString -Key $key
    $credentials = New-Object System.Management.Automation.PSCredential -ArgumentList $username, $securePassword
    return $credentials
}

Why can't you just store them as encrypted values in the build/release definition and pass them into the script when running?

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
If I have an XmlDocument object, how do I export it to an xml file? If I use Out-File or Export-CliXml or [xml]$object.Save('C:\path\to\document.xml'), it doesn't encode any of the punctuation into like &apos; or &quot; which as far as I can tell doesn't conform to standards. It looks like it's correctly encoding &gt; and &lt; and &amp;, so maybe I'm just being paranoid?

nielsm
Jun 1, 2009



Only &lt; &gt; &amp; are "core" to XML; anything else technically has to come from a DTD or other external source. There's nothing wrong with leaving a character unencoded if the meaning is unambiguous.


Zaepho
Oct 31, 2013

anthonypants posted:

If I have an XmlDocument object, how do I export it to an xml file? If I use Out-File or Export-CliXml or [xml]$object.Save('C:\path\to\document.xml'), it doesn't encode any of the punctuation into like &apos; or &quot; which as far as I can tell doesn't conform to standards. It looks like it's correctly encoding &gt; and &lt; and &amp;, so maybe I'm just being paranoid?

[xml] $foo = '<xml><thing/></xml>'
$foo.Save('test.xml')
