Portland Sucks
Dec 21, 2004
༼ つ ◕_◕ ༽つ

power crystals posted:

From someone who has effectively the exact same job (.NET guy in the land of industrial controls systems) but has been at it for a number of years: yes, entirely this. Maybe 5% of my actual time is spent architecting stuff; instead it's all about figuring out what people actually want you to do (hint: it is only ever tangentially the same as what they say they want). I've also gotten the exact same "we can't have only you being able to maintain this" speech right after being given a WebForms project that emits its entire HTML layout programmatically. With every project I have been ever so slightly nudging us towards a more maintainable architecture, including real CI etc., and now we're actually in pretty decent shape. Just make sure you can demonstrate why your solution is better than what's already out there for when you get the inevitable "but that's harder, why should we do that?" comment.

If I understand right it sounds like you're building the entire thing in-house, which, while messy, does have the advantage of not having to deal with any of the commercial SCADA/HMI packages, which are all completely terrible (I hope you like COM interop), and every database has no foreign keys and often no primary keys either :suicide: On the plus side everybody thinks I am some kind of wizard genius for getting this stuff to work together!

A...Are you future me?

If I could figure out a way to rewrite our major processes to at least allow our database tables to have loving primary keys my life would be so so so much better. :smith:

Just out of curiosity: are there things you decided at some point were worth standing your ground on, or things that you found were ultimately not worth trying to change?


The list of things we have zero concept of includes:

source control (they have a backups folder that they claim to use but never do).

documentation (if our lead engineer retires we'll probably collapse)

security (lol stuxnet, and every single one of our servers has the same shared login/pass with domain admin privs and its stored in plaintext throughout our entire code base. we've been owned by cryptolockers twice because lateral movement through our production network with escalated privs is child's play)

APIs and reusable code ("think smarter not harder", copying and pasting code from one place to another is saving time :viggo: )

logging and system monitoring ("our union workers are the best monitoring system you could ask for, they never hesitate to call when something breaks"....*proceeds to get called at 4am on pager duty because a 10GB log file full of "I'M HERE NOW" debug statements filled up a critical server drive*)

Portland Sucks fucked around with this message at 16:17 on Nov 2, 2017

GoodCleanFun
Jan 28, 2004

Portland Sucks posted:

source control (they have a backups folder that they claim to use but never do).

No source control is a massive problem and should be the first thing you attempt to standardize.

power crystals
Jun 6, 2007

Who wants a belly rub??

Portland Sucks posted:

A...Are you future me?

I mean things aren't too bad where I stand, so maybe it's something to look forward to? On the other hand I have never lived in or near Portland, ironically or otherwise, so ymmv on being past me.

Portland Sucks posted:

Just out of curiosity: are there things you decided at some point were worth standing your ground on, or things that you found were ultimately not worth trying to change?


The list of things we have zero concept of includes:

source control (they have a backups folder that they claim to use but never do).

documentation (if our lead engineer retires we'll probably collapse)

security (lol stuxnet, and every single one of our servers has the same shared login/pass with domain admin privs and its stored in plaintext throughout our entire code base. we've been owned by cryptolockers twice because lateral movement through our production network with escalated privs is child's play)

APIs and reusable code ("think smarter not harder", copying and pasting code from one place to another is saving time :viggo: )

logging and system monitoring ("our union workers are the best monitoring system you could ask for, they never hesitate to call when something breaks"....*proceeds to get called at 4am on pager duty because a 10GB log file full of "I'M HERE NOW" debug statements filled up a critical server drive*)

100% get version control in there first. I at least inherited Source Safe (v6!), which is terrible, but it was something. For what it's worth, the chem/mech eng types seem to actually understand the exclusive lock model, so TFVC may be an easier sell than git, assuming you can manage the budget for it. We have TFS now and it works well enough. No documentation blows my mind because I work in a lot of pharma places and they live and die by documentation due to the FDA actually having teeth, so I'm going to guess this is primarily heavy industry/goods manufacture. Security, well, good news, it's that awful everywhere, though I guess I have yet to see a cryptolocker infestation in the wild and usually it's just local admin not domain. Literally nobody understands the principle of least privilege.

What was most effective for me was following whatever best practice myself, and then later being able to point to it and go "hey, look, we saved X hours because I did Y". Documentation would be a good example; it's easy to come back a year later and point to knowing what everything did when the random change orders come in, but you'll have to stick with it until it pays off. Eventually people started listening to me this way, even if the last time I tried turning on TFS gated check-ins I was told that I had to turn it back off because one of the managers' workflows was being impeded by being unable to check in code that wouldn't compile. It also helps that I am one of the most stubborn people to have ever lived :v:

Portland Sucks
Dec 21, 2004
༼ つ ◕_◕ ༽つ

power crystals posted:

I mean things aren't too bad where I stand, so maybe it's something to look forward to? On the other hand I have never lived in or near Portland, ironically or otherwise, so ymmv on being past me.


100% get version control in there first. I at least inherited Source Safe (v6!), which is terrible, but it was something. For what it's worth, the chem/mech eng types seem to actually understand the exclusive lock model, so TFVC may be an easier sell than git, assuming you can manage the budget for it. We have TFS now and it works well enough. No documentation blows my mind because I work in a lot of pharma places and they live and die by documentation due to the FDA actually having teeth, so I'm going to guess this is primarily heavy industry/goods manufacture. Security, well, good news, it's that awful everywhere, though I guess I have yet to see a cryptolocker infestation in the wild and usually it's just local admin not domain. Literally nobody understands the principle of least privilege.

What was most effective for me was following whatever best practice myself, and then later being able to point to it and go "hey, look, we saved X hours because I did Y". Documentation would be a good example; it's easy to come back a year later and point to knowing what everything did when the random change orders come in, but you'll have to stick with it until it pays off. Eventually people started listening to me this way, even if the last time I tried turning on TFS gated check-ins I was told that I had to turn it back off because one of the managers' workflows was being impeded by being unable to check in code that wouldn't compile. It also helps that I am one of the most stubborn people to have ever lived :v:

The no documentation thing blows my mind too because everything except our homegrown software is usually attached to 100-page manuscripts of intensely detailed documentation of experimental data, analysis, etc. We make stuff that goes in the sky, so if it directly concerns our product it's documented to hell and back.

I think it's just because the slow and steady implementation of software for automation purposes has been so under the radar and off the cuff that no one yet considers it to be a part of the actual engineering/manufacturing process. They still just look at it like a fun little toy that engineers use on the side to help them do the real work. Forget the fact that almost every single production process we have is almost entirely automated or monitored by that software at this point. If it isn't specifically in the domain of our product, then obviously it isn't real engineering. :smuggo:

I have my own source control at work through TFS/Git with a lot of redundancy, but since no one else wants to use it and every process engineer has domain admin rights over our production network they're free to just ignore it and overwrite the source willy nilly. If I had the ability to restrict access and force it on them I would.

Pilsner
Nov 23, 2002

Portland Sucks posted:

security (lol stuxnet, and every single one of our servers has the same shared login/pass with domain admin privs and its stored in plaintext throughout our entire code base. we've been owned by cryptolockers twice because lateral movement through our production network with escalated privs is child's play)
Wow, that takes the cake.

B-Nasty
May 25, 2005

mystes posted:

As I stated earlier, all code signing certificates are effectively EV certificates, so this wouldn't be possible. Simply validating that an application really came from a web server at ksjhgsjsghkjgskjhg.com would be pointless (that's what SSL already does).

They aren't, though. EV certs give you SmartScreen bonus points, but aren't required for desktop apps.

As far as locking a CSC to a website...at least that would help to prevent download hosting sites (or even the company's own compromised servers) from injecting malware into installers. Though that didn't help CCleaner.

mystes
May 31, 2006

B-Nasty posted:

They aren't, though. EV certs give you SmartScreen bonus points, but aren't required for desktop apps.
There are actual EV code signing certificates, but don't all code signing certificates require actual identity verification similar to EV SSL certificates? What I was trying to say was that just as granting of EV SSL certificates inherently can't be automated, code signing certificates also inherently require manual identity verification.

quote:

As far as locking a CSC to a website...at least that would help to prevent download hosting sites from injecting malware into installers or even compromised servers from the company. Though, that didn't help CCleaner.
If the code signing certificate just had to match the website you were actually downloading from, that would be no different than just having an SSL certificate. Otherwise, you could just grant code signing certificates for domains but the user would have to know what domain the software should have come from, which is probably not realistic. I think the goals of the current system are 1) you want to be able to prevent random people from saying they're a major company like Microsoft, and 2) you want to hopefully be able to revoke certificates used to create malware. I don't know how well this system actually works in practice, though.

mystes fucked around with this message at 01:11 on Nov 3, 2017

power crystals
Jun 6, 2007

Who wants a belly rub??

Portland Sucks posted:

The no documentation thing blows my mind too because everything except our homegrown software is usually attached to 100-page manuscripts of intensely detailed documentation of experimental data, analysis, etc. We make stuff that goes in the sky, so if it directly concerns our product it's documented to hell and back.

I think it's just because the slow and steady implementation of software for automation purposes has been so under the radar and off the cuff that no one yet considers it to be a part of the actual engineering/manufacturing process. They still just look at it like a fun little toy that engineers use on the side to help them do the real work. Forget the fact that almost every single production process we have is almost entirely automated or monitored by that software at this point. If it isn't specifically in the domain of our product, then obviously it isn't real engineering. :smuggo:

I have my own source control at work through TFS/Git with a lot of redundancy, but since no one else wants to use it and every process engineer has domain admin rights over our production network they're free to just ignore it and overwrite the source willy nilly. If I had the ability to restrict access and force it on them I would.

Man it's weird but fascinating hearing about your experiences being awful in a completely different way than mine. We never control the servers in the field, that's always up to the customers' IT (competence touches both ends of the spectrum depending on who it is) and if we so much as look at something that's in production without getting a sign off or at least a verbal "hey the thing isn't working, can you figure out what's up" things would get very bad for us very quickly. I guess that's different industries for you.

Good luck, I guess! It is possible to make this stuff better, it's just an uphill battle.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

mystes posted:

If the code signing certificate just had to match the website you were actually downloading from, that would be no different than just having an SSL certificate. Otherwise, you could just grant code signing certificates for domains but the user would have to know what domain the software should have come from, which is probably not realistic.

If I'm downloading from a first party site, then yes it's no different from having the site's SSL cert. But if the download is served by a third party, it would guarantee that they didn't mess with the content. See: that time SourceForge bundled crapware in some app they distributed.

(And before you say "just don't download from third party sites", some app developers can't or won't pay to host the binaries themselves, e.g. KeePass)

chippy
Aug 16, 2006

OK I DON'T GET IT
So I recently learned that if you define an EditorTemplate for a certain type (in the MVC framework), you can render editors for a collection of that type just by doing this:

code:
@Html.EditorFor(model => model.Foos)
And the framework will automatically foreach over the collection and render the editor template for each one, adding unique identifiers to the duplicate field names so that everything gets bound properly.

Nice. I used to do this using an indexed for loop, I didn't realise you could get it for free by using an EditorTemplate.

However, the markup it's generating is creating a <ul> element with each Foo bound in its own <li> (there are none of these in the EditorTemplate). This is loving with the markup on my page. I get the same behaviour if I manually foreach over the collection and render an EditorTemplate for each item. Is there any way to stop it doing this? Or am I just going to have to use CSS to stop the list and list items messing with my layout?
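For reference, the template itself is just a strongly-typed partial; a minimal sketch (Foo and its Name property are placeholders here, and the conventional location is Views/Shared/EditorTemplates/Foo.cshtml) looks like:

code:
@* Views/Shared/EditorTemplates/Foo.cshtml -- minimal sketch; Foo and Name are placeholders *@
@model Foo

<div>
    @Html.LabelFor(m => m.Name)
    @Html.TextBoxFor(m => m.Name)
</div>
With that in place, EditorFor over the collection emits inputs named Foos[0].Name, Foos[1].Name, and so on, which is what makes binding back to the collection work.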

chippy fucked around with this message at 14:04 on Nov 3, 2017

Dietrich
Sep 11, 2001

chippy posted:

So I recently learned that if you define an EditorTemplate for a certain type (in the MVC framework), you can render editors for a collection of that type just by doing this:

code:
@Html.EditorFor(model => model.Foos)
And the framework will automatically foreach over the collection and render the editor template for each one, adding unique identifiers to the duplicate field names so that everything gets bound properly.

Nice. I used to do this using an indexed for loop, I didn't realise you could get it for free by using an EditorTemplate.

However, the markup it's generating is creating a <ul> element with each Foo bound in its own <li> (there are none of these in the EditorTemplate). This is loving with the markup on my page. I get the same behaviour if I manually foreach over the collection and render an EditorTemplate for each item. Is there any way to stop it doing this? Or am I just going to have to use CSS to stop the list and list items messing with my layout?

Just use CSS. An unordered list is semantically correct HTML for that situation.

chippy
Aug 16, 2006

OK I DON'T GET IT

Dietrich posted:

Just use CSS. An unordered list is semantically correct HTML for that situation.

Fair. Cheers!

B-Nasty
May 25, 2005

NihilCredo posted:

If I'm downloading from a first party site, then yes it's no different from having the site's SSL cert.

I would argue that it's still better to have a CSC. Ideally, FooBarCorp would sign its binaries/installers on a reasonably secure development machine/CI server, whereas their marketing/download site might be running on some insecure shared host or using outdated WordPress. When I download FooBar-Paint-Installer.exe and see that it is signed by foobarcorp.com, I have a slightly higher confidence than just the fact that my download was served by a server identified as foobarcorp.com.

There are other benefits to signing as well. For one, anti-virus takes advantage of signatures when verifying applications, because it's trivially easy to hide my malware in a process called foobarpaint.exe. Without a signature, you have no way of knowing if you have the real executable or a hacked/imposter version.
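For what it's worth, a rough sketch of reading the signer out of a signed exe in .NET (the path is a placeholder, and note this only pulls out the certificate and does a basic chain check; it's not the full WinVerifyTrust validation Windows performs):

code:
// Rough sketch: read the Authenticode signer of an exe (path is a placeholder).
// This only extracts the certificate and does a basic chain check; it is NOT
// the full signature validation that Windows (WinVerifyTrust) performs.
using System;
using System.Security.Cryptography.X509Certificates;

class SignerCheck
{
    static void Main()
    {
        var signer = new X509Certificate2(
            X509Certificate.CreateFromSignedFile(@"C:\Downloads\FooBar-Paint-Installer.exe"));

        Console.WriteLine(signer.Subject);  // who the file claims to be signed by
        Console.WriteLine(signer.Verify()); // basic chain validation of that certificate
    }
}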

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



B-Nasty posted:

I would argue that it's still better to have a CSC. Ideally, FooBarCorp would sign its binaries/installers on a reasonably secure development machine/CI server, whereas their marketing/download site might be running on some insecure shared host or using outdated WordPress. When I download FooBar-Paint-Installer.exe and see that it is signed by foobarcorp.com, I have a slightly higher confidence than just the fact that my download was served by a server identified as foobarcorp.com.

There are other benefits to signing as well. For one, anti-virus takes advantage of signatures when verifying applications, because it's trivially easy to hide my malware in a process called foobarpaint.exe. Without a signature, you have no way of knowing if you have the real executable or a hacked/imposter version.

Turns out most AV doesn't even bother looking at the installer if it's signed - even with the wrong signature.

chippy
Aug 16, 2006

OK I DON'T GET IT

Dietrich posted:

Just use CSS. An unordered list is semantically correct HTML for that situation.

Turns out the list items were coming from the BeginCollectionItem (https://www.nuget.org/packages/BeginCollectionItem/) package anyway; another dev had installed it and not used it. I removed it and got the same functionality without the UL/LI junk.

mystes
May 31, 2006

B-Nasty posted:

I would argue that it's still better to have a CSC. Ideally, FooBarCorp would sign its binaries/installers on a reasonably secure development machine/CI server, whereas their marketing/download site might be running on some insecure shared host or using outdated WordPress. When I download FooBar-Paint-Installer.exe and see that it is signed by foobarcorp.com, I have a slightly higher confidence than just the fact that my download was served by a server identified as foobarcorp.com.
This is arguably true right now, but in the context we were discussing it would stop being true if it were possible to automatically acquire code signing certificates as with Let's Encrypt, because whoever hacked your SSL server could just request a new code signing certificate.

B-Nasty
May 25, 2005

mystes posted:

This is arguably true right now, but in the context we were discussing it would stop being true if it were possible to automatically acquire code signing certificates as with Let's Encrypt, because whoever hacked your SSL server could just request a new code signing certificate.

Certs can be revoked when the hack is discovered, and with modern Windows versions, UAC will give you an ugly, un-skippable error message if you try to run an elevated, revoked exe. Also, you could use certificate authority authorization (CAA) records in your DNS to prevent new certs from being issued after you've obtained your CSC.

I'm not saying any of this would be foolproof, but it would be better than completely unsigned executables. Heck, some applications just list an MD5/SHA-1/SHA-256 hash on their website, which would require far less effort to manipulate if their server was compromised.
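A CAA record is just a DNS entry naming which CAs are allowed to issue for a domain; a minimal sketch (example.com and the CA value are placeholders):

code:
; CAA sketch -- example.com and letsencrypt.org are placeholders.
; "issue" restricts which CA may issue certs for the domain;
; "iodef" is a contact address for violation reports.
example.com.  IN  CAA  0 issue "letsencrypt.org"
example.com.  IN  CAA  0 iodef "mailto:security@example.com"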

mystes
May 31, 2006

B-Nasty posted:

Certs can be revoked when the hack is discovered, and with modern Windows versions, UAC will give you an ugly, un-skippable error message if you try to run an elevated, revoked exe. Also, you could use certificate authority authorization (CAA) records in your DNS to prevent new certs from being issued after you've obtained your CSC.
Interesting. I didn't know about certificate authority authorization. Is it actually in use? This still doesn't really help that much with CSCs though, does it? Are you even required to have a URL in them?

mystes fucked around with this message at 18:55 on Nov 3, 2017

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

mystes posted:

This is arguably true right now, but in the context we were discussing it would stop being true if it were possible to automatically acquire code signing certificates as with Let's Encrypt, because whoever hacked your SSL server could just request a new code signing certificate.

My understanding is that Let's Encrypt won't issue new certificates for the same site before the old ones have expired, exactly to prevent hackers from simply requesting a new cert.

You could tie a Let's Encrypt-like CSC to the combination of website + application's name + version (the latter two as file attributes), and then prevent it from being reissued before expiry in a similar way. Then at least the hackers would need to create a fake update release, which might not be much harder but would certainly be much easier to detect than simply replacing an existing .exe with an infected one.

B-Nasty
May 25, 2005

mystes posted:

Interesting. I didn't know about certificate authority authorization. Is it actually in use? This still doesn't really help that much with CSCs though, does it? Are you even required to have a URL in them?

CAA is definitely used by Let's Encrypt. It passed a vote to make it mandatory (https://cabforum.org/pipermail/public/2017-March/009988.html), so if a shady CA isn't using it, they potentially risk getting punished by the browser vendors.

CSCs don't currently need a URL, but I was opining that a Let's Encrypt-like service that focused on automatically and somewhat-securely issuing free CSCs (possibly tied to domains) would be way better than what we have now. The CAs could still charge their premium for EV CSCs, but at least there would be an option for application developers that don't want to pay $XXX/year (e.g. open source/freeware).

Eggnogium
Jun 1, 2010

Never give an inch! Hnnnghhhhhh!
I'm using HttpRequestMessage and HttpClient to make an https request to a web service I don't control. I'm getting a name mismatch error on validation of the remote SSL cert. Via Fiddler I can clearly see that the host I'm making the call to is listed in the cert's SubjectAltNames. Browsers accept the cert just fine. Does .NET really not accept SubjectAltNames for SSL cert name matching or is something else going on here? I know I can bypass the issue with a RemoteCertificateValidationCallback but I don't want to do that in production.
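For diagnosing it, a callback that only logs why validation failed but keeps the default accept/reject decision seems safer than bypassing validation outright; a sketch, assuming the classic ServicePointManager hook:

code:
// Diagnostics only: log why the remote cert failed validation, then return the
// same accept/reject decision the default validation would have made.
using System.Net;
using System.Net.Security;

static class CertDiagnostics
{
    public static void Hook()
    {
        ServicePointManager.ServerCertificateValidationCallback =
            (sender, certificate, chain, sslPolicyErrors) =>
            {
                System.Diagnostics.Debug.WriteLine(
                    $"Cert subject: {certificate.Subject}, errors: {sslPolicyErrors}");
                return sslPolicyErrors == SslPolicyErrors.None;
            };
    }
}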

EssOEss
Oct 23, 2006
128-bit approved
Can you share the URL? I have not encountered such situations before with the .NET HTTP client.

Mr Shiny Pants
Nov 12, 2012
I need some help. I am having a hell of a time getting PocketSphinx to work in my .NET project. I have downloaded the latest masters from GitHub and compiled the Windows executables with VS2015. These all seem to work wonderfully and everything is as it should be. Now to get it to work in my .NET project I need to use PInvoke. The library uses something called SWIG to create the bindings and build a wrapper for the project so that you can use it within VS. This is where I am having a hell of a time getting it to work. This is probably due to me not understanding some of the stuff it uses and how it interops.

I built a Linux box running Debian 9 (after not getting it to compile on Ubuntu 17) and have compiled SphinxBase and PocketSphinx. After these are built you have to build a wrapper using SWIG, and it will create a libpocketsphinxwrapper.so and the necessary binding files (.cs) in the gen folder. It even builds a test.exe using the mono compiler to test the generated bindings, and on my Linux box this works as it should. It seems that you need to use the generated C# files in your project, and these do the required interop in the background, loading the .so file to do the actual work.

Now, how do I get it to build a DLL on my Windows machine? I can't use the .so file (right?) on my Windows box, but I have no clue how to build the wrapper on Windows. I've downloaded a Unity project which uses the library, but it also can't seem to find the libpocketsphinxwrapper.dll and complains about it. Fair enough, as it is built on OSX and so only has the bundle and not the DLL.

I've seen some stuff posted about having to create a new project in VS with the .i files SWIG has generated and the header files, but this is way beyond my current skill, it seems. If someone has done this before or has a working wrapper file for Windows I would be much obliged. They keep talking about using CMake to build the Windows binaries, but only a regular Makefile is supplied.

I just wanted to use a speech recognition library....

EssOEss
Oct 23, 2006
128-bit approved
What exactly are you trying to achieve and what is the mechanism you think it will work by? Where do the OS differences come in? (If you want Windows, why are you even dealing with Linux?!)

Mr Shiny Pants
Nov 12, 2012

EssOEss posted:

What exactly are you trying to achieve and what is the mechanism you think it will work by? Where do the OS differences come in? (If you want Windows, why are you even dealing with Linux?!)

I want to use a speech recognition engine that also works on Linux but which I can develop on Windows. I am currently using the Microsoft stuff, but that only works on Windows.
From the docs, and my understanding thus far, it works like this: you compile SphinxBase and PocketSphinx on Windows or Linux, and after doing this you use SWIG to create PInvoke wrappers for the libraries. This creates a "glue" library which you compile for the platform it runs on: .so, .dll, or bundle.
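As far as I can tell the generated bindings boil down to plain DllImport declarations against that glue library, something like this (the function name here is made up purely for illustration; the real declarations are in the SWIG-generated .cs files):

code:
using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    // Hypothetical import for illustration only -- the real declarations live in
    // the SWIG-generated .cs files. The bare name "pocketsphinxwrapper" is resolved
    // per platform: pocketsphinxwrapper.dll on Windows, libpocketsphinxwrapper.so
    // on Linux/Mono.
    [DllImport("pocketsphinxwrapper", CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr hypothetical_init(string configPath);
}
So the managed side stays the same everywhere, but there has to be a native build of the wrapper for each OS, which is why the .so on its own doesn't get me anywhere on Windows.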

In my naive understanding I had hoped that the Linux build stuff would also create a DLL (using the Mono compiler) that I could use in my VS project on Windows, but this seems to not be the case. So now I am wondering how in the world I can build the SWIG wrapper on Windows.

This is probably a bridge too far.....

Eggnogium
Jun 1, 2010

Never give an inch! Hnnnghhhhhh!

EssOEss posted:

Can you share the URL? I have not encountered such situations before with the .NET HTTP client.

I can't really, as that would reveal where I work, which I don't want to do.

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe

Mr Shiny Pants posted:

I need some help. I am having a hell of a time getting PocketSphinx to work in my .NET project. I have downloaded the latest masters from GitHub and compiled the Windows executables with VS2015. These all seem to work wonderfully and everything is as it should be. Now to get it to work in my .NET project I need to use PInvoke. The library uses something called SWIG to create the bindings and build a wrapper for the project so that you can use it within VS. This is where I am having a hell of a time getting it to work. This is probably due to me not understanding some of the stuff it uses and how it interops.

I built a Linux box running Debian 9 (after not getting it to compile on Ubuntu 17) and have compiled SphinxBase and PocketSphinx. After these are built you have to build a wrapper using SWIG, and it will create a libpocketsphinxwrapper.so and the necessary binding files (.cs) in the gen folder. It even builds a test.exe using the mono compiler to test the generated bindings, and on my Linux box this works as it should. It seems that you need to use the generated C# files in your project, and these do the required interop in the background, loading the .so file to do the actual work.

Now, how do I get it to build a DLL on my Windows machine? I can't use the .so file (right?) on my Windows box, but I have no clue how to build the wrapper on Windows. I've downloaded a Unity project which uses the library, but it also can't seem to find the libpocketsphinxwrapper.dll and complains about it. Fair enough, as it is built on OSX and so only has the bundle and not the DLL.

I've seen some stuff posted about having to create a new project in VS with the .i files SWIG has generated and the header files, but this is way beyond my current skill, it seems. If someone has done this before or has a working wrapper file for Windows I would be much obliged. They keep talking about using CMake to build the Windows binaries, but only a regular Makefile is supplied.

I just wanted to use a speech recognition library....

Maybe you can just compile it with MinGW GCC?

Mr Shiny Pants
Nov 12, 2012

Joda posted:

Maybe you can just compile it with MinGW GCC?

I will give this a shot. I've been using the MS speech stuff because it works out of the box and I must say it works pretty well with my old Xbox 360 Kinect.

Essential
Aug 14, 2003
Anyone using Azure Functions in a production environment? Anyone building a new solution using them?

We're building out some new data services and the thought is to go away from REST API services running on App Services and instead create all this in Azure Functions. Has anyone run into any major roadblocks? It seems new enough that I kind of feel like we're the first ones spinning this up for production; hopefully there are others doing the same thing. I'm not quite understanding where the data access layer lives (or should live) with Functions; it seems like it's sort of built into each function itself. The scaling and third-party library story (Entity Framework, etc.) is still a bit of a mystery to us.

The thought behind this is the ease of scalability more than anything. I'm wondering if maintenance will end up being worse. Anyways, if anyone has any information they can provide I'd be really thankful.

reversefungi
Nov 27, 2003

Master of the high hat!
Is there a way to map an assembly to an existing codebase?

I was at work and F12ing through some of our classes, which originally inherit from an internal library. When I get to the class that's from the library, it opens up in Object Explorer and shows me some of the information coming from the dll. However, I was talking with a coworker, and he was able to F12 straight into the code itself from this library, but he has no idea how or why this is happening. I have (what I believe is) the code base for this internal library sitting in a separate folder, and I'm wondering if there is a way to simply tell Visual Studio that, when I press F12 here, it should take me to the definition in our codebase rather than the repository.

spaced ninja
Apr 10, 2009


Toilet Rascal

The Dark Wind posted:

Is there a way to map an assembly to an existing codebase?

I was at work and F12ing through some of our classes, which originally inherit from an internal library. When I get to the class that's from the library, it opens up in Object Explorer and shows me some of the information coming from the dll. However, I was talking with a coworker, and he was able to F12 straight into the code itself from this library, but he has no idea how or why this is happening. I have (what I believe is) the code base for this internal library sitting in a separate folder, and I'm wondering if there is a way to simply tell Visual Studio that, when I press F12 here, it should take me to the definition in our codebase rather than the repository.

You need to make sure you have a debug build for the other DLL, and you can add references to the .pdb files in Options -> Debugging -> Symbols (or simply copy the .pdb files [make sure they were built on your machine so paths are correct] to the output directory).

You'll also probably need to disable Just My Code debugging: Options -> Debugging -> General, uncheck 'Enable Just My Code'.

chippy
Aug 16, 2006

OK I DON'T GET IT
So, MVC 5: I'm doing some 'donut-hole' caching on a child action using [OutputCache]. I know when the cache should be invalidated, so I'm doing so in the actions which need it, like so:

code:
string path = Url.Action("_MenuItems", "Menu", new { area = "" }); 
Response.RemoveOutputCacheItem(path); 
This correctly invalidates the cache and I see the new version if I hit the URL directly, but when I call the child action from another view, I still get the cached version.

Anyone know a fix for this? It seems like this has been a problem that people are talking about since MVC 3, surely there's better support in the framework by now?
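Best I can tell from poking around, [OutputCache] on a child action goes into OutputCacheAttribute.ChildActionCache, a separate in-memory cache that RemoveOutputCacheItem never touches, which would explain this. The only blunt workaround I've seen mentioned is swapping that cache out wholesale, which flushes every cached child action rather than just one:

code:
// Blunt workaround sketch (untested here): child action output caching lives in
// OutputCacheAttribute.ChildActionCache, so replacing it clears ALL cached
// child actions -- there is no public key-based removal for a single entry.
using System.Runtime.Caching;
using System.Web.Mvc;

public static class ChildActionCacheHelper
{
    public static void FlushChildActionCache()
    {
        OutputCacheAttribute.ChildActionCache = new MemoryCache("NewChildActionCache");
    }
}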

dick traceroute
Feb 24, 2010

Open the pod bay doors, Hal.
Grimey Drawer

Essential posted:

Anyone using Azure Functions in a production environment? Anyone building a new solution using them?

We're building out some new data services and the thought is to go away from REST API services running on App Services and instead create all this in Azure Functions. Has anyone run into any major roadblocks? It seems new enough that I kind of feel like we're the first ones spinning this up for production; hopefully there are others doing the same thing. I'm not quite understanding where the data access layer lives (or should live) with Functions; it seems like it's sort of built into each function itself. The scaling and third-party library story (Entity Framework, etc.) is still a bit of a mystery to us.

The thought behind this is the ease of scalability more than anything. I'm wondering if maintenance will end up being worse. Anyways, if anyone has any information they can provide I'd be really thankful.

We don't have them in production (yet), but we are building a reasonably sized system utilizing them. We're making heavy use of blob and queue triggers (they're pretty nice).

So far we've been bitten by trying to use table storage and Azure SQL manually inside the function. The issue was throttling on the data stores... That's since been replaced by binding table storage and SQL as function parameters; no further issues.

We have yet to sort out proper unit/integration testing on the functions themselves, but we'll be looking to implement those things soon.
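Roughly what the table storage half of that looks like, as a sketch (the queue name, table name, and entity shape here are placeholders, and this assumes the v1 WebJobs-style attributes):

code:
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class ProcessOrder
{
    // Sketch: a queue message triggers the function, and the Table binding hands
    // us a collector, so we never new up a storage client inside the function.
    // "orders", "OrderHistory" and OrderEntity are placeholders.
    [FunctionName("ProcessOrder")]
    public static void Run(
        [QueueTrigger("orders")] string message,
        [Table("OrderHistory")] ICollector<OrderEntity> history,
        TraceWriter log)
    {
        log.Info($"Processing {message}");
        history.Add(new OrderEntity
        {
            PartitionKey = "orders",
            RowKey = Guid.NewGuid().ToString(),
            Body = message
        });
    }
}

public class OrderEntity
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Body { get; set; }
}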

Opulent Ceremony
Feb 22, 2012

dick traceroute posted:

The issue was throttling on the data stores... That's since been replaced by binding table storage and SQL as function parameters

Are you talking about using the specific available bindings vs. conventionally coding against a DB at a specific IP? https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference

Essential
Aug 14, 2003

dick traceroute posted:

We don't have them in production (yet), but we are building a reasonably sized system utilizing them. We're making heavy use of blob and queue triggers (they're pretty nice).

So far we've been bitten by trying to use table storage and Azure SQL manually inside the function. The issue was throttling on the data stores... That's since been replaced by binding table storage and SQL as function parameters; no further issues.

We have yet to sort out proper unit/integration testing on the functions themselves, but we'll be looking to implement those things soon.

Opulent Ceremony posted:

Are you talking about using the specific available bindings vs. conventionally coding against a DB at a specific IP? https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference

Thanks for the info! Yeah, I have the same question as Opulent Ceremony: can you elaborate on what that means? What exactly is "SQL as function parameters" referring to? Are you talking about passing in a connection object, a command object, something else? I may be missing the mark here entirely, but the DB side of this is quite the mystery to me right now.

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison
Anyone have any experience with sn.exe? I'm trying to set up cloud build agents for TeamCity and one of our projects requires strong naming. I can get the correct strong name CSP out of the system to install the PFX into, but even with a passwordless .pfx file, sn.exe prompts for a password (and newer versions don't seem to let you do cute tricks like piping input to sn).

If anyone has experience with this, I'd be grateful, because I'm pulling what's left of my hair out at this point.

EmmyOk
Aug 11, 2013

Hello, pals, I've been looking at really basic async stuff and found a pretty neat video explaining the very basics. There's one thing he didn't fully explain, though, just that it happened. He writes a simple Windows Forms app with one button that counts the number of characters in a file. There is also a label that says "File is processing please wait", then the character counting method is called, then the label updates to say "X characters in the file".

The character counting has a five-second delay because the guy is demonstrating async with a long task, and counting characters is easy to understand but happens pretty much instantaneously even on big files. The problem with this first program, he explains, is that when the button is pressed the label won't update with the "please wait" message, the form won't be responsive, and the form can't be moved or resized while it's processing. Then finally when it completes it will update the label with "X characters counted" and can be moved again etc.

He never explains why this is the case though. He shows that when done with an async method you don't have these problems and shows how to write it. How come the first solution had those problems though? Is it just a problem with forms?




This is the tutorial; he explains the problems and shows them at 7:00:

https://www.youtube.com/watch?v=C5VhaxQWcpE

Thanks in advance!

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



uncurable mlady posted:

Anyone have any experience with sn.exe? I'm trying to set up cloud build agents for TeamCity and one of our projects requires strong naming. I can get the correct strong name CSP out of the system to install the PFX into, but even with a passwordless .pfx file, sn.exe prompts for a password (and newer versions don't seem to let you do cute tricks like piping input to sn).

If anyone has experience with this, I'd be grateful, because I'm pulling what's left of my hair out at this point.

Can you delay signing on your automated builds and do something like https://docs.microsoft.com/en-us/dotnet/framework/app-domains/delay-sign-assembly ?
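Roughly, that looks like this in the assembly (the key file name is a placeholder and only needs to contain the public key), plus sn -Vr on the build agents so the delay-signed, unverified assemblies still load; the real signing pass then happens wherever the private key is allowed to live:

code:
// AssemblyInfo.cs sketch -- the key file name is a placeholder and only needs
// to hold the public key. The CI build produces a delay-signed assembly; on the
// agent, register it to skip strong-name verification with:
//   sn -Vr MyAssembly.dll
using System.Reflection;

[assembly: AssemblyKeyFile("FooCorpPublicKey.snk")]
[assembly: AssemblyDelaySign(true)]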

EmmyOk posted:

Hello, pals, I've been looking at really basic async stuff and found a pretty neat video explaining the very basics. There's one thing he didn't fully explain, though, just that it happened. He writes a simple Windows Forms app with one button that counts the number of characters in a file. There is also a label that says "File is processing please wait", then the character counting method is called, then the label updates to say "X characters in the file".

The character counting has a five-second delay because the guy is demonstrating async with a long task, and counting characters is easy to understand but happens pretty much instantaneously even on big files. The problem with this first program, he explains, is that when the button is pressed the label won't update with the "please wait" message, the form won't be responsive, and the form can't be moved or resized while it's processing. Then finally when it completes it will update the label with "X characters counted" and can be moved again etc.

He never explains why this is the case though. He shows that when done with an async method you don't have these problems and shows how to write it. How come the first solution had those problems though? Is it just a problem with forms?

It's because event handlers run on the UI thread and doing anything will prevent the UI from updating because that thread is busy running your code. If your code takes human-noticeable time to run, it looks like your application is frozen. You have to manually kick off a task on a different thread, which is what async does (sort of), but it used to be much more boiler-platey to do that.

e: Dietrich's explanation is probably more accurate

Munkeymon fucked around with this message at 16:44 on Nov 14, 2017

Dietrich
Sep 11, 2001

By default you've got one thread, and the UI updates and your backend code are both sharing it. When you aren't doing async then you're locking the only thread while the long-running operation is occurring, so the text update probably won't actually get processed most of the time.

When you do it async, you're being a better consumer of this limited resource. You let the framework handle the specifics, but you're basically saying "I'm gonna run this process that will take a while, if anything comes up while this is running go ahead and take what you need to handle that, when it's done I'm going to take over again though."

The other approach is to use background threads explicitly, but that gets complicated very quickly and would be best to avoid unless you know what you're doing and how to handle the edge cases. Just use async because 99% of the time it will get you the results you want without having to deal with any of that complexity.
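A minimal WinForms sketch of the before/after (CountCharacters, the control names, and the path are all placeholders; the "work" here is CPU-ish, so the async version pushes it to the thread pool with Task.Run, while truly asynchronous I/O wouldn't even need that):

code:
// Inside a Form class; assumes using System, System.Threading.Tasks, System.Windows.Forms.

// Blocking version: everything runs on the UI thread, so the label never
// repaints and the window freezes until the count finishes.
private void countButton_Click(object sender, EventArgs e)
{
    statusLabel.Text = "File is processing please wait";
    int count = CountCharacters(@"C:\temp\big.txt");   // ties up the UI thread
    statusLabel.Text = count + " characters in the file";
}

// Async version: await hands the UI thread back while the work runs on the
// thread pool, then resumes on the UI thread to update the label.
private async void countButtonAsync_Click(object sender, EventArgs e)
{
    statusLabel.Text = "File is processing please wait";
    int count = await Task.Run(() => CountCharacters(@"C:\temp\big.txt"));
    statusLabel.Text = count + " characters in the file";
}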

Munkeymon posted:

You have to manually kick off a task on a different thread, which is what async does, but it used to be much more boiler-platey to do that.

Async is not multi-threading, unless it is.

Dietrich fucked around with this message at 16:45 on Nov 14, 2017

EmmyOk
Aug 11, 2013

Dietrich posted:

By default you've got one thread, and the UI updates and your backend code are both sharing it. When you aren't doing async then you're locking the only thread while the long-running operation is occurring, so the text update probably won't actually get processed most of the time.

When you do it async, you're being a better consumer of this limited resource. You let the framework handle the specifics, but you're basically saying "I'm gonna run this process that will take a while, if anything comes up while this is running go ahead and take what you need to handle that, when it's done I'm going to take over again though."

The other approach is to use background threads explicitly, but that gets complicated very quickly and would be best to avoid unless you know what you're doing and how to handle the edge cases. Just use async because 99% of the time it will get you the results you want without having to deal with any of that complexity.


Async is not multi-threading, unless it is.

The guy has a follow-up video where he explains why starting a new thread was a bad idea even if you knew the workarounds to make everything work as intended. I think he says the final version that "works" is basically pure luck that in this case it works as you want.

Your explanation is really, really great and I appreciate it a lot! If possible, could you explain why the text update doesn't typically get processed first?

e: I'm rushing into work soon or I'd try it myself, but in the given example code, if I appended to the label rather than replaced it, would I get both pieces of text concatenated at the end after the "freezing"?

Munkeymon posted:

It's because event handlers run on the UI thread and doing anything will prevent the UI from updating because that thread is busy running your code. If your code takes human-noticeable time to run, it looks like your application is frozen. You have to manually kick off a task on a different thread, which is what async does (sort of), but it used to be much more boiler-platey to do that.


Thank you also!
