crashdome
Jun 28, 2011

EssOEss posted:

For an anonymized scenario (because trade secrets and paranoia), think of each client as constantly producing a pile of arbitrary bytes and shipping them to the server for post-processing/correlation and publishing of the results. The server will occasionally send commands to adjust the workloads/configuration of the clients according to the state of the "swarm". <10 clients, max 10 Mbps of data produced per client, max 1-2 seconds latency anywhere.

I do not really know any of those technologies listed above, besides ASP.NET Web API. Hmm... perhaps this is a good opportunity to learn! I think I will make a basic prototype with the different approaches and see what feels most natural to me.

I'd roll my own protocol. Pack everything into byte arrays with the first few bytes indicating functions and transport data. E.g. the first four bytes are origin/destination addresses, the next two indicate the function/purpose of the data, the next two indicate the size of the data to follow, then the big block of data, and the last two are for error correction. You would be able to skip using a heavy library, and you could even break up data into several messages. Server commands would use the same structure but with smaller data blocks to set value points. It would be really fast. There's no security, though that could be handled by encrypting the messages before sending. This is very similar in purpose to why DNP3 and the legacy ModBus still exist in lieu of Web APIs and message queues.
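As a sketch of the layout described above (the field widths and the additive checksum are illustrative assumptions, not a real protocol):

```csharp
using System;

// Hypothetical framing for the hand-rolled protocol sketched in the post.
byte[] PackMessage(ushort origin, ushort destination, ushort function, byte[] payload)
{
    var buffer = new byte[4 + 2 + 2 + payload.Length + 2];
    BitConverter.GetBytes(origin).CopyTo(buffer, 0);      // bytes 0-1: origin address
    BitConverter.GetBytes(destination).CopyTo(buffer, 2); // bytes 2-3: destination address
    BitConverter.GetBytes(function).CopyTo(buffer, 4);    // bytes 4-5: function/purpose
    BitConverter.GetBytes((ushort)payload.Length).CopyTo(buffer, 6); // bytes 6-7: payload size
    payload.CopyTo(buffer, 8);                            // the data block itself
    ushort checksum = 0;                                  // trivial stand-in for real error correction
    foreach (var b in payload) checksum += b;
    BitConverter.GetBytes(checksum).CopyTo(buffer, 8 + payload.Length);
    return buffer;
}

var message = PackMessage(origin: 1, destination: 2, function: 3, payload: new byte[] { 10, 20 });
Console.WriteLine(BitConverter.ToString(message));
```

A real checksum would be a CRC, and multi-byte fields would need an agreed byte order between client and server (BitConverter follows the host's endianness).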

edit:

EssOEss posted:

DNP3 looks pretty neat, thanks! I am reading the opendnp3 implementation right now - is this the main one that I should be looking at? What implementations would you recommend? Oh wow, opendnp3 even has .NET bindings! I am definitely giving this one a try!

You could use that existing library for sure. You'd have the ability to expand on your functions by using it. Or you could roll your own as I said above if you think the project is too simple and will not have a very bright future.

I have a book on it and it was intimidating, but once you realize it's just a big byte array with each set of bytes serving a specific purpose, it becomes very clear. ModBus and other manufacturers' proprietary protocols are superseded by this because you can define the type of data within the message (e.g. are the four bytes of data I am receiving a timestamp? An Int32? A UInt32? etc.)
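That point about typed data can be illustrated with plain BitConverter (this is just a .NET illustration, not DNP3 itself): the receiver gets identical bytes either way, so the message has to declare which type they are.

```csharp
using System;

// The same four bytes read two different ways; only the protocol's
// declared type tells the receiver which interpretation is correct.
byte[] data = { 0xF0, 0xFF, 0xFF, 0xFF };

int asInt32 = BitConverter.ToInt32(data, 0);    // -16
uint asUInt32 = BitConverter.ToUInt32(data, 0); // 4294967280

Console.WriteLine(asInt32);
Console.WriteLine(asUInt32);
```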

crashdome fucked around with this message at 19:18 on Jan 15, 2016

Literal Hamster
Mar 11, 2012

YOSPOS
I wrote a simple multi-threaded image scraper to try and teach myself how threading works. It mostly works, but the scraper will often seem to download multiples of the same image. Do I have a race condition on my hands?

Relevant code below, source code is here.
code:
public async Task ScrapeAsync (IProgress <ProgressReport> progress)
{
	var host = new Uri (options.Uri);
	var runningTasks = new List <Task>
	{
		// Provide a starting scrape task
		DoScrapeAsync (new LinkItem (options.Uri, null), options.HRefXPathExpression, options.ImageXPathExpression, host, progress)
	};

	// Use a do-while loop so that the loop executes at least once
	do
	{
		runningTasks.Remove (await Task.WhenAny (runningTasks));

		while (queuedLinkItems.Any () && (runningTasks.Count < options.MaxConcurrentOperations))
		{
			var link = queuedLinkItems.Dequeue ();
			runningTasks.Add (DoScrapeAsync (link, options.HRefXPathExpression, options.ImageXPathExpression, host, progress));
		}
	}
	while (runningTasks.Any ());
}
code:
protected async Task DoScrapeAsync (LinkItem link, string hRefXPath, string imageXPath, Uri host, IProgress <ProgressReport> progress)
{
	var page = await ScrapePageAsync (link, hRefXPath, imageXPath, progress);
	if (page == null) return;
	await SaveImagesAsync (await ScrapeImagesAsync (page, host, progress));
}
code:
protected async Task <Page> ScrapePageAsync (LinkItem link, string hRefXPath, string imageXPath, IProgress <ProgressReport> progress)
{
	var document = new HtmlDocument ();
	try
	{
		using (var client = new WebClient ()) document.LoadHtml (await client.DownloadStringTaskAsync (new Uri (link.HRef)));
	}
	catch (WebException)
	{
		return null;
	}
	var linkItems = ScrapeLinkItems (document.DocumentNode.SelectNodes (hRefXPath));
	var imageItems = ScrapeImageItems (document.DocumentNode.SelectNodes (imageXPath));
	finishedLinkItems.Add (link);
	QueueLinksForScraping (linkItems);
	progress?.Report (new ProgressReport (finishedLinkItems.Count, totalLinkItems));
	return new Page (linkItems, imageItems);
}

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
[edit] It's a console app, so yeah it's running on thread pool threads.

Check this out:
http://blogs.msdn.com/b/pfxteam/archive/2012/01/20/10259049.aspx


Basically, your tasks are all running on separate threads, but you've built literally no thread safety into the application so you're hitting race conditions. In WPF/WinForms apps, there's a synchronization context that is captured when you await something, so it can pick back up on the same thread. One thread, no race conditions.

New Yorp New Yorp fucked around with this message at 20:45 on Jan 18, 2016

Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy
Your race conditions are probably coming from queuedLinkItems which is just a vanilla Queue<T>. You'll either want to put a lock around the queue access, or change to using ConcurrentQueue<T>.

That said, what you've got written there seems like it could benefit significantly from just relying on the .NET ThreadPool and using Task.Run to schedule work. Your options.MaxConcurrentOperations is almost guaranteed to not be as good (nor as simple to use) as the ramp-up algorithm that .NET uses by default.

EDIT: Also there's no image de-duplication, so if there are multiple of the same image on a page you'll get all of them.

Bognar fucked around with this message at 21:31 on Jan 18, 2016

Literal Hamster
Mar 11, 2012

YOSPOS
Thanks for the advice, threading is pretty :staredog: to me right now.

Ithaqua posted:

[edit] It's a console app, so yeah it's running on thread pool threads.

Check this out:
http://blogs.msdn.com/b/pfxteam/archive/2012/01/20/10259049.aspx


Basically, your tasks are all running on separate threads, but you've built literally no thread safety into the application so you're hitting race conditions. In WPF/WinForms apps, there's a synchronization context that is captured when you await something, so it can pick back up on the same thread. One thread, no race conditions.
Thanks for that article, I didn't realize that the await keyword won't necessarily resume on the original thread in a console app. This means that I will absolutely 'have' to use thread-safe features like lock in my program then, correct?

I've been following along with Essential C# 6.0 from Addison-Wesley, and the section on using the lock keyword was somewhat confusing. If I were to lock access to my link item queue, like so:
code:
protected void QueueLinksForScraping (IEnumerable <LinkItem> linkItems)
{
	foreach (var item in linkItems)
	{
		lock (queuedLinkItems)
		{
			if (!finishedLinkItems.Contains (item) && !queuedLinkItems.Contains (item)) queuedLinkItems.Enqueue (item);
			totalLinkItems = queuedLinkItems.Count > totalLinkItems ? queuedLinkItems.Count : totalLinkItems; // Total should be reflective of the total number of links scraped
		}
	}
}
Would that prevent concurrent access by multiple threads, and therefore prevent a race condition from occurring?

Bognar posted:

Your race conditions are probably coming from queuedLinkItems which is just a vanilla Queue<T>. You'll either want to put a lock around the queue access, or change to using ConcurrentQueue<T>.

That said, what you've got written there seems like it could benefit significantly from just relying on the .NET ThreadPool and using Task.Run to schedule work. Your options.MaxConcurrentOperations is almost guaranteed to not be as good (nor as simple to use) as the ramp-up algorithm that .NET uses by default.

EDIT: Also there's no image de-duplication, so if there are multiple of the same image on a page you'll get all of them.
The reasoning behind the options.MaxConcurrentOperations property was to limit the amount of load placed on a target domain while it is being scraped. If I relied on the hill-climbing algorithm in the default ThreadPool, I could theoretically wind up with 20 threads performing a scraping task concurrently, couldn't I?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Daysvala posted:

Thanks for the advice, threading is pretty :staredog: to me right now.

Thanks for that article, I didn't realize that the await keyword won't necessarily resume on the original thread in a console app. This means that I will absolutely 'have' to use thread-safe features like lock in my program then, correct?


Look at the System.Collections.Concurrent namespace. There's a thread-safe Queue in there, along with other thread-safe collections.

Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy

Daysvala posted:

Would that prevent concurrent access by multiple threads, and therefore prevent a race condition from occurring?

You'd need a lock around every access of queuedLinkItems, so you'd also need one inside ScrapeAsync. This is why using a ConcurrentQueue is typically recommended - it's easy to forget to lock around all usages of a thread-unsafe collection.
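A minimal sketch of that swap (using string in place of the scraper's LinkItem type):

```csharp
using System;
using System.Collections.Concurrent;

// ConcurrentQueue<T> makes each individual operation thread-safe, so no
// lock is needed around Enqueue/TryDequeue themselves.
var queuedLinkItems = new ConcurrentQueue<string>();

queuedLinkItems.Enqueue("http://example.com/page1");

// Dequeue becomes TryDequeue: it returns false instead of throwing when empty.
if (queuedLinkItems.TryDequeue(out var link))
    Console.WriteLine(link);
```

One caveat: the Contains-then-Enqueue check in QueueLinksForScraping is still two separate operations, so it can race even on a ConcurrentQueue; a ConcurrentDictionary used as a set (its TryAdd is atomic) is the usual fix for de-duplication.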

Daysvala posted:

The reasoning behind the options.MaxConcurrentOperations property was to limit the amount of load placed on a target domain while it is being scraped. If I relied on the hill-climbing algorithm in the default ThreadPool, I could theoretically wind up with 20 threads performing a scraping task concurrently, couldn't I?

Yeah, the only limit you could impose would be ThreadPool.SetMaxThreads. If you want to do rate limiting in a more idiomatic fashion, use a SemaphoreSlim and use the WaitAsync method before doing work and Release method after finishing.
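A sketch of that SemaphoreSlim pattern (Task.Delay stands in for the real scrape work, and 4 is an arbitrary cap):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

var throttle = new SemaphoreSlim(4); // at most 4 operations in flight at once
var completed = new ConcurrentBag<int>();

async Task ThrottledScrapeAsync(int id)
{
    await throttle.WaitAsync();   // take a slot before doing work
    try
    {
        await Task.Delay(25);     // placeholder for the real async work
        completed.Add(id);
    }
    finally
    {
        throttle.Release();       // always give the slot back, even on failure
    }
}

await Task.WhenAll(Enumerable.Range(0, 20).Select(ThrottledScrapeAsync));
Console.WriteLine($"completed {completed.Count} operations");
```

The try/finally matters: a scrape that throws without releasing its slot would permanently shrink the pool.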

raminasi
Jan 25, 2005

a last drink with no ice
I am trying to drag-and-drop a DataObject between two instances of the same application, and I'm having the same problem this guy is: trying to get my custom data out of the DataObject throws a COMException with "Invalid tymed" as the message. I can make bare text work just fine. What is a tymed and how do I make it valid?

e: Apparently this is the error message you get when you try to shuttle arbitrary .NET objects over, which is not allowed.

raminasi fucked around with this message at 23:47 on Jan 18, 2016

Finster Dexter
Oct 20, 2014

Beyond is Finster's mad vision of Earth transformed.
I'm more than a little miffed that I have to deal with ads in Visual Studio.

https://blogs.msdn.microsoft.com/webdev/2016/01/12/visual-studio-keeps-showing-suggested-extensions/

quote:

In Visual Studio 2015 Update 1 we introduced a mechanism that would analyze a web project and suggest helpful Visual Studio extensions based on what it could find. For instance, if the project was using the Bootstrap CSS framework, it would suggest two very cool extensions specifically for working with Bootstrap.

http://blogs.msdn.com/b/visualstudioalm/archive/2015/11/18/announcing-public-preview-of-visual-studio-marketplace.aspx

quote:

We’re working on enabling commerce and publisher profiles in the future, which will make it hassle-free for publishers to monetize their extensions.

:fuckoff:

Sedro
Dec 31, 2008

Finster Dexter posted:

I'm more than a little miffed that I have to deal with ads in Visual Studio.
It's all part of the Windows 10 experience

SixPabst
Oct 24, 2006

Finster Dexter posted:

I'm more than a little miffed that I have to deal with ads in Visual Studio.

Let's hope it's more of an app store or something for extensions instead of ads.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

mintskoal posted:

Let's hope it's more of an app store or something for extensions instead of ads.

I'm 100% certain that's the case.

EssOEss
Oct 23, 2006
128-bit approved
Yeah, no. Even the new Azure portal is chock full of loving ads. Try to create a new resource, you get some promotions instead of the actually useful stuff. Click under storage expecting to create a storage account? Hah! Promotions is all you get on the 1st page, you have to type "Storage" into the search box and search for it... while in the storage section... to create a new storage account.

Having created some resource, quite a few of them have blatant ads in their "Action items" or whatever they are called. Drill down into a Web App and what do you see? Yes, at the top of one of the sections you have Badass Security or whatever it was, clicking on which just leads you to purchase some fancy security addon product/service. Some Zend PHP thingamajig also got added into the action items recently with great marketing fanfare.

Ochowie
Nov 9, 2007

Ithaqua posted:

mintskoal posted:

Let's hope it's more of an app store or something for extensions instead of ads.

I'm 100% certain that's the case.

Even so, paying upwards of $1000 and then seeing ads is ridiculous. Same with Azure. If Google can make a basically ad free experience in their cloud platform why can't Microsoft?

Ochowie fucked around with this message at 07:41 on Jan 22, 2016

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison

Ochowie posted:

I'm 100% certain that's the case.

Even so, paying upwards of $1000 and then seeing ads is ridiculous. Same with Microsoft. If Google can make a basically ad free experience in their cloud platform why can't Microsoft?

Because Microsoft is run by insane howling gibbons who demand maximal revenue from every possible source.

Ochowie
Nov 9, 2007

uncurable mlady posted:

Because Microsoft is run by insane howling gibbons who demand maximal revenue from every possible source.

So much for the new Microsoft.

Calidus
Oct 31, 2011

Stand back I'm going to try science!
Anyone ever had to use the QuickBooks .NET API/SDK? General thoughts? Easy to use?

Literal Hamster
Mar 11, 2012

YOSPOS
Is the following (unit test code) bad practice?
code:
[TestMethod]
public async Task SaveImageAsyncTest ()
{
	var fileHandler = new FileHandler ();

	var result = await fileHandler.SaveImageAsync (null, null);
	Assert.IsFalse (result, $"The method returned {result} when {!result} was expected.");
}
I could use a string literal and write "The method returned 'true' when 'false' was expected." instead, but then what if I decide that I'm actually expecting false instead of true later on?

E: Wrong method return type.

Literal Hamster fucked around with this message at 23:32 on Jan 22, 2016

Sedro
Dec 31, 2008

Daysvala posted:

Is the following (unit test code) bad practice?
Yeah. Your tests should be as straightforward as possible. Avoid generic code, repeat yourself, and don't worry about changing your mind later.

That particular message is worse than writing no message at all. IsFalse could tell you all that. Say something about the behavior like "saving an invalid file handle should fail".

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Daysvala posted:

Is the following (unit test code) bad practice?
code:
[TestMethod]
public async void SaveImageAsyncTest ()
{
	var fileHandler = new FileHandler ();

	var result = await fileHandler.SaveImageAsync (null, null);
	Assert.IsFalse (result, $"The method returned {result} when {!result} was expected.");
}
I could use a string literal and write "The method returned 'true' when 'false' was expected." instead, but then what if I decide that I'm actually expecting false instead of true later on?

Your test will never fail if it's async void. Make it async Task.

As for your question, I don't think it matters. Also, why not make your test name match the intent of the test? "SaveImageAsyncTest" doesn't tell you anything about what the test is testing (other than "something regarding saving images"). You should be able to read the test name and understand the test case.

Literal Hamster
Mar 11, 2012

YOSPOS

Ithaqua posted:

Your test will never fail if it's async void. Make it async Task.

Whoops, sorry, just caught that little mistake.

Sedro posted:

Yeah. Your tests should be as straightforward as possible. Avoid generic code, repeat yourself, and don't worry about changing your mind later.

That particular message is worse than writing no message at all. IsFalse could tell you all that. Say something about the behavior like "saving an invalid file handle should fail".

Okay, thanks. I assumed that unit testing should follow the usual best practices of code re-usability and modularity. I'll provide more specific messages specifying why a unit test failed and what the wrong behavior was going forward.

Metaconcert
Nov 28, 2010

"And my answer is when there are nine"

Ithaqua posted:

You should be able to read the test name and understand the test case.

This, by the way, is something I've found hilariously valuable when altering behaviour in an existing system. I get to use tests to chase down the consequences of what I've done. Conversely, when things aren't named well (I would also accept comments or clear details in version control), it becomes unclear whether the test serves any value, and whether its failure indicates a justifiable behavioural impact or that I've just written a bug into the system.

EssOEss
Oct 23, 2006
128-bit approved
Assert messages are worse than useless - nobody updates them, and mostly they will just be copy-pastes from unrelated test cases that are barely even relevant. I consider their use an anti-pattern.

What the test case checks should be understandable from the name, as said above, so don't bother writing fancy messages in the asserts. Focus on ironing out a good set of test cases with minimal effort, not on making them sparkly and verbose.

IratelyBlank
Dec 2, 2004
The only easy day was yesterday
Does anyone know if NumPy/SciPy work with IronPython in C#? I am able to execute Python code and scripts just fine from within C#, but when I try to use one of my Python scripts that imports numpy, I immediately get an error: "No module named numpy". My googling says that this wasn't possible a few years ago, but all the questions about using SciPy/NumPy in IronPython are now years old and it's unclear to me whether or not this is possible.

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
Speaking of IronPython... does anyone have any idea when there will be a stable release that supports Python 3?

epswing
Nov 4, 2003

Soiled Meat
It takes about a minute and a half from when I run my ASP project from VS (using IISExpress) to when it loads in the browser. This is, of course, driving me crazy.

I think I have this problem http://stackoverflow.com/questions/12567984/visual-studio-debugging-loading-very-slow

But I've tried everything in that post, with no results.

My Output window is full of lines like this:

'iisexpress.exe' (CLR v4.0.30319: /LM/W3SVC/5/ROOT-1-130982221893696077): Loaded 'C:\Users\Me\AppData\Local\Temp\Temporary ASP.NET Files\root\af371766\cb37d5bc\assembly\dl3\77778309\009fadf8_a257d101\Microsoft.Owin.Security.dll'. Cannot find or open the PDB file.

Anyone seen this before and have any other suggestions?

Cuntpunch
Oct 3, 2003

A monkey in a long line of kings

epalm posted:

It takes about a minute and a half from when I run my ASP project from VS (using IISExpress) to when it loads in the browser. This is, of course, driving me crazy.

I think I have this problem http://stackoverflow.com/questions/12567984/visual-studio-debugging-loading-very-slow

But I've tried everything in that post, with no results.

My Output window is full of lines like this:

'iisexpress.exe' (CLR v4.0.30319: /LM/W3SVC/5/ROOT-1-130982221893696077): Loaded 'C:\Users\Me\AppData\Local\Temp\Temporary ASP.NET Files\root\af371766\cb37d5bc\assembly\dl3\77778309\009fadf8_a257d101\Microsoft.Owin.Security.dll'. Cannot find or open the PDB file.

Anyone seen this before and have any other suggestions?

Let me start by saying that some of this isn't curable. I, too, notice that debugging web apps locally suffers a rather long initial load time. But here are some questions to ask:

How much of that time is going into the build itself?

Does this happen ONLY when you're debugging? What happens if you start *without* debugging through VS? Same time? It's probably not a symbols issue at that point.

Are you in an enterprise environment? Do you have a/v scanning going on constantly? It could be that during the build-and-deploy-to-local process you're setting off something that has the corporate a/v doing full scans of every single file, which increases load time.

Cuntpunch fucked around with this message at 01:31 on Jan 26, 2016

Inverness
Feb 4, 2009

Fully configurable personal assistant.

epalm posted:

It takes about a minute and a half from when I run my ASP project from VS (using IISExpress) to when it loads in the browser. This is, of course, driving me crazy.

I think I have this problem http://stackoverflow.com/questions/12567984/visual-studio-debugging-loading-very-slow

But I've tried everything in that post, with no results.

My Output window is full of lines like this:

'iisexpress.exe' (CLR v4.0.30319: /LM/W3SVC/5/ROOT-1-130982221893696077): Loaded 'C:\Users\Me\AppData\Local\Temp\Temporary ASP.NET Files\root\af371766\cb37d5bc\assembly\dl3\77778309\009fadf8_a257d101\Microsoft.Owin.Security.dll'. Cannot find or open the PDB file.

Anyone seen this before and have any other suggestions?
I've had something like this happen with normal apps when debugging is configured to try to load symbols for all modules using a symbol server. Does Just My Code apply here?
I'm not familiar with how this would work for ASP.NET stuff.

darthbob88
Oct 13, 2011

YOSPOS
What's the best way to handle multiple fast TCP requests? I need to send and receive 500 simple TCP requests and responses in 30 seconds. I can send and receive just fine, and send 500 requests in 30 seconds, but I'm having trouble handling those 500 responses quickly enough. Here's my current best solution.
code:
using System;
using System.Net.Sockets;
using System.Text;
using System.Xml;

namespace TCPConnection
{
    class Program
    {
        static void Main(string[] args)
        {
            var responses = new String[500];
            var target = new TcpClient(<TARGET REDACTED>);
            NetworkStream stream = target.GetStream();

            var bar = new XmlDocument();
            for (var i = 1; i <= 500; i++)
            {
                Console.WriteLine(i);
                var newMessage = "<?xml version='1.0' encoding='ISO-8859-1'?><request><requestID>" + i +
                                 "</requestID></request>";
                Byte[] data = Encoding.ASCII.GetBytes(newMessage);
                stream.Write(data, 0, data.Length);
                stream.Flush();

                data = new Byte[256];

                // Read the first batch of the TcpServer response bytes.
                Int32 bytes = stream.Read(data, 0, data.Length);
                String responseData = Encoding.ASCII.GetString(data, 0, bytes);

                bar.LoadXml(responseData);
                responses[i - 1] = bar.DocumentElement.LastChild.InnerText;
            }
            Console.Write(responses);
        }
    }
}
The best plan of attack I can think of is either finding a faster connection, or doing something with async read/write, but how the hell does my async read method know that there's data to read?

darthbob88 fucked around with this message at 02:46 on Jan 26, 2016

EssOEss
Oct 23, 2006
128-bit approved
First of all, your code is conceptually flawed - you already have the problem that there is no way to know when you have read enough data from the stream. There is no basis for assuming that Read() will give you the data you need. It might give you 1 byte, the next call 8 bytes and the next call the final 200 bytes. Or it might split the response up into a bajillion 3 byte segments. Or whatever it wants.

In other words, you cannot assume that data contains your response after Read(). The whole communication protocol here is conceptually flawed - if you use TCP you must have some protocol-defined way to tell when a message begins or ends. For example, HTTP uses two newlines to signal end of headers and the Content-Length header to signal the length of the content that follows. Your protocol needs something equivalent.

What is it that you are actually trying to achieve here? Why not use HTTP? I hope this isn't some already-existing lovely protocol that you have to work with!
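One common equivalent is a length prefix. A minimal sketch of that scheme (the 4-byte length header is an illustrative choice, not part of the original code, and a MemoryStream stands in for the NetworkStream):

```csharp
using System;
using System.IO;

// Length-prefixed framing: each message is a 4-byte length followed by
// exactly that many payload bytes.
void WriteFrame(Stream stream, byte[] payload)
{
    stream.Write(BitConverter.GetBytes(payload.Length), 0, 4);
    stream.Write(payload, 0, payload.Length);
}

byte[] ReadFrame(Stream stream)
{
    int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
    return ReadExactly(stream, length);
}

// Stream.Read may return fewer bytes than requested, so loop until the
// full count has arrived - this is the detail the original code misses.
byte[] ReadExactly(Stream stream, int count)
{
    var buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0) throw new EndOfStreamException("connection closed mid-frame");
        offset += read;
    }
    return buffer;
}

// Round-trip demo over an in-memory stream.
var ms = new MemoryStream();
WriteFrame(ms, new byte[] { 1, 2, 3 });
ms.Position = 0;
byte[] frame = ReadFrame(ms);
Console.WriteLine(frame.Length); // 3
```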

darthbob88
Oct 13, 2011

YOSPOS

EssOEss posted:

First of all, your code is conceptually flawed - you already have the problem that there is no way to know when you have read enough data from the stream. There is no basis for assuming that Read() will give you the data you need. It might give you 1 byte, the next call 8 bytes and the next call the final 200 bytes. Or it might split the response up into a bajillion 3 byte segments. Or whatever it wants.

In other words, you cannot assume that data contains your response after Read(). The whole communication protocol here is conceptually flawed - if you use TCP you must have some protocol-defined way to tell when a message begins or ends. For example, HTTP uses two newlines to signal end of headers and the Content-Length header to signal the length of the content that follows. Your protocol needs something equivalent.
Honestly, apart from it taking too long, it's been working fine so far. Though I have had similar problems with async methods, which either read the entire response, or empty bytes.

quote:

What is it that you are actually trying to achieve here? Why not use HTTP? I hope this isn't some already-existing lovely protocol that you have to work with!
No, just an assigned project: send 500 simple XML requests and receive 500 simple XML responses within a 30 second window, then do some further processing on those responses. The processing is simple enough, and the sending and receiving is simple enough; it's the 30 second window that's giving me difficulty, given that reading each response takes about half a second. Though TBH I'm beginning to think most of that is just network latency, given that there are three states, two mountain ranges, and a time zone between me and the server I'm pinging.

epswing
Nov 4, 2003

Soiled Meat

Cuntpunch posted:

Let me start by saying that some of this isn't curable. I, too, notice that debugging web apps locally suffers a rather long initial load time. But here are some questions to ask:

How much of that time is going into the build itself?

Zero.

quote:

Does this happen ONLY when you're debugging? What happens if you start *without* debugging through VS? Same time? It's probably not a symbols issue at that point.

Still takes a long time. Close to a minute.

quote:

Are you in an enterprise environment? Do you have a/v scanning going on constantly? It could be that during the build-and-deploy-to-local process you're setting off something that has the corporate a/v doing full scans of every single file, which increases load time.

Standalone PC. Windows 7, MS Security Essentials. My co-workers are working on the same code from the same repository, and they don't have this issue.

Edit: this is going to be one of those "I suggest you re-install your entire operating system" scenarios isn't it.

epswing fucked around with this message at 16:21 on Jan 26, 2016

EssOEss
Oct 23, 2006
128-bit approved
Run Process Monitor and capture a trace while the slowness is happening. Do you see anything interesting there? Post the trace if it's not sensitive info.

Cuntpunch
Oct 3, 2003

A monkey in a long line of kings

epalm posted:

Still takes a long time. Close to a minute.

Ok, so loading symbols makes it worse, but it isn't solely responsible! See, we've ~figured something out~

epalm posted:

Standalone PC. Windows 7, MS Security Essentials. My co-workers are working on the same code from the same repository, and they don't have this issue.

Any other difference in the machines? I mean, are they running with different specs?

Beyond machine stuff, are they running in some separate build config with different IIS configuration?

No Safe Word
Feb 26, 2005

epalm posted:

Zero.


Still takes a long time. Close to a minute.


Standalone PC. Windows 7, MS Security Essentials. My co-workers are working on the same code from the same repository, and they don't have this issue.

Edit: this is going to be one of those "I suggest you re-install your entire operating system" scenarios isn't it.

This all reads like something that should be submitted to Mark Russinovich for his "Case of the Unexplained" series that he presents at every TechEd: https://technet.microsoft.com/en-us/sysinternals/bb963887.aspx

Honestly, you could go watch some of those and maybe get an idea of how to troubleshoot it. To nobody's surprise a lot of them hinge around running SysInternals tools :)

epswing
Nov 4, 2003

Soiled Meat

EssOEss posted:

Run Process Monitor and capture a trace while the slowness is happening. Do you see anything interesting there? Post the trace if it's not sensitive info.

I downloaded and ran Process Monitor, and then ran the ASP project. PM collected 500,000 events. I don't know what's signal and what's noise, but I'm combing through it to see if anything sticks out. I'm actually looking for a line that says "check this checkbox in Visual Studio to solve your specific problem" :)

Edit: a lot of the procmon data seems to be related to files in C:\FusionLogs :confused: no idea what that folder is, or does. The hunt continues.

Cuntpunch posted:

Ok, so symbols loading make it worse, but they aren't solely responsible! See, we've ~figured something out~


Any other difference in the machines? I mean, are they running with different specs?

Beyond machine stuff, are they running in some separate build config with different IIS configuration?

Similar machines, all running Win 7 Pro, SSDs, 12gb of ram, intel i7. Same build config.

No Safe Word posted:

This all reads like something that should be submitted to Mark Russinovich for his "Case of the Unexplained" series that he presents at every TechEd: https://technet.microsoft.com/en-us/sysinternals/bb963887.aspx

Honestly, you could go watch some of those and maybe get an idea of how to troubleshoot it. To nobody's surprise a lot of them hinge around running SysInternals tools :)

I just want to write software :(

epswing fucked around with this message at 18:36 on Jan 26, 2016

No Safe Word
Feb 26, 2005

epalm posted:

I downloaded and ran Process Monitor, and then ran the ASP project. PM collected 500,000 events. I don't know what's signal and what's noise, but I'm combing through it to see if anything sticks out. I'm actually looking for a line that says "check this checkbox in Visual Studio to solve your specific problem" :)

Edit: a lot of the procmon data seems to be related to files in C:\FusionLogs :confused: no idea what that folder is, or does. The hunt continues.
That's apparently some standard .NET assembly binding failure logging thing. There's a viewer app for it as well, so looking in there may tell you why you're having assembly binding issues.

epswing
Nov 4, 2003

Soiled Meat
As described in that original SO post, 9th answer down (I must have missed it), turning off FusionLogs by setting the HKLM\Software\Microsoft\Fusion\ForceLog registry value to 0 has solved all my worldly problems.

:phoneb:

Thanks all, I was about to quit and open a lemonade stand outside my house.

epswing fucked around with this message at 18:55 on Jan 26, 2016

No Safe Word
Feb 26, 2005

ProcMon wins again! Seriously the SysInternals stuff is kinda ugly and busy but super freaking useful. Process Explorer has been my Task Manager replacement since Ye Olde WinXP days.

gariig
Dec 31, 2004
Beaten into submission by my fiance
Pillbug

darthbob88 posted:

No, just an assigned project, to send 500 simple XML requests and receive 500 simple XML responses, within a 30 second window, and then do some further processing on those responses. The processing is simple enough, the sending and receiving is simple enough, it's the 30 second window that's giving me difficulty, given that reading each response is taking about half a second each. Though TBH I'm beginning to think most of that is just network latency, given that there are three states, two mountain ranges, and a time zone between me and the server I'm pinging.

Unless your latency can be under 60ms, you're going to have to parallelize this. Has the course gone into this? Using the Task Parallel Library (TPL) with a concurrent collection is probably all you need, unless ordering is important. Just a little async/await will probably get you doing this in a few seconds.

Don't forget Stopwatch for timing!
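A sketch of what that looks like (SendRequestAsync is a hypothetical stand-in for the real send/receive round trip; Task.Delay simulates ~50ms of network latency):

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

async Task<string> SendRequestAsync(int requestId)
{
    await Task.Delay(50); // placeholder for one network round trip
    return $"response {requestId}";
}

// Fire all 500 requests concurrently and time the whole batch.
var stopwatch = Stopwatch.StartNew();
string[] responses = await Task.WhenAll(
    Enumerable.Range(1, 500).Select(SendRequestAsync));
stopwatch.Stop();

// Concurrent: total time is close to one round trip, not 500 of them.
Console.WriteLine($"{responses.Length} responses in {stopwatch.ElapsedMilliseconds} ms");
```

One caveat: over a single TcpClient the responses all interleave on one stream, so real concurrency needs either a pool of connections or a way to match responses back to requests (the XML in the original code already carries a requestID, which helps with the latter).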
