|
There's http://www.syncfusion.com but they have a similar price tag to Kendo UI.
|
# ? Jun 8, 2015 16:46 |
|
|
|
Apparently the stupid WPF bug that prevents you from binding OneWayToSource to read-only properties still exists in .NET 4.5.1. Does anybody know if it's in 4.6?
|
# ? Jun 8, 2015 22:22 |
|
AuxPriest posted:There's http://www.syncfusion.com but they have a similar price tag as Kendo Ui. Thanks, but what makes you say that SyncFusion works with WebSharper? I can't find any mention of either on the other's page. The price tag for the UI library I don't mind - my boss has already committed to buying something nice in that area. But a WebSharper wrapper (premium extension) that costs as much as a DevCraft license would be a much harder sell, when there is no such additional cost for MVC. I could probably make the case for purchasing a WebSharper commercial license instead of a UI library, but not on top of one.
|
# ? Jun 8, 2015 22:54 |
I don't know if anyone has experience using Salesforce or their SOAP API, but here is a situation I'm not sure I understand: Salesforce's integration API has a class called SforceService that establishes a connection to the Salesforce environment. Since our application is meant to use only the one user account to do the work, and they want to minimize the number of times login gets called (we get a limited number of API calls per day), they want to make it a static class for the web app. As I understand it this will make it the same instance across all sessions of this web app, so it only needs to log in once every two hours for all users. However, thread safety is now a concern of mine. As it is right now, the SforceService is stored in a public static class that only contains it, with a private setter, a public getter, and a method to log in if it has timed out. Another class (we'll call it HelperA) gets created and stored in HttpContext.Current.Items. It then references SforceService to pass queries (which are formatted similar to SQL queries) to the class, such as code:
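As a sanity check on the pattern being described, here's a minimal sketch of a static holder that serializes the login/expiry check behind a lock. The SforceService here is a made-up stand-in for the real generated Salesforce proxy, and the two-hour lifetime is just the figure from the post:

```csharp
using System;

// Hypothetical stand-in for the generated Salesforce proxy class.
public class SforceService
{
    public string SessionId { get; private set; }
    public void Login() { SessionId = Guid.NewGuid().ToString(); }
}

// Static holder: one connection shared by every request in the app domain.
public static class SalesforceConnection
{
    private static readonly object _sync = new object();
    private static readonly TimeSpan SessionLifetime = TimeSpan.FromHours(2);
    private static DateTime _lastLogin = DateTime.MinValue;

    public static SforceService Service { get; private set; }

    // Called before every query; only logs in when the session has expired.
    public static SforceService GetService()
    {
        lock (_sync) // serializes the expiry check so two requests can't both re-login
        {
            if (Service == null || DateTime.UtcNow - _lastLogin > SessionLifetime)
            {
                Service = Service ?? new SforceService();
                Service.Login();
                _lastLogin = DateTime.UtcNow;
            }
            return Service;
        }
    }
}
```

Note the lock only protects the login/expiry bookkeeping; whether the real SforceService proxy is itself safe for concurrent query calls from multiple request threads is a separate question worth checking against Salesforce's docs.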
|
|
# ? Jun 9, 2015 19:28 |
|
You might have to implement a token-based web API to differentiate connections with a generic Login/Logout/Timeout and have that do all the calls to a back-end service which remains logged in at all times to Salesforce. Sounds like a nightmare to be honest. Why is there a daily limit to logins for an account which does a butt-load of work? edit: Ack, nevermind. That actually doesn't really help you. crashdome fucked around with this message at 20:01 on Jun 9, 2015 |
# ? Jun 9, 2015 19:54 |
crashdome posted:You might have to implement a token-based web API to differentiate connections with a generic Login/Logout/Timeout and have that do all the calls to a back-end service which remains logged in at all times to Salesforce. Sounds like a nightmare to be honest. Why is there a daily limit to logins for an account which does a butt-load of work? gently caress if I know, Salesforce is dumb? It's a general API call limit, and logging in counts as an API call. I would prefer to not use this but I'm not the one making the calls.
|
|
# ? Jun 9, 2015 19:59 |
|
Ok I hope I'm wording this right, but I want to figure out how to do something, and I'm really dumb when it comes to async/await stuff in general. Currently we are processing CSV files (that could contain upwards of 5 million rows) and populating the results into objects that will eventually live in the database. The methods are pretty convoluted and involve multiple steps, which all have to be done in order (for now). Some samples of the code I've inherited are below. I feel like it belongs in the coding horrors thread. C# code:
I don't have too much leeway to completely gut everything (i.e., we save those rawData objects to their own table at one point and FK the dbObjects to them) so I can't really skip any of the steps. What I'd like to do, however, is potentially run the first foreach loop until I hit some arbitrary number of rawDataRow objects (say 50k) and immediately kick off the foreach(var rawDataRow in rawDataRowList) loop with that set while the next 50k rawDatas get transformed into rawDataRows. This should be possible, right?
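The overlap described — save the current 50k batch while the next 50k is being transformed — is the classic producer/consumer shape. A sketch using BlockingCollection, with the transform and the save step replaced by made-up stand-ins since the real code isn't shown:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class BatchPipeline
{
    // Producer transforms rows into batches; the consumer persists each batch
    // while the producer is already working on the next one.
    public static int Run(IEnumerable<int> rawData, int batchSize, Action<List<int>> saveBatch)
    {
        // boundedCapacity keeps at most a couple of batches in memory at once.
        var batches = new BlockingCollection<List<int>>(boundedCapacity: 2);

        var producer = Task.Run(() =>
        {
            var batch = new List<int>(batchSize);
            foreach (var row in rawData)
            {
                batch.Add(row * 2); // stand-in for the rawData -> rawDataRow transform
                if (batch.Count == batchSize)
                {
                    batches.Add(batch);
                    batch = new List<int>(batchSize);
                }
            }
            if (batch.Count > 0) batches.Add(batch);
            batches.CompleteAdding();
        });

        int saved = 0;
        foreach (var batch in batches.GetConsumingEnumerable())
        {
            saveBatch(batch); // stand-in for the rawDataRow -> dbObject save
            saved += batch.Count;
        }
        producer.Wait();
        return saved;
    }
}
```

This only pays off if the transform and the save can genuinely run at the same time; if the whole job is bound on database round trips, overlapping the CPU work won't move the needle much.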
|
# ? Jun 9, 2015 20:32 |
|
aBagorn posted:What I'd like to do, however, is potentially run the first foreach loop until I hit some arbitrary number of rawDataRow objects (say 50k) and immediately kickoff the foreach(var rawDataRow in rawDataRowList) loop with that set while the next 50k rawDatas get transformed into rawDataRows. Just as a sanity check -- why? I assume you're running some kind of nightly batch processor that does this work? And you have more CPU/RAM on your processing machine to throw at the problem? Are you looking to have the overall job COMPLETE in a shorter time? Or are you looking to have partial results available immediately for some reason? (I ask because async/await is normally unrelated to these kinds of problems, and it's important to know exactly what+why you're doing...)
|
# ? Jun 9, 2015 21:00 |
|
ljw1004 posted:Just as a sanity check -- why? I'm looking to have the overall job to complete in a shorter time. We're running this in Azure so shorter processing times = less $$$. (I figured I didn't know what I was talking about.) As far as resources go, I cranked up the VM this service was running on to a D13 (8 cores, 56GB RAM) and we're 3 days into processing a file with 5 million records, which is unacceptable to the client. I have another strong feeling that reaching out to the DB (which lives on another server) during the foreach loops is also potentially a root cause of the problem, as well as improper management of the EF Context.
|
# ? Jun 9, 2015 21:11 |
|
5 million records isn't much but how you do it can make a big difference. My first suggestion would be to look into SQL Server Integration Services (SSIS), which is Microsoft's ETL (Extract-Transform-Load) software. It's meant to do this kind of work. When I was doing this kind of work, 5 million records would probably be 10-15 minutes if the SSIS package was designed correctly. My next suggestion: can you increase your data locality by caching the database in memory on the server? Going across the network is going to kill your performance, especially if you are reaching across the Internet over a VPN. Even a 5ms round-trip call (SQL Server on a LAN) for 5 million records is 6.9 hours of waiting on your network, not counting the call to insert everything (another 6.9 hours). I haven't done much EF but it doesn't seem to be built to load millions of records.
|
# ? Jun 9, 2015 22:11 |
|
gariig posted:I haven't done much EF but it doesn't seem to be built to load millions of records. It's not. I'd love to decouple this all from EF but that would require gutting the service, which isn't an option right now. I could probably use a raw SQL query to pull a selection of Rows that are likely candidates for duplicates at the beginning of the process and put them in memory.
|
# ? Jun 9, 2015 22:20 |
|
aBagorn posted:Ok I hope I'm wording this right, but I want to figure out how to do something, and I'm really dumb when it comes to async/await stuff in general. ugh. use a merge into, generate the csv row -> table row client side and just fire that poo poo off in one query, it'll be one big ol req and you can even have it log dupes ("WHEN MATCHED THEN fart") batching only makes sense if csvrow->dbobject is expensive so you can parallelize the work, otherwise you are io bound anyway. this looks like it's batching something in 30k chunks already aBagorn posted:I'm looking to have the overall job to complete in a shorter time. We're running this in Azure so shorter processing times = less $$$ jesus christ
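The MERGE shape being suggested looks roughly like this: bulk-load the parsed CSV rows into a staging table, then one statement does both the dupe check and the insert. All table and column names here are invented for illustration:

```sql
-- Staging table already bulk-loaded with the parsed CSV rows.
MERGE INTO dbo.Target AS t
USING dbo.Staging AS s
    ON t.NaturalKey = s.NaturalKey              -- the dupe-detection key
WHEN NOT MATCHED BY TARGET THEN
    INSERT (NaturalKey, Payload)
    VALUES (s.NaturalKey, s.Payload)
WHEN MATCHED THEN
    UPDATE SET t.DupeCount = t.DupeCount + 1    -- the "WHEN MATCHED THEN fart" part: record the dupe
OUTPUT $action, inserted.NaturalKey;            -- optional audit of what happened per row
```

The win is that dupe detection happens entirely inside SQL Server against indexed data, instead of one network round trip per row.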
|
# ? Jun 9, 2015 22:27 |
|
also you are materializing the stream into lists, so you have a shitload of rows in memory. you probs want to stream IEnumerables and manipulate them using LINQ
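A sketch of the difference: the eager version builds a full intermediate list at every step, while the lazy one holds a single row in flight at a time. Parse and Transform are invented stand-ins for the real steps:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class Streaming
{
    // Eager: for 5M rows, two extra fully materialized lists live in memory.
    public static List<string> Eager(List<string> lines)
    {
        var parsed = lines.Select(Parse).ToList();        // intermediate list #1
        return parsed.Select(Transform).ToList();          // intermediate list #2
    }

    // Lazy: each row flows through the whole pipeline before the next is read.
    public static IEnumerable<string> Lazy(IEnumerable<string> lines)
    {
        return lines.Select(Parse).Select(Transform);      // no intermediate lists
    }

    static string Parse(string line) { return line.Trim(); }
    static string Transform(string row) { return row.ToUpperInvariant(); }
}
```

Feeding the front of the pipeline with File.ReadLines (not File.ReadAllLines) keeps the CSV itself from being materialized as well.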
|
# ? Jun 9, 2015 22:30 |
|
gariig posted:5 million records isn't much but how you do it can make a big difference. My first suggestion would be to look into SQL Server Integration Services (SSIS) which is Microsoft's ETL (Extract-Transform-Load) software. It's meant to do this kind of work. When I was doing this kind of work 5 million records would probably be 10-15 minutes if the SSIS package was designed correctly. if this is creating a new req for each row i can see how it's taking days (!!!) unless constructing a dbobject from a row is super duper expensive
|
# ? Jun 9, 2015 22:31 |
|
My napkin math was just a guess from looking at aBagorn's code. It looked like the code was doing a dupe check for each object. Even still, it was just to show that even a normally cheap operation (a call to a database) done millions of times can be very significant. My 10-15 minute guess is from when I was doing ETL at a previous job and we'd do this basic stuff where the whole operation was done in memory. I always did ETL in SSIS but merge into also works. Usually I was doing just enough crazy parsing that I wanted more control over the process. Plus SSIS could do the parse file -> turn into row level data -> insert row level data into SQL Server. I think aBagorn has a lot of GC but it's probably mostly gen0 and way overshadowed by the network calls. If you can, I'd try a smaller set of data and use some performance analyzers, both .NET and SQL Server. If that's not possible I'd stop the network calls.
|
# ? Jun 9, 2015 22:44 |
|
aBagorn posted:So the process basically flows file -> list of rawDataObjects -> foreach loop to make list of rawDataRow objects -> foreach loop to transform rawDataRow to dbObjects and save them via EF in batches. If the slowness is at the DB level on insertion due to EF then you should just write your own inserts. SqlBulkCopy class is what you want to use.
|
# ? Jun 10, 2015 06:15 |
|
aBagorn posted:I have another strong feeling that reaching out to the DB (which lives on another server) during the foreach loops is also potentially a root cause of the problem, as well as improper management of the EF Context I 100% guarantee you that this is your problem. Find a way to look for duplicates that doesn't involve a repeated database call. I don't know what your duplicate checking logic is like, but there's a good chance it can be done outside of the DB. Or you can try to do it all inside the DB, just for the love of god don't split it across two machines. aBagorn posted:As far as resources go, I cranked up the VM this service was running on to a D13 (8 cores 56GB RAM) and we're 3 days into processing a file with 5 million records, which is unacceptable to the client. This is probably a waste. There's a significant chance that you're latency bound, which means you're not burning CPU. Have you checked the load average on the VM? aBagorn posted:
Also, just for giggles, isn't this just if (count == 30000)?
|
# ? Jun 10, 2015 14:55 |
|
Bognar posted:I 100% guarantee you that this is your problem. Find a way to look for duplicates that doesn't involve a repeated database call. I don't know what your duplicate checking logic is like, but there's a good chance it can be done outside of the DB. Or you can try to do it all inside the DB, just for the love of god don't split it across two machines. loading data into sql server with dupe detection can and should be done in 1-2 statements (create temp or CTE on a merge into and log matches) god knows what's going on inside EF to generate the row tuple.
|
# ? Jun 10, 2015 17:47 |
|
For hosting ASP.Net Web API 2, I know I can use IIS and apparently I can have it self-host with OWIN. Are there any major advantages to IIS, or issues with self-hosting with OWIN? This is on a Windows Embedded system, if that makes any difference.
|
# ? Jun 10, 2015 18:02 |
|
Malcolm XML posted:loading data into sql server with dupe detection can and should be done in 1-2 statements (create temp or CTE on a merge into and log matches) I'm just giving them the benefit of the doubt that maybe there's some special dupe detection logic going on that needed to be done in code and not SQL. That's probably naive, but the code was omitted so I'm not going to make any assumptions.
|
# ? Jun 10, 2015 18:36 |
|
Bognar posted:Also, just for giggles, isn't this just if (count == 30000)? or 60k, or 90k, or 120k, etc
|
# ? Jun 10, 2015 18:39 |
|
Malcolm XML posted:loading data into sql server with dupe detection can and should be done in 1-2 statements (create temp or CTE on a merge into and log matches) This. Or if you need fancier than you want to do in sql, just use ADO.NET, or Dapper if you are too cool for raw sql. Faldoncow posted:For hosting ASP.Net Web API 2, I know I can use IIS and apparently I can have it self-host with OWIN. Is there any major advantages to IIS or issues with self-hosting with OWIN? This is on a Windows Embedded system, if that makes any difference. Self-hosted probably makes sense there. IIS is great if you need public-facing, hardened, internet-ready stuff and/or manageability. But self-hosting is great for internal app APIs and IoT poo poo.
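For reference, the self-host setup is pretty small. This sketch assumes the Microsoft.AspNet.WebApi.OwinSelfHost NuGet package; the port and route names are just placeholders:

```csharp
using System;
using System.Web.Http;
using Microsoft.Owin.Hosting;
using Owin;

public class Startup
{
    // OWIN convention: this method is called to build the pipeline.
    public void Configuration(IAppBuilder appBuilder)
    {
        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
        appBuilder.UseWebApi(config);
    }
}

class Program
{
    static void Main()
    {
        // The API lives as long as this process (or the using block) does.
        using (WebApp.Start<Startup>("http://localhost:9000/"))
        {
            Console.WriteLine("Listening on http://localhost:9000/");
            Console.ReadLine();
        }
    }
}
```

Running it as a Windows service (e.g. via a service wrapper) is the usual next step on an embedded/headless box, since there's no IIS to manage the process lifetime for you.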
|
# ? Jun 10, 2015 18:42 |
|
Kekekela posted:or 60k, or 90k, or 120k, etc Except count is incremented only once each time before the if statement, so it's only ever 30k.
|
# ? Jun 10, 2015 18:49 |
|
Bognar posted:Except count is incremented only once each time before the if statement, so it's only ever 30k. I'm not getting what you mean, this is how I would understand it with non-relevant bits removed: code:
Kekekela fucked around with this message at 20:50 on Jun 10, 2015 |
# ? Jun 10, 2015 18:57 |
|
Wow, I thought count was being set to 0 in that if statement, but I guess I just imagined it.
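For anyone else who misread it the same way: without a reset, == fires once while % fires once per batch. A toy version of the disputed check, with the real loop body omitted:

```csharp
using System;

static class CounterDemo
{
    // Returns (timesEqualsFired, timesModuloFired) over `iterations` increments,
    // with count never reset — mirroring the code under discussion.
    public static Tuple<int, int> Run(int iterations)
    {
        int hitsEquals = 0, hitsModulo = 0, count = 0;
        for (int i = 0; i < iterations; i++)
        {
            count++;
            if (count == 30000) hitsEquals++;       // fires once, at 30k
            if (count % 30000 == 0) hitsModulo++;   // fires at 30k, 60k, 90k, ...
        }
        return Tuple.Create(hitsEquals, hitsModulo);
    }
}
```

Over 90,000 iterations the == version fires once and the % version fires three times, which is exactly the "or 60k, or 90k" point.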
|
# ? Jun 10, 2015 22:14 |
|
So I think I'm going to make a recommendation to ditch EF if at all possible. This service was written before I got here and dealt with files that contained at most a few thousand rows, and the fact that it was not designed to scale is showing. The only problem I foresee is that the original dev did things like this with EF relationships to the dbObject before inserting. C# code:
And inserts with foreign keys and multiple join tables for many to many relationships. I started looking into BULK insert but it seems like it's going to be multiple steps, especially if we are getting away from inserting these fully hydrated EF objects aBagorn fucked around with this message at 22:51 on Jun 10, 2015 |
# ? Jun 10, 2015 22:41 |
|
aBagorn posted:So I think I'm going to make a recommendation to ditch EF if at all possible. Tell them that fellow CoC poster kekekela agrees with your recommendation.
|
# ? Jun 10, 2015 23:27 |
|
Kekekela posted:"ditch EF" I like this post: "ORM is the Vietnam of computer science" http://blog.codinghorror.com/object-relational-mapping-is-the-vietnam-of-computer-science/
|
# ? Jun 11, 2015 00:07 |
|
aBagorn posted:So I think I'm going to make a recommendation to ditch EF if at all possible. as much as i dislike ORMs i dont think EF is your (only) issue. at worst you can have EF act as an object->sql statement creator but that isn't all that slow even if you have each object insert in its own transaction (5mil transactions is trivial) clearly something is deeply hosed in your validation and creation of the object. fix that first.
|
# ? Jun 11, 2015 00:16 |
|
If EF is a bottleneck in a handful of workflows why not use raw SQL there and keep it where it's not an issue? Bulk inserts and deletes are one thing that EF does pretty poorly but there are optimization tricks that might be good enough (turn off auto change-tracking which literally descends the entire object graph every time an object is changed, commit every 100 records or so, etc).
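The tricks mentioned look roughly like this in EF6. This is a sketch against a hypothetical MyContext and Row entity, not measured code, and the batch size of 100 is just the figure from the post:

```csharp
// Assumes an existing EF6 model; MyContext, Rows, and `rows` are hypothetical.
using (var ctx = new MyContext())
{
    // Skip the full object-graph scan that otherwise runs on every Add.
    ctx.Configuration.AutoDetectChangesEnabled = false;
    ctx.Configuration.ValidateOnSaveEnabled = false;

    int pending = 0;
    foreach (var row in rows)
    {
        ctx.Rows.Add(row);
        if (++pending % 100 == 0)
        {
            ctx.SaveChanges();   // commit every 100 records or so
        }
    }
    ctx.SaveChanges();           // flush the remainder
}
```

A related trick many write-ups suggest: dispose and recreate the context every few thousand rows, because the change tracker accumulates every entity it has ever seen and slows down as it grows.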
|
# ? Jun 11, 2015 03:14 |
|
ljw1004 posted:I like this post: "ORM is the Vietnam of computer science" e: that article makes me wonder if I'm going to end up liking NoSql db's more than I'd initially thought, I'm pretty sure I'm about to get my first real-world exposure to them on an upcoming side-project. Kekekela fucked around with this message at 03:50 on Jun 11, 2015 |
# ? Jun 11, 2015 03:47 |
|
Kekekela posted:I didn't mean to come off as anti-EF, just with the issues he's having and the size of the data I think its the wrong tool for the job here. Something that describes everything from key-value stores to document databases to object databases is not really that useful a category. IMO.
|
# ? Jun 11, 2015 03:50 |
|
RICHUNCLEPENNYBAGS posted:Something that describes everything from key-value stores to document databases to object databases is not really that useful a category. IMO. Kekekela fucked around with this message at 03:57 on Jun 11, 2015 |
# ? Jun 11, 2015 03:54 |
|
GoodCleanFun posted:If the slowness is at the DB level on insertion due to EF then you should just write your own inserts. SqlBulkCopy class is what you want to use. Just thought I'd mention that if you end up using SqlBulkCopy (which will save you a lot of time if you are doing a lot of inserts, I made some pretty good speed increases with it recently) and you're working with collections of objects you want to insert, FastMember (by Marc Gravell of Dapper fame) makes it a little easier. Usually using SqlBulkCopy requires that you construct a DataTable first; using FastMember you can basically wrap a collection of objects in an ObjectReader (an IDataReader) and use that instead. It's worked pretty well for me. https://code.google.com/p/fast-member/
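The FastMember route looks roughly like this — ObjectReader.Create wraps the collection as an IDataReader that SqlBulkCopy can stream from, so no intermediate DataTable. The Person type, table name, and connection string are invented for illustration:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using FastMember; // https://code.google.com/p/fast-member/

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class BulkInsert
{
    // Streams the collection straight into SQL Server.
    public static void Write(IEnumerable<Person> people, string connectionString)
    {
        using (var bcp = new SqlBulkCopy(connectionString))
        using (var reader = ObjectReader.Create(people, "Id", "Name"))
        {
            bcp.DestinationTableName = "dbo.People";
            bcp.BatchSize = 5000;
            bcp.WriteToServer(reader);
        }
    }
}
```

Because the source is IEnumerable, this also composes with a lazy parsing pipeline: the CSV rows can flow from the file through the transform into the bulk copy without ever being fully materialized.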
|
# ? Jun 11, 2015 14:07 |
|
Question about installing a vNext (or whatever its name is) website on IIS 7: I have a site that works locally, both in IIS Express and in the full version, that I can't get to work on a server. It keeps giving me an "HTTP Error 403.14 - Forbidden: The Web server is configured to not list the contents of this directory." error. (I obviously don't want it to list the contents of the directory, I want it to give me my website.) Other websites, made in previous versions of .NET (4.5), do work. I copy/pasted another website into the wwwroot of my new vNext website and that worked right away. I installed .NET 4.6 on the server too, to no avail. Any ideas? edit: just discovered that vNext probably requires IIS 7.5. At least that's what my server told me when I enabled directory browsing and tried to browse to the Views folder. uXs fucked around with this message at 16:44 on Jun 11, 2015 |
# ? Jun 11, 2015 16:30 |
|
chippy posted:Just thought I'd mention that if you end up using SqlBulkCopy (which will save you a lot of time if you are doing a lot of inserts, I made some pretty good speed increases with it recently) and you're working with collections of objects you want to insert, Fast Member (by Marc Gravell of Dapper fame) makes it a little easier. Usually using SqlBulkCopy requires that you construct a datatable first, using FastMember you can basically wrap a collection of objects in a StreamReader and use that instead. It's worked pretty well for me. That sounds pretty handy! Cool.
|
# ? Jun 11, 2015 19:30 |
|
edit; never mind. Finally sort-of fixed it and it's good enough for now.
Sab669 fucked around with this message at 21:03 on Jun 11, 2015 |
# ? Jun 11, 2015 20:08 |
|
uXs posted:Question about installing a vNext (or whatever its name is) website on IIS 7: I'm not sure how you're deploying to IIS, but this could be helpful: http://docs.asp.net/en/latest/publishing/iis.html
|
# ? Jun 11, 2015 22:51 |
|
Cross posting from the Goons for Hire thread, if that's OK. Looking for an ASP.NET MVC/WebAPI dev! http://forums.somethingawful.com/showthread.php?threadid=3246449&pagenumber=18#post446490530
|
# ? Jun 12, 2015 22:50 |
|
|
|
Are there any trip reports out there of people's experience working on ASP.NET projects with Visual Studio Code?
|
# ? Jun 13, 2015 20:07 |