|
snype
|
# ? Sep 27, 2014 17:45 |
|
|
|
am i a terrible programmer for not doing tdd
|
# ? Sep 27, 2014 17:51 |
|
suffix posted:is this or your own server? its my own computer why would i have replicants im going to do data on it so the fastest ill ever need to read anything is well id like it to not be that slow
|
# ? Sep 27, 2014 17:53 |
|
nerdz posted:am i a terrible programmer for not doing tdd no
|
# ? Sep 27, 2014 17:53 |
|
Bloody posted:its my own computer
|
# ? Sep 27, 2014 17:55 |
|
Bloody posted:its my own computer wtf are you doing with that much data in your spare time
|
# ? Sep 27, 2014 17:55 |
|
coffeetable posted:what data is this exactly 250 gigabytes of gzipped csv
|
# ? Sep 27, 2014 17:55 |
|
Valeyard posted:wtf are you doing with that much data in your spare time
|
# ? Sep 27, 2014 17:56 |
|
Bloody posted:250 gigabytes of gzipped csv
|
# ? Sep 27, 2014 17:56 |
|
|
# ? Sep 27, 2014 17:57 |
|
because you can probably work with a random 1% of the rows and make your life a whole lot easier. assuming you define "random" well enough
coffeetable fucked around with this message at 18:00 on Sep 27, 2014 |
# ? Sep 27, 2014 17:58 |
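coffeetable's "random 1% of the rows" suggestion is easy to sketch, assuming the data is plain gzipped CSV with a header row; `sample_rows` and the 1% default are made up for illustration:

```python
import csv
import gzip
import random

def sample_rows(path, fraction=0.01, seed=0):
    """Stream a gzipped CSV once and keep each data row with probability `fraction`."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    with gzip.open(path, mode="rt", newline="") as f:
        reader = csv.reader(f)
        header = next(reader)  # keep the header out of the sample
        kept = [row for row in reader if rng.random() < fraction]
    return header, kept
```

a per-row coin flip like this streams the file in constant memory; if you need an exact sample size rather than an expected one, you'd use reservoir sampling instead.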
|
also lol at being secretive about what you're working on. someone might steal your idea!!
|
# ? Sep 27, 2014 18:01 |
|
nerdz posted:am i a terrible programmer for not doing tdd The advantages of TDD are: - You think about what your code should do and expose as an interface before jumping in (write it with the user in mind, not the programmer) - Your tests are more about what the code should do than crystallizing what crappy code you wrote is doing right now - It's a shitload more boring to write tests after everything is done than making them part of the design process
|
# ? Sep 27, 2014 18:01 |
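the test-first loop MononcQc is describing, as a minimal sketch with Python's unittest (the `slugify` function and its cases are hypothetical):

```python
import unittest

# step 1: write the test first -- it pins down the interface you want
# (what the code should do), before any implementation exists
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_with_dashes(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_leading_and_trailing_whitespace(self):
        self.assertEqual(slugify("  spaced out  "), "spaced-out")

# step 2: write the simplest implementation that makes the tests pass,
# then refactor with the tests as a safety net
def slugify(text: str) -> str:
    return "-".join(text.split()).lower()
```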
|
Bloody posted:250 gigabytes of gzipped csv is this the funny computer master corpus
|
# ? Sep 27, 2014 18:06 |
|
MononcQc posted:The advantages of TDD are: - You write your code in a more testable manner rather than building up a structure that's hard to interpose things into for testing or modification
|
# ? Sep 27, 2014 18:11 |
|
eschaton posted:is this the funny computer master corpus yep! coffeetable posted:because you can probably work with a random 1% of the rows and make your life a whole lot easier. assuming you define "random" well enough nope! coffeetable posted:also lol at being secretive about what you're working on. someone might steal your idea!!
|
# ? Sep 27, 2014 18:11 |
|
Shaggar posted:it doesn't pause the process. VMware runs the same instance on both hosts simultaneously and switches the I/O. its transparent to clients. the only restrictions are that you need the hosts to be on the same disk+network. vmware uses it for both HA (same instance runs on both hosts all the time) and for migrating guests. there's a much simpler reason. reserving the ability to migrate VMs between hosts means spending more money on network infrastructure and storage than you otherwise would. being migrated when a host fails is not part of amazon's SLA. why would they spend a single nickel on something they never promised to customers?
|
# ? Sep 27, 2014 18:34 |
|
btw it is almost certain that AWS VMs are hosted on local storage
|
# ? Sep 27, 2014 18:34 |
|
Bloody posted:where's the data import tool live? is it from visual studio or its installed as part of sql server management studio so it would be in a related sql server start menu folder. Bloody posted:also is 10 billion rows too many to be able to use half decently at all once its loaded it depends on ur server, the data quality, and ur schema. also if this is a local instance of sql express theres a 10gb db size limit.
|
# ? Sep 27, 2014 21:36 |
|
Notorious b.s.d. posted:there's a much simpler reason. yeah but I wouldn't buy cloud services from someone who might reboot their cloud at random.
|
# ? Sep 27, 2014 21:37 |
|
Shaggar posted:yeah but I wouldn't buy cloud services from someone who might reboot their cloud at random. so, you don't buy cloud services then?
|
# ? Sep 27, 2014 21:39 |
|
lol azure's SLA is utter poo poo quote:For Cloud Services, we guarantee that when you deploy two or more role instances in different fault and upgrade domains, your Internet facing roles will have external connectivity at least 99.95% of the time. it's no worse than amazon or google or joyent. because all of their SLAs are poo poo. "if you stand up a service as a geographically distributed entity, some of your poo poo will be up most of the time"
|
# ? Sep 27, 2014 21:41 |
|
99.95% sounds not so bad but that's half a day of downtime a year. half a day of downtime in your highly available, multi-datacenter infrastructure lol
|
# ? Sep 27, 2014 21:46 |
|
Notorious b.s.d. posted:99.95% sounds not so bad but that's half a day of downtime a year you could also look at it as 12 hours spread over a year, which is 30 seconds of downtime a day lol
|
# ? Sep 27, 2014 21:51 |
|
Why yes I will totally pay $texas for three nines of service
|
# ? Sep 27, 2014 22:17 |
|
Notorious b.s.d. posted:99.95% sounds not so bad but that's half a day of downtime a year its actually more like 4 1/2 hours. but ok.
|
# ? Sep 27, 2014 22:21 |
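FamDav's number is the right one; the arithmetic for 99.95% over a calendar year:

```python
# allowed downtime under a 99.95% availability guarantee
HOURS_PER_YEAR = 365 * 24                       # 8760
allowed_hours_per_year = (1 - 0.9995) * HOURS_PER_YEAR
allowed_seconds_per_day = (1 - 0.9995) * 24 * 3600

print(f"{allowed_hours_per_year:.2f} hours/year")   # 4.38 hours/year
print(f"{allowed_seconds_per_day:.1f} seconds/day") # 43.2 seconds/day
```

so "half a day" and "30 seconds a day" are both off: it's about four and a half hours a year, or 43 seconds a day.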
|
Notorious b.s.d. posted:99.95% sounds not so bad but that's half a day of downtime a year what do you get if you have no redundancy, like 97%?
|
# ? Sep 27, 2014 23:56 |
|
qntm posted:what do you get if you have no redundancy, like 97%? no SLA at all it can go down any time, for any period. edit: this is how amazon defends its reboots to patch hosts. they have no obligation to you. when you spawn a VM there's no guarantees on performance, reliability, etc etc Notorious b.s.d. fucked around with this message at 00:03 on Sep 28, 2014 |
# ? Sep 27, 2014 23:58 |
|
FamDav posted:its actually more like 4 1/2 hours. sorry, half a business day. but remember, this is the SLA for everything being down. how much poo poo has to be broken that multiple datacenters are simultaneously unavailable? under this SLA microsoft will refund you $0 and pay $0 in penalties in the event that they have a multi-datacenter failure for four and a half hours that takes your business down under normal circumstances, like an entire datacenter going dark with no explanation, you also get $0. or amazon deciding to reboot your instances w/out permission to fix a bug. cloud vendors don't have SLAs, they have agreements on the level of non-service
|
# ? Sep 28, 2014 00:00 |
|
Notorious b.s.d. posted:sorry, half a business day. lol if your business isn't 24/7. E: sorry forgot this is the safe zone/hideout
|
# ? Sep 28, 2014 00:23 |
|
MononcQc posted:lol if your business isn't 24/7. was going to say this. was also going to ask what you guys do wrt sla but then i found out you dont have one quote:E: sorry forgot this is the safe zone/hideout doesnt count gas stymie should know better
|
# ? Sep 28, 2014 01:02 |
|
Bloody posted:its my own computer oh ok, then yeah lol go nuts with any sql server but mysql for certain queries it can be faster to just read the data from csv or whatever, but then you have to deal with merging and handling larger-than-memory stuff, so it's worth it to get the sql server to do it for you. make sure you have indices if you want to run ad-hoc queries. if you have several beefy machines and don't care if you lose data it could be worth it to look into elasticsearch w/ kibana. on just one server it will be a lot slower than a real sql server, but it's so easy to make a cluster you can literally do it by accident
|
# ? Sep 28, 2014 01:05 |
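on the "larger-than-memory" point: simple aggregates don't need a database at all, since one streaming pass over the gzipped CSV runs in constant memory with the stdlib (`count_by_column` and the column name are made up for illustration):

```python
import csv
import gzip
from collections import Counter

def count_by_column(path, column):
    """One pass over a gzipped CSV in constant memory: tally values in `column`."""
    counts = Counter()
    with gzip.open(path, mode="rt", newline="") as f:
        for row in csv.DictReader(f):  # decompresses and parses row by row
            counts[row[column]] += 1
    return counts
```

joins and sorts are where this stops being fun and a real sql server starts earning its keep.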
|
FamDav posted:was going to say this. was also going to ask what you guys do wrt sla but then i found out you dont have one Yeah. There's only an SLA with some customers that asked for one in a contract, but we track all uptime and incidents in https://status.heroku.com/uptime and https://status.heroku.com/
|
# ? Sep 28, 2014 01:45 |
|
heroku's uptime count excludes applications that aren't in fully redundant configurations but given that heroku is backed by aws we can't very well expect them to be more reliable than the underpinning
|
# ? Sep 28, 2014 01:50 |
|
Bloody posted:nope! actually that's a little bit worrying, if your data is so heterogeneous that you can't do random sampling, are you going to be able to draw any meaningful conclusions at all
|
# ? Sep 28, 2014 02:15 |
|
Notorious b.s.d. posted:heroku's uptime count excludes applications that aren't in fully redundant configurations It's defined there: https://devcenter.heroku.com/articles/heroku-status#uptime-calculation quote:Heroku is a distributed platform spread across many different datacenters and components. During any given incident, it is rare for all applications running on the platform to be affected. For this reason, we report our uptime as an average derived from the number of affected applications. the "single idled web dyno" exclusion means free applications that were automatically swapped out after 1h of inactivity and were not running at the time of the incident. No running application is excluded, and if your app has a single instance, runs for free, and is hit by an error, this is accounted for in the uptime metric. MononcQc fucked around with this message at 03:31 on Sep 28, 2014 |
# ? Sep 28, 2014 03:29 |
|
tell me about the best way to do c#/.net unit testing visual studio has a bunch of stuff built in that is p cool and easy to use but microsoft so there's probably some framework that fellates you while you work or something Luigi Thirty fucked around with this message at 05:19 on Sep 28, 2014 |
# ? Sep 28, 2014 04:01 |
|
fritz posted:actually that's a little bit worrying, if your data is so heterogeneous that you can't do random sampling, are you going to be able to draw any meaningful conclusions at all Dunno yet! I don't want to make any assumptions yet. It's definitely a very hetero set in that it's a whole bunch of low dimensional but similar and potentially related independent time series glommed into a single format/collection (for better or worse) I don't really expect to ever do much of anything on the entire set at once - I will be sampling from it for practically everything, just not randomly.
|
# ? Sep 28, 2014 05:52 |
|
Luigi Thirty posted:tell me about the best way to do c#/.net unit testing get R# then use either xunit or nunit. xunit is meant to be better but i'm not sure why exactly use a mocking library, but don't go nuts with your mocks. if you're writing too much code to setup a mock then you can probably just write a fake class instead don't mock out your database. some people say that this means it's not proper Unit Testing anymore but who gives a gently caress
|
# ? Sep 28, 2014 06:34 |
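the "just write a fake class instead" advice, sketched in Python rather than C# for brevity (all the names here are hypothetical):

```python
class UserStore:
    """The real class would talk to a database."""
    def get_email(self, user_id):
        raise NotImplementedError

class FakeUserStore(UserStore):
    """A hand-written fake: a plain dict stands in for the database."""
    def __init__(self, users):
        self._users = users

    def get_email(self, user_id):
        return self._users[user_id]

def send_welcome(store, user_id):
    # the code under test only depends on the interface, not the real database
    return f"welcome mail to {store.get_email(user_id)}"

# in a test, no mocking library and no setup boilerplate:
fake = FakeUserStore({1: "a@example.com"})
assert send_welcome(fake, 1) == "welcome mail to a@example.com"
```

the fake stays reusable across tests, which is the win over re-describing a mock's expectations in every test method.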
|
|
|
c#f#r# what is next im going crazy
|
# ? Sep 28, 2014 07:36 |