|
The preferred game genres should either have more options, an "other" option, or not be mandatory. TBS for life.
|
# ? Dec 22, 2015 21:18 |
|
Jewel posted: Just so you know, if you're developing something like that, it sounds a lot like http://codingame.com. Quite a few of those, actually.
Not one of them is a realtime 3D team-based FPS-style simulation, which is where I am going with this. Most of them are limited to specific programming languages as well.

iSurrender posted: The preferred game genres should either have more options, an "other" option, or not be mandatory. TBS for life.

Agreed, I'll modify that. Thanks.
|
# ? Dec 23, 2015 02:35 |
|
You may want to clarify whether you mean Alice the ML language, or Alice as in "Java with a GUI from CMU", not that I think you'll get many checking the box anyway.
|
# ? Dec 23, 2015 02:41 |
|
Apologies in advance if this is a stupid question; I am coming from an economics modeling background rather than a pure programming one, so some of this might sound really dumb. I'm looking to read about languages that are good at handling operations on large multidimensional arrays, mainly things like element-wise and scalar multiplication on giant rear end 5+ dimension arrays. My background is in an old modeling language that handles arrays somewhat like Fortran, so I am comfortable with arrays looking something like A(1:4,1:10,2:20..1:n), where each comma is a dimension. Ideally I would like to write code that is fairly easy to read for non-programmers (i.e., not sitting in a bunch of nested DO loops like in old Fortran), where I can type one or a couple of lines to operate on whole arrays at once.
Does this make any sense? I have been reading background on newer versions of Fortran, Python, and Julia, and I don't know whether what I'm after is really weird, or whether it's something simple that most languages handle fine and I'm just getting confused by the different ways multidimensional arrays are defined. Thanks.

GOOD TIMES ON METH fucked around with this message at 16:42 on Dec 23, 2015 |
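For what it's worth, numpy handles exactly this style of whole-array arithmetic without explicit loops. A minimal sketch; the array shape and values here are made up for illustration, not the poster's actual model:

```python
# Element-wise and scalar operations on a 5-D array, written as
# one-liners rather than nested DO loops. Shape is a hypothetical
# analogue of Fortran-style A(1:4,1:10,1:2,1:3,1:5); numpy indices
# simply start at 0 instead of a declared lower bound.
import numpy as np

A = np.arange(4 * 10 * 2 * 3 * 5, dtype=float).reshape(4, 10, 2, 3, 5)
B = np.full_like(A, 2.0)

C = A * B               # element-wise product over all five dimensions
D = 0.5 * A             # scalar multiplication of the whole array
col_sum = A.sum(axis=1)  # reduce along the second dimension only
```

Each of these lines applies to every element at once, which tends to read well for non-programmers compared to loop nests.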
# ? Dec 23, 2015 16:36 |
|
http://xray.readthedocs.org/en/stable/why-xray.html
|
# ? Dec 23, 2015 16:39 |
|
This is great, thanks. e: It is loving weird that it uses the average temperature example that I just made up randomly in my head.

GOOD TIMES ON METH fucked around with this message at 16:49 on Dec 23, 2015 |
# ? Dec 23, 2015 16:45 |
|
Currently looking to push my career in data analysis to the next level. I've been focusing on R for the past year, but when I'm looking at job postings they seem to require experience in a variety of different languages (R, Python, SQL, Java, HTML). For those who work in the industry: would my time in 2016 be better served in learning advanced stuff in R, or beginner topics in other languages? Thank you very much for your assistance
|
# ? Dec 26, 2015 08:12 |
|
Job postings are wishlists of desired skills, and usually vastly overrepresent the actual knowledge expected. For data analysis, if you know R at a decent level, of those I'd probably learn SQL, with maybe HTML as a distant 3rd. Databases are not at all uncommon for Big Data™ tasks.

Python and Java may be useful if you want to apply to a firm using those specifically, but I single out SQL and HTML because they're the most "transferable" skills: if you learn SQL and HTML, you can still use that knowledge from R, Python, or Java, whereas learning another general-purpose language won't teach you that many cross-skills. If you understand machine learning and other data analysis topics, and are a reasonably skilled programmer, you probably won't find it that hard to switch languages if a job or project demands it, or if it later becomes clear that R jobs are just too sparse. SQL especially, because if you end up using Apache Spark in your career (you probably will), regardless of whether you use Java, Scala, or Python for it, you can do a lot of the MapReduce work with SQL queries.

I don't know how "advanced" you are in R, though, so I don't know exactly what you mean by "advanced R topics." You could probably kill 3 birds with one stone by writing some sort of data-driven webpage with an R backend for your portfolio. (Though I'm not sure how suitable R is for any sane web service backend; I assume it's possible, if not pretty.)

Linear Zoetrope fucked around with this message at 16:28 on Dec 26, 2015 |
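To make the SQL suggestion concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table and data are invented for illustration. The GROUP BY query is the SQL counterpart of a grouped summary in R (dplyr's group_by/summarise):

```python
# A grouped aggregate in SQL against an in-memory SQLite database.
# Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 100.0), ("east", 50.0), ("west", 70.0)],
)

# Sum amounts per region, ordered by region name.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
# rows -> [("east", 150.0), ("west", 70.0)]
```

The same query text works against Postgres, MySQL, or Spark SQL with only minor dialect changes, which is why it transfers so well between stacks.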
# ? Dec 26, 2015 16:24 |
|
Echoing the above but as someone who also writes job descriptions pay attention to how they're worded. We'll say something like "5 years experience in C++, Java, or C#" and what it means is we want you to be experienced in a compiled OOP language. Which one we don't care because we expect you'll come up to speed on the language specific details and our libraries and idioms are unique to us anyway.
|
# ? Dec 26, 2015 18:51 |
|
If you want a career in data analysis, you need to have some background in statistics and machine learning as well as programming topics. Any idiot can feed numbers into a model and echo the output, but a good modeler will have some sense of which models are appropriate in given circumstances, and whether the assumptions of the model are close enough to being met that the output isn't complete garbage.
|
# ? Dec 26, 2015 22:40 |
|
Is there some CSS wizardry I can add to a page such that units like mm print exactly as specified? I'm printing something with a class that sets explicit width and height in mm.
Not only is this too large, but this also seems to be a different aspect ratio (though that might be me just sucking at precise measurement). Is there some way of telling it to print the exact units specified in CSS?
|
# ? Dec 27, 2015 11:33 |
|
Do you have padding in there too? Because if you do, by default the padding is applied on top of width and height, not included in them. You'll need to change the box model. (If this isn't the issue, disregard me; if the rendering is still imperfect, maybe check print resolution settings and browser zoom before digging further.)
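The box-model change being suggested looks something like this in CSS; the class name and dimensions are hypothetical, not the poster's actual stylesheet:

```css
/* With border-box, the declared width/height include padding and
   border, so the printed footprint stays at the stated mm size. */
.card {
  box-sizing: border-box;
  width: 63mm;
  height: 88mm;
  padding: 3mm;
}

/* Zeroing page margins helps stop the browser rescaling to fit. */
@media print {
  body { margin: 0; }
}
@page {
  margin: 0;
}
```

Browsers may still scale output slightly depending on print settings ("fit to page" in particular), so exact physical dimensions are never fully guaranteed from CSS alone.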
|
# ? Dec 27, 2015 19:56 |
|
Maluco Marinero posted: Do you have padding in there too? Because if you do, by default the padding is applied on top of width and height, not included in them. You'll need to change the box model.

Thanks! Looks like this was messing it up. I set the box-sizing of everything to border-box, and it's substantially better. Kind of a pain because now all my measurements are out of whack, but at least I know how to fix it. It also looks like Chrome and Firefox still printed the card too large, but only by a few mm. IE, on the other hand, printed it almost exactly correct. It's too small by about 1mm on each side, but I expect I won't be getting any more accurate than that.
|
# ? Dec 27, 2015 21:03 |
|
Anyone have an alternative to Offerpop that they like for Facebook pages/tabs? We use the photo contest, video contest, and general email entry pages. Our contract with Offerpop is up in February (thank loving god).
|
# ? Dec 31, 2015 01:46 |
|
What's a good IDE/compiler for C on Windows? I'm learning data structures in Java, but I want to go over them in C so I can learn about pointers, memory management, etc. Thanks.
|
# ? Dec 31, 2015 22:33 |
|
I like JetBrains stuff, so even though I've never used it, you could take a look at CLion.
|
# ? Dec 31, 2015 23:00 |
|
Or just use Visual Studio, it's free for non-commercial use and small teams.
|
# ? Dec 31, 2015 23:45 |
|
Don't you have to install MinGW/Cygwin on Windows to use CLion?
|
# ? Jan 1, 2016 04:32 |
|
I have a dumb question about compilers. Are register files any faster or slower than L1? Wikipedia says that a link register (like in ARM/PPC) is faster than keeping the return address in main memory, but I imagine Intel would be smart enough to store the return address, or the area around ESP generally, in L1, because it makes all the sense in the world to. Which makes me wonder if there's a difference between L1 and a register in practice. #whoa
|
# ? Jan 2, 2016 03:07 |
|
L1 access time is very close to register access time, AFAIK. There's probably a small difference (because it's bigger than the register file, and a bigger cache is inherently slower), but I'd be surprised if it's more than a cycle or maybe two. I imagine you'd have a hell of a time trying to benchmark it on a modern x86 processor though.
|
# ? Jan 2, 2016 03:42 |
|
Suspicious Dish posted: I have a dumb question about compilers. Are register files any faster / slower than L1? Wikipedia is saying that a Link Register (like in ARM / PPC) is faster than having it in main memory, but I imagine that Intel would be smart enough to store the return address or various areas around ESP in L1, because it makes all the sense in the world to.

Registers are, to a first approximation, about 10 times faster than an L1 cache hit. That said, modern OOO processors are very good at hiding the latency of cache hits, so as long as the load can be scheduled reasonably far ahead of any data dependency, it's probably going to be indistinguishable from having everything in registers. That's especially true on the instruction side: essentially every processor is going to have a call/return predictor at the front end that keeps 8 or so levels of return addresses in a local register file and predicts those about as well as if they were direct jumps.
|
# ? Jan 2, 2016 03:43 |
|
Yeah. I know there was a talk a few pages ago about emulating an architecture with more registers than x86 and I recommended using x64 to get the extra registers, but I think spilling them into ESP+x and letting L1 work its magic would work just as well.
|
# ? Jan 2, 2016 04:31 |
|
Modern x86 renames the top few entries of the stack like any other register.
|
# ? Jan 2, 2016 04:32 |
|
An old friend of mine said, about 25 years ago, "they should have used Rambus memory; it's more expensive but essentially faster." Guess what: DDR4 is Rambus memory. They finally listened to him; I always knew he was bright. He has worked for MySQL, Sun, Google, Blizzard, and LinkedIn, whereas I am writing code for the European Space Agency after bouncing around in unfulfilling roles with startups.
|
# ? Jan 2, 2016 08:22 |
|
Uh, no? I can't find any evidence that DDR4 is like RDRAM in any way. RDRAM had horrible latency compared to DDR, as well.
|
# ? Jan 2, 2016 08:43 |
|
Suspicious Dish posted: Uh, no? I can't find any evidence that DDR4 is like RDRAM in any way. RDRAM had horrible latency compared to DDR, as well.

The point was that when people were using SDRAM, RDRAM was much better - and it was used in the Nintendo 64, btw.
|
# ? Jan 2, 2016 09:13 |
|
Suspicious Dish posted: Yeah. I know there was a talk a few pages ago about emulating an architecture with more registers than x86 and I recommended using x64 to get the extra registers, but I think spilling them into ESP+x and letting L1 work its magic would work just as well.

Coming back to this for a second: the problem with that is you now need to do some fairly complex (i.e. slow) emulation on anything that touches the stack pointer in order to maintain the appropriate invariants (you can't really assume much about how a program written for the emulated processor actually uses its stack). IIRC the thread suggested using some static memory location to hold the additional registers, which would still mean that those registers spend most of their time in L1, at least as long as they're being actively used. I believe there are performance counters for L1 cache misses, so you might actually be able to profile how effective each strategy is at keeping the relevant data in L1 cache.
|
# ? Jan 2, 2016 11:23 |
|
TheresaJayne posted:The point was that when people were using SDRAM RDRam was much better - and was used in the Nintendo 64 Btw. Wow, the N64. I guess that settles it, Rambus was right all along.
|
# ? Jan 2, 2016 19:45 |
|
Skandranon posted:Wow, the N64. I guess that settles it, Rambus was right all along. what would i know anyway, I am just a woman...
|
# ? Jan 3, 2016 15:52 |
|
TheresaJayne posted:what would i know anyway, I am just a woman... Fortunately we're gender-blind in here! You got caught saying something dumb, it happens to everyone, just move on.
|
# ? Jan 3, 2016 16:33 |
|
Suspicious Dish posted: I have a dumb question about compilers. Are register files any faster / slower than L1? Wikipedia is saying that a Link Register (like in ARM / PPC) is faster than having it in main memory, but I imagine that Intel would be smart enough to store the return address or various areas around ESP in L1, because it makes all the sense in the world to.

The actual storage location of the return address doesn't affect throughput at all, because the return predictor has its own extremely reliable notion of where that branch is going, so nothing about it holds up speculation. As long as validating the prediction is fast enough that it doesn't hold up the entire instruction window, it's fine, and the only overhead of not having a link register is that there is a store to memory that has to be performed eventually (which of course means additional µ-ops and barriers to memory reordering).

That's specific to return addresses, though. The general answer, as SD said, is that the register file is a lot faster than L1. Furthermore, an operand buffer is even faster than the permanent register file, and an out-of-order CPU will rename registers so that producing a value in a register and then using it will make the second instruction's reservation station directly use the first instruction's result, dynamically reconstructing the exact computation dependency graph; I don't believe there are any CPUs that do that kind of renaming/forwarding through memory. However, in many circumstances you're going to be doing enough other things (e.g. loading other values) that the CPU can hide most of the cost of those memory accesses.

Like I said before, though, if you really want maximal performance for this emulation, you will want to do real register allocation.
|
# ? Jan 3, 2016 19:48 |
|
pseudorandom name posted:Modern x86 renames the top few entries of the stack like any other register. Do you have details for this?
|
# ? Jan 3, 2016 20:01 |
|
Ah, yeah, that makes more sense to me. Would it be valid to not actually ever write the address to physical memory? Because I doubt anything on the rest of your bus is actually going to look at the return address or CPU stack. Also, a lot of this stuff isn't very well documented (it's the part that makes the computer go fast, not the part that's behavior you should write to). Do you know of any documentation that talks about what modern predictors would do? Or does it mostly tend to be through-the-grape-vine, told-at-holiday-parties kind of material?
|
# ? Jan 3, 2016 20:43 |
|
I don't really see how that could be the case. Register renaming works because there's a lack of side effects. Almost every memory transaction has to do a lot of bookkeeping, even just signaling out to the other L1s that a write happened or checking the accessed bit in the PTE. Maybe this is one of those times where every stack ever points to memory with NX, cacheable, exclusive lines, and the myriad other options for memory visibility aren't relevant here. It certainly wouldn't be exclusive to the stack; any L1 dcache entry would look the same. Renaming is huge, though. Given the runway there, I'd be surprised if physical registers were actually a bottleneck. Logical registers, execution ports, sure. But I'm not even sure the relevant part of the OoO machinery is capable of applying back pressure in a physical-registers-full situation.
|
# ? Jan 3, 2016 20:47 |
|
Suspicious Dish posted: Also, a lot of this stuff isn't very well documented (it's the part that makes the computer go fast, not the part that's behavior you should write to). Do you know of any documentation that talks about what modern predictors would do? Or does it mostly tend to be through-the-grape-vine, told-at-holiday-parties kind of material?

This might be through the grape vine, but I'm pretty sure page 190 of this pdf is the kind of thing you're looking for. It's talking about a predictor structure for call stacks and the many many ways it can get confused about the results of other predictions. But I sorta knew the magic words.
|
# ? Jan 3, 2016 20:53 |
|
rjmccall posted: Do you have details for this?

I know intimate details about one implementation wherein the decoder treats memory addresses of the form esp+k, for small k, as a register name, which is renamed as you'd expect. There's a spare store-combining buffer kept around that is used to commit writes to those registers to the L1 at retirement, and some logic to detect unexpected clobbers of that range (which typically happen either by writing to one of those addresses via another form of addressing, or because a different thread stole the cache line). It's... messy, and I can only share details because in practice it turned out to be a bad idea.

Suspicious Dish posted: Ah, yeah, that makes more sense to me. Would it be valid to not actually ever write the address to physical memory? Because I doubt anything on the rest of your bus is actually going to look at the return address or CPU stack.

This depends on what you mean by "physical memory". You must have started a commit to the L1 by retirement, because otherwise you don't have any hope of maintaining coherence in the unlikely case that another thread does access those addresses. That said, highly-volatile thread-local stuff like stacks will tend to stay resident in an L1 or L2, and is reasonably likely to avoid being propagated out to DRAM.

Suspicious Dish posted: Also, a lot of this stuff isn't very well documented (it's the part that makes the computer go fast, not the part that's behavior you should write to). Do you know of any documentation that talks about what modern predictors would do? Or does it mostly tend to be through-the-grape-vine, told-at-holiday-parties kind of material?

You may enjoy reading Agner Fog's microarchitecture guide, which he reverse engineered using direct timing measurements and his own understanding of processor design. Obviously, I couldn't possibly comment on how closely he managed to infer the design of any particular microarchitecture.
|
# ? Jan 3, 2016 21:02 |
|
JawnV6 posted: This might be through the grape vine, but I'm pretty sure page 190 of this pdf is the kind of thing you're looking for. It's talking about a predictor structure for call stacks and the many many ways it can get confused about the results of other predictions. But I sorta knew the magic words.

why do the page numbers start at 157
|
# ? Jan 3, 2016 21:26 |
|
Suspicious Dish posted:why do the page numbers start at 157 The earlier pages are in the earlier issues of the same volume of the journal. This seems to be typical in academic journals.
|
# ? Jan 3, 2016 22:16 |
|
ShoulderDaemon posted: I know intimate details about one implementation wherein the decoder treats memory addresses of the form esp+k for small k as a register name, which is renamed as you'd expect. There's a spare store-combining buffer kept around that is used to commit writes to those registers to the L1 at retirement, and some logic to detect unexpected clobbers to that range (which typically happen by either writing to one of those addresses via another form of addressing, or because a different thread stole the cache line). It's... messy, and I can only share details because in practice it turned out to be a bad idea.

Okay. That would've been my guess in the abstract: good in theory, but in practice adds so much complexity that it doesn't pay off. But I've been surprised by hardware before. Intel does use a special unit for pushes and pops, but that's to reduce micro-ops during extended push/pop sequences (common in function prologues and epilogues), and it's just to handle the changes to SP, not the memory effects.
|
# ? Jan 3, 2016 23:42 |
|
I didn't really know where to put this, but I think it's alright. It involves web scraping: I need to get a bunch of data (2 lines' worth from 400 pages) from Basketball Reference. I think it's kind of a violation of the ToS, but it's not for anything I'll make money off of or anything bad, and it shouldn't be that heavy on their servers. Will they even notice? What are the ramifications of this anyway?
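For what it's worth, the polite version of that kind of scrape is just a loop with a deliberate delay between requests. A minimal Python sketch; the URL pattern and page slugs are assumptions for illustration, not Basketball Reference's real structure:

```python
# Fetch a few hundred pages slowly, with a multi-second pause between
# requests, so the whole run looks nothing like a hammering bot.
import time
import urllib.request

BASE = "https://www.basketball-reference.com"

def page_urls(slugs):
    """Build page URLs from hypothetical page slugs."""
    return [f"{BASE}/players/{s}.html" for s in slugs]

def fetch_all(urls, delay=3.0):
    """Fetch each URL, sleeping `delay` seconds between requests.
    At 3 s/page, 400 pages take ~20 minutes total."""
    pages = []
    for url in urls:
        with urllib.request.urlopen(url) as resp:
            pages.append(resp.read())
        time.sleep(delay)  # be polite: well under one request per second
    return pages

if __name__ == "__main__":
    urls = page_urls(["a/someplayer01"])  # hypothetical slug
    # pages = fetch_all(urls)  # uncomment to actually fetch
```

Checking the site's robots.txt and ToS first, and caching each page locally so you never re-fetch, are the other two courtesies that make this kind of thing go unnoticed.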
|
# ? Jan 4, 2016 00:08 |