iSurrender
Aug 25, 2005
Now with 22% more apathy!
The preferred game genres should either have more options, an "Other" option, or not be mandatory. TBS for life.

Azazel
Jun 6, 2001
I bitch slap for a living - you want some?

Jewel posted:

Just so you know, if you're developing something like that it sounds a lot like http://codingame.com

Quite a few actually.


Not one of them is a real-time 3D team-based FPS-style simulation, which is where I'm going with this. Most of them are limited to specific programming languages as well.

iSurrender posted:

The preferred game genres should either have more options, an "Other" option, or not be mandatory. TBS for life.

Agreed, I'll modify that. Thanks.

Linear Zoetrope
Nov 28, 2011

A hero must cook
You may want to clarify whether you mean Alice the ML language, or Alice as in "Java with a GUI from CMU", not that I think you'll get many checking the box anyway.

GOOD TIMES ON METH
Mar 17, 2006

Fun Shoe
Apologies in advance if this is a stupid question; I'm coming from an economics modeling background rather than a pure programming one, so some of this might sound really dumb.

I'm looking to read about languages that are good at handling operations on large multidimensional arrays, mainly stuff like element-wise and scalar multiplication on giant rear end 5+ dimensional arrays. My background is in an old modeling language that handles arrays somewhat like Fortran, so I am comfortable with arrays looking something like A(1:4,1:10,2:20..1:n), where each comma separates a dimension. Ideally I would like to write stuff that is fairly easy to read for non-programmers (i.e. not sitting in a bunch of nested DO loops like in old Fortran), where I can type one or a couple of lines to do stuff like:

code:
C(1:4,1:10,2:20..1:n)=A(1:4,1:10,2:20..1:n)*B(1:4,1:10,2:20..1:n)
Also, it would be great to be able to use dictionary-style keys to name the dimensions and select the parts I want to use, again to make sure other people can comprehend it. Like if I had an array Temperature(1:31,1:12), I could call the first dimension Days and the second Months and do things like MarchTemp(Days)=Temperature(Days,March), where Days means all values (1:31) and March looks up which slice to grab (3). Or I could compute an average temperature for each month by doing something like:

code:
AverageTemp(Month)=All(Days)(Temperature(Days,Month))/NumberOfDaysInMonth(Month)
Where the All above just means "sum all the temperatures over the Days dimension for each month," so we can divide it by the number of days that month has.

Does this make any sense? I have been reading background on newer versions of Fortran, Python, and Julia, and I can't tell whether what I'm trying to do is really unusual or whether it's something simple that most languages handle fine and I'm just getting confused by the different ways multidimensional arrays are defined.

Thanks

GOOD TIMES ON METH fucked around with this message at 16:42 on Dec 23, 2015

SurgicalOntologist
Jun 17, 2004

http://xray.readthedocs.org/en/stable/why-xray.html
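
For the temperature example above, a rough sketch of what that looks like with this library (it was still called xray in 2015 and was later renamed xarray; the numbers and coordinate values below are made up):
code:
import numpy as np
import xarray as xr  # the package linked above, renamed from "xray"

# 31 days x 12 months of made-up temperatures (real data would NaN-pad the short months)
temps = xr.DataArray(
    np.random.uniform(-5, 35, size=(31, 12)),
    dims=("day", "month"),
    coords={"day": np.arange(1, 32), "month": np.arange(1, 13)},
    name="temperature",
)

march = temps.sel(month=3)           # MarchTemp(Days): every day of March
monthly_avg = temps.mean(dim="day")  # AverageTemp(Month): mean over the Days dimension

# element-wise operations broadcast by dimension name rather than by position
anomaly = temps - monthly_avg
The dimension names play the role of the Days/Months keys in the pseudocode, so selections and reductions read close to how you'd describe them out loud.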

GOOD TIMES ON METH
Mar 17, 2006

Fun Shoe

This is great, thanks

e: It is loving weird that it uses the average temperature example that I just made up randomly in my head

GOOD TIMES ON METH fucked around with this message at 16:49 on Dec 23, 2015

Mourning Due
Oct 11, 2004

*~ missin u ~*
:canada:
Currently looking to push my career in data analysis to the next level. I've been focusing on R for the past year, but when I'm looking at job postings they seem to require experience in a variety of different languages (R, Python, SQL, Java, HTML).

For those who work in the industry: would my time in 2016 be better served in learning advanced stuff in R, or beginner topics in other languages?

Thank you very much for your assistance

Linear Zoetrope
Nov 28, 2011

A hero must cook
Job postings are wishlists of desired skills, and usually vastly overrepresent the actual knowledge expected. For data analysis, if you know R at a decent level, of those I'd probably learn SQL, with maybe HTML as a distant 3rd. Databases are not at all uncommon for Big Data™ tasks.

Python and Java may be useful if you want to apply to a firm using those specifically, but I'd still put SQL and HTML first because they're the most "transferable" skills: if you learn SQL and HTML, you can use that knowledge from R, Python, or Java, whereas learning another general-purpose language won't teach you that many cross-skills. If you understand machine learning and other data analysis topics, and are a reasonably skilled programmer, you probably won't find it that hard to switch languages if a job or project demands it, or if it later becomes clear that R jobs are just too sparse. SQL especially, because if you end up using Apache Spark in your career (you probably will), you can do a lot of the MapReduce-style work with SQL queries regardless of whether you drive it from Java, Scala, or Python.
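
To make the Spark bit concrete, here's a minimal sketch of driving a query from PySpark (assuming the newer SparkSession API; the file path and column names are placeholders, not anything real):
code:
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-example").getOrCreate()

# hypothetical events file; the path and columns are made up
events = spark.read.csv("events.csv", header=True, inferSchema=True)
events.createOrReplaceTempView("events")

# the SQL itself carries over unchanged whether the driver is Python, Scala, or Java
top_users = spark.sql("""
    SELECT user_id, COUNT(*) AS n_events
    FROM events
    GROUP BY user_id
    ORDER BY n_events DESC
""")
top_users.show(10)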

I don't know how "advanced" you are in R, though, so I don't know exactly what you mean by "advanced R topics." You could probably kill three birds with one stone by writing some sort of data-driven webpage with an R backend for your portfolio. (Though I'm not sure how suitable R is for any sane web service backend; I assume it's possible, if not pretty.)

Linear Zoetrope fucked around with this message at 16:28 on Dec 26, 2015

Hughlander
May 11, 2005

Echoing the above, but as someone who also writes job descriptions: pay attention to how they're worded. We'll say something like "5 years of experience in C++, Java, or C#", and what it means is that we want you to be experienced in a compiled OOP language. Which one, we don't care, because we expect you'll come up to speed on the language-specific details, and our libraries and idioms are unique to us anyway.

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


If you want a career in data analysis, you need to have some background in statistics and machine learning as well as programming topics. Any idiot can feed numbers into a model and echo the output, but a good modeler will have some sense of which models are appropriate in given circumstances, and whether the assumptions of the model are close enough to being met that the output isn't complete garbage.

Duct Tape
Sep 30, 2004

Huh?
Is there some CSS wizardry I can add to a page such that units like mm print exactly as specified? I'm printing something with this class:
code:
.card {
    width: 63mm;
    height: 88mm;
}
and I'm hoping that on the sheet of paper, it will be exactly 63mm wide and 88mm tall. Instead, it appears to be printing with a width of 83mm and a height of 109mm.

Not only is this too large, but this also seems to be a different aspect ratio (though that might be me just sucking at precise measurement). Is there some way of telling it to print the exact units specified in CSS?

Maluco Marinero
Jan 18, 2001

Damn that's a
fine elephant.
Do you have padding in there too? If you do, by default the padding is applied on top of the width and height, not included in them, so you'll need to change the box model. (If that isn't the issue, disregard me; if the rendering is just slightly off, maybe check print resolution settings and browser zoom before digging further.)
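
If padding is the culprit, a minimal sketch of the fix for the .card class above:
code:
.card {
    box-sizing: border-box; /* padding and border now count inside the 63mm x 88mm */
    width: 63mm;
    height: 88mm;
}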

Duct Tape
Sep 30, 2004

Huh?

Maluco Marinero posted:

Do you have padding in there too? If you do, by default the padding is applied on top of the width and height, not included in them, so you'll need to change the box model. (If that isn't the issue, disregard me; if the rendering is just slightly off, maybe check print resolution settings and browser zoom before digging further.)

Thanks! Looks like this was what was messing it up. I set the box-sizing of everything to border-box, and it's substantially better. Kind of a pain because now all my measurements are out of whack, but at least I know how to fix it.

It also looks like Chrome and Firefox still printed the card too large, but only by a few mm. IE, on the other hand, printed it almost exactly right. It's too small by about 1mm on each side, but I don't expect to get much more accurate than that.

BlackMK4
Aug 23, 2006

wat.
Megamarm
Anyone have an alternative to Offerpop that they like for Facebook pages/tabs? We use the photo contest, video contest, and general email entry pages. Our contract with Offerpop is up in February (thank loving god).

Guacamayo
Feb 2, 2012
What's a good IDE/compiler for C on Windows? I'm learning data structures in Java, but I want to go over them in C so I can learn about pointers, memory management, etc. Thanks.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
I like JetBrains stuff, so even though I've never used it, you could take a look at CLion.

omeg
Sep 3, 2012

Or just use Visual Studio, it's free for non-commercial use and small teams.

fritz
Jul 26, 2003

Don't you have to install MinGW/Cygwin on Windows to use CLion?

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
I have a dumb question about compilers. Are register files any faster / slower than L1? Wikipedia is saying that a Link Register (like in ARM / PPC) is faster than having it in main memory, but I imagine that Intel would be smart enough to store the return address or various areas around ESP in L1, because it makes all the sense in the world to.

Which makes me wonder if there's a difference between L1 / a register in practice. #whoa

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
L1 access time is very close to register access time, AFAIK. There's probably a small difference (because it's bigger than the register file, and a bigger cache is inherently slower), but I'd be surprised if it's more than a cycle or maybe two.

I imagine you'd have a hell of a time trying to benchmark it on a modern x86 processor though.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Suspicious Dish posted:

I have a dumb question about compilers. Are register files any faster / slower than L1? Wikipedia is saying that a Link Register (like in ARM / PPC) is faster than having it in main memory, but I imagine that Intel would be smart enough to store the return address or various areas around ESP in L1, because it makes all the sense in the world to.

Which makes me wonder if there's a difference between L1 / a register in practice. #whoa

Registers are, to a first approximation, about 10 times faster than an L1 cache hit. That said, modern OoO processors are very good at hiding the latency of cache hits, so as long as the load can be scheduled reasonably far ahead of any data dependency, it's probably going to be indistinguishable from having everything in registers. Especially on the instruction side: essentially every processor is going to have a call/return predictor at the front end that keeps 8 or so levels of return addresses in a local register file and predicts those about as well as if they were direct jumps.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Yeah. I know there was talk a few pages ago about emulating an architecture with more registers than x86, and I recommended using x64 to get the extra registers, but I think spilling them into ESP+x and letting L1 work its magic would work just as well.

pseudorandom name
May 6, 2007

Modern x86 renames the top few entries of the stack like any other register.

TheresaJayne
Jul 1, 2011
An old friend of mine, about 25 years ago, said "they should have used Rambus memory, it's more expensive but essentially faster."

Guess what: DDR4 is Rambus memory. They finally listened to him.

I always knew he was bright. He has worked for MySQL, Sun, Google, Blizzard, and LinkedIn.

Whereas I am writing code for the European Space Agency after bouncing around in unfulfilling roles with startups.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Uh, no? I can't find any evidence that DDR4 is like RDRAM in any way. RDRAM had horrible latency compared to DDR, as well.

TheresaJayne
Jul 1, 2011

Suspicious Dish posted:

Uh, no? I can't find any evidence that DDR4 is like RDRAM in any way. RDRAM had horrible latency compared to DDR, as well.

The point was that when people were using SDRAM, RDRAM was much better - and it was used in the Nintendo 64, btw.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Suspicious Dish posted:

Yeah. I know there was talk a few pages ago about emulating an architecture with more registers than x86, and I recommended using x64 to get the extra registers, but I think spilling them into ESP+x and letting L1 work its magic would work just as well.

Coming back to this for a second, the problem with that is you now need to do some fairly complex (i.e. slow) emulation on anything that touches the stack pointer in order to maintain the appropriate invariants (you can't really assume much about how a program written for the emulated processor actually uses its stack) - IIRC the thread suggested the approach of using some static memory location to hold the additional registers, which would still mean that those registers will spend most of their time in L1 (at least as long as they're being actively used).

I believe there are performance counters for L1 cache misses, so you might actually be able to profile how effective each strategy is at keeping the relevant data in L1 cache.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

TheresaJayne posted:

The point was that when people were using SDRAM, RDRAM was much better - and it was used in the Nintendo 64, btw.

Wow, the N64. I guess that settles it, Rambus was right all along.

TheresaJayne
Jul 1, 2011

Skandranon posted:

Wow, the N64. I guess that settles it, Rambus was right all along.

what would i know anyway, I am just a woman...

baquerd
Jul 2, 2007

by FactsAreUseless

TheresaJayne posted:

what would i know anyway, I am just a woman...

Fortunately we're gender-blind in here! You got caught saying something dumb, it happens to everyone, just move on.

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

Suspicious Dish posted:

I have a dumb question about compilers. Are register files any faster / slower than L1? Wikipedia is saying that a Link Register (like in ARM / PPC) is faster than having it in main memory, but I imagine that Intel would be smart enough to store the return address or various areas around ESP in L1, because it makes all the sense in the world to.

Which makes me wonder if there's a difference between L1 / a register in practice. #whoa

The actual storage location of the return address doesn't affect throughput at all because the return predictor has its own extremely reliable notion of where that branch is going, so nothing about it holds up speculation. As long as validating the prediction is fast enough that it doesn't hold up the entire instruction window, it's fine, and the only overhead of not having a link register is that there is a store to memory that has to be performed eventually (which of course means additional µ-ops and barriers to memory reordering).

That's specific to return addresses, though. The general answer, as SD said, is that the register file is a lot faster than L1. Furthermore, an operand buffer is even faster than the permanent register file, and an out-of-order CPU will rename registers so that producing a value in a register and then using it will make the second instruction's reservation station directly use the first instruction's result, dynamically reconstructing the exact computation dependency graph; I don't believe there are any CPUs that do that kind of renaming/forwarding through memory. However, in many circumstances you're going to be doing enough other things (e.g. loading other values) that will allow the CPU to hide most of the cost of those memory accesses.

Like I said before, though, if you really want maximal performance for this emulation you will want to do real register allocation.

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

pseudorandom name posted:

Modern x86 renames the top few entries of the stack like any other register.

Do you have details for this?

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Ah, yeah, that makes more sense to me. Would it be valid to not actually ever write the address to physical memory? Because I doubt anything on the rest of your bus is actually going to look at the return address or CPU stack.

Also, a lot of this stuff isn't very well documented (it's the part that makes the computer go fast, not the part that's behavior you should write to). Do you know of any documentation that talks about what modern predictors would do? Or does it mostly tend to be through-the-grape-vine, told-at-holiday-parties kind of material?

JawnV6
Jul 4, 2004

So hot ...
I don't really see how that could be the case. Register renaming works because there's a lack of side effects, while almost every memory transaction has to do a lot of bookkeeping, even just signaling to the other L1s that a write happened or checking the accessed bit in the PTE. Maybe this is one of those cases where every stack ever points to memory that is NX, cacheable, and in Exclusive lines, and the myriad other options for memory visibility aren't relevant here. It certainly wouldn't be exclusive to the stack; any L1 dcache entry would look the same.

Renaming is huge, though. Given the runway there, I'd be surprised if physical registers were actually a bottleneck - logical registers or execution ports, sure. But I'm not even sure the relevant part of the OoO machinery is capable of applying back-pressure when the physical register file is full.

JawnV6
Jul 4, 2004

So hot ...

Suspicious Dish posted:

Also, a lot of this stuff isn't very well documented (it's the part that makes the computer go fast, not the part that's behavior you should write to). Do you know of any documentation that talks about what modern predictors would do? Or does it mostly tend to be through-the-grape-vine, told-at-holiday-parties kind of material?
This might be through the grape vine, but I'm pretty sure page 190 of this pdf is the kind of thing you're looking for. It's talking about a predictor structure for call stacks and the many many ways it can get confused about the results of other predictions. But I sorta knew the magic words.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

rjmccall posted:

Do you have details for this?

I know intimate details about one implementation wherein the decoder treats memory addresses of the form esp+k for small k as a register name, which is renamed as you'd expect. There's a spare store-combining buffer kept around that is used to commit writes to those registers to the L1 at retirement, and some logic to detect unexpected clobbers to that range (which typically happen by either writing to one of those addresses via another form of addressing, or because a different thread stole the cache line). It's... messy, and I can only share details because in practice it turned out to be a bad idea.

Suspicious Dish posted:

Ah, yeah, that makes more sense to me. Would it be valid to not actually ever write the address to physical memory? Because I doubt anything on the rest of your bus is actually going to look at the return address or CPU stack.

This depends on what you mean by "physical memory". You must have started a commit to the L1 by retirement, because otherwise you don't have any hope of maintaining coherence in the unlikely case that another thread does access those addresses. That said, highly-volatile thread-local stuff like stacks will tend to stay resident on a L1 or L2, and is reasonably likely to avoid being propagated out to DRAM.

Suspicious Dish posted:

Also, a lot of this stuff isn't very well documented (it's the part that makes the computer go fast, not the part that's behavior you should write to). Do you know of any documentation that talks about what modern predictors would do? Or does it mostly tend to be through-the-grape-vine, told-at-holiday-parties kind of material?

You may enjoy reading Agner Fog's microarchitecture guide, which he reverse-engineered using direct timing measurements and his own understanding of processor design. Obviously, I couldn't possibly comment about how closely he managed to infer the design of any particular microarchitecture.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

JawnV6 posted:

This might be through the grape vine, but I'm pretty sure page 190 of this pdf is the kind of thing you're looking for. It's talking about a predictor structure for call stacks and the many many ways it can get confused about the results of other predictions. But I sorta knew the magic words.

why do the page numbers start at 157

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?

Suspicious Dish posted:

why do the page numbers start at 157

The earlier pages are in the earlier issues of the same volume of the journal. This seems to be typical in academic journals.

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

ShoulderDaemon posted:

I know intimate details about one implementation wherein the decoder treats memory addresses of the form esp+k for small k as a register name, which is renamed as you'd expect. There's a spare store-combining buffer kept around that is used to commit writes to those registers to the L1 at retirement, and some logic to detect unexpected clobbers to that range (which typically happen by either writing to one of those addresses via another form of addressing, or because a different thread stole the cache line). It's... messy, and I can only share details because in practice it turned out to be a bad idea.

Okay. That would've been my guess in the abstract: good in theory, but in practice adds so much complexity that it doesn't pay off. But I've been surprised by hardware before.

Intel does use a special unit for pushes and pops, but that's to reduce micro-ops during extended push/pop sequences (common in function prologues and epilogues), and it's just to handle the changes to SP, not the memory effects.

ButtWolf
Dec 30, 2004

by Jeffrey of YOSPOS
I didn't really know where to put this, but I think it's alright. Involving web scraping: I need to get a bunch of data (2 lines' worth from 400 pages) from Basketball Reference.

I think it's kind of a violation of the ToS, but it's not for anything I'll make money off of or anything bad, and it shouldn't be that heavy on their servers. Will they even notice? What are the ramifications of this, anyway?
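
For scale, the polite version of that is only a few lines of Python; note that the URL pattern, player IDs, and table selector below are placeholders I made up, not Basketball Reference's real page structure:
code:
import time

import requests
from bs4 import BeautifulSoup

player_ids = ["exampleplayer01", "exampleplayer02"]  # placeholder IDs; ~400 in practice

for player_id in player_ids:
    resp = requests.get(f"https://www.basketball-reference.com/players/{player_id}.html")
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    rows = soup.select("table tr")[:2]  # the couple of lines you actually care about
    print([row.get_text(" ", strip=True) for row in rows])
    time.sleep(2)  # throttle so 400 requests stay light on their servers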
