  • Locked thread
Jabor
Jul 16, 2010

#1 Loser at SpaceChem
have you considered not giving colossal fuckups the permissions necessary to colossally gently caress things up


Jabor
Jul 16, 2010

#1 Loser at SpaceChem
basically if you have a semicolon at the end, the "last statement" that gets returned is the empty bit between that semicolon and the end of the block

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

quote:

c# style events; why on earth do i have to check for null every time i want to fire one?

I honestly still get angry at how much they hosed this up. It's just so incomprehensibly bad. It almost feels like no-one even attempted to write code using events before they finalized the design, because that whole issue (and the trivially easy fix) is literally the first thing that leaps out at you when you try to write one.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Share Bear posted:

does that help? I'm trying to figure out what the curly brackets mean or imply

basically you have a set (indicated by the {} brackets)

each element of the set is a pair, containing the x_i vector and the corresponding data point y_i.

it doesn't actually have any deep meaning or anything, you should probably read forward until they start actually doing something with that data. once you know what they're trying to achieve, the representations chosen will make more sense
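for reference, that set usually gets typeset something like this (assuming it's the usual setup of n input/label pairs, which is what the description above sounds like):

code:
D = \{ (x_i, y_i) \}_{i=1}^{n}, \qquad x_i \in \mathbb{R}^d, \; y_i \in \mathbb{R}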

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

gonadic io posted:

yuuuuuuuuup

hmm, you seem to be trying to render the year multiple times. and what is up with that padding specifier?

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Luigi Thirty posted:

escape velocity
escape velocity override

marathon

uhhhhh

escape velocity had a cool modding scene though because everything editable was implemented as a Resource

this was good fun when you wanted to use mods with the windows version, because you'd unzip them and they'd be zero bytes because the file system didn't actually support resource forks

you had to use a special tool that you found on a sketchy forum somewhere to extract it into the format that the windows version expected

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
just make it a keyword that stops being a keyword if there is some other thing named "yield" in scope

you already need to know what names are in scope to parse the language correctly, so it's not a huge burden
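for the record, java eventually did something in this family with yield in switch expressions - it's a contextual keyword rather than a reserved word, so (as far as i remember) existing code that uses yield as an ordinary name keeps compiling. a quick sketch (java 14+):

code:
static String describe(int n) {
    return switch (n) {
        case 0  -> "zero";
        default -> {
            String sign = n > 0 ? "positive" : "negative";
            yield sign;   // contextual keyword: produces the value of the switch expression
        }
    };
}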

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

JewKiller 3000 posted:

please tell me you don't design languages

if your language is already unambiguously parseable without that information then it's a horrible idea yeah

but you already need to know what names are in scope and what they represent in order to figure out what
code:

butt * hole;
means, so for c++ specifically it's not so bad

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
no one will ever want to sort more than 32 megs of data

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

redleader posted:

what is the best proportional font for coding use? preferably available on windows by default thx

Verdana is fine, and available pretty much everywhere.

I really like the Go font though. (Stay away from the monospace one imo, but the proportional one is good)

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Luigi Thirty posted:

all right let's see if we can figure out how to read these wacky atari analog joyst--oh dammit



:mad:

well i've sussed out how to read if the stick is pressed up so far

As far as I can tell:

- You read one of the addresses that are mapped to an analog-to-digital channel. Presumably the one corresponding to the axis you want to read...
- Ignore the value because it's probably garbage
- You get an interrupt some time later, once the ADC has actually done the conversion.
- To get the real value you ___________ the ______ which _____

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

The_Franz posted:

i thought they had a giant perforce repo at one point?

yup, it's even mentioned in the article. the whole custom-monorepo thing is essentially because it got too large for perforce.

if you're not a globe-spanning megacorp you can probably run a monorepo just fine on existing commercial software. i'd even venture that investing money into making that happen is probably more cost-effective than trying to make any sort of fine-grained split repo work.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

cinci zoo sniper posted:

the issue i have with my way of testing is that it is not, uhh, automated enough. i sit down with a function, figure out math for creating an input, then i create it and feed it in. how my imagination perceived testing is that i have a thing that comes with a bunch of things on its own, feeds them all, and looks for discrepancies - the result there, as far as i get it atm, is that i basically use my own function, directly or written for the second time, to gauge its accuracy, which seems to defeat the purpose of testing. function itself wouldn't know it's broken, would it?

i probably just dont understand unit testing correctly. ive never bothered to seriously read about it, and my internal difference between "unit testing" and "testing" is that in the former i test piece by piece, not a vs z

most people don't understand unit testing, so it's not like you're alone in that

one big complication is that there are two fundamentally different types of testing, but people just lump them all in as a "unit test" without clarifying which one they mean.

- regression testing is all about making it possible to change code, without inadvertently breaking functionality. a good regression test starts failing if the code changes in such a way that something is now broken ("has regressed"), while continuing to pass if the change doesn't affect the functionality tested by that particular test.
- specification testing is all about ensuring that the code being written matches some form of prior specification as to what it should be doing. often you can use the same bank of tests to test multiple different implementations that claim to meet the specification.

for regression testing, it's totally reasonable to run the code, see what value it's actually outputting in a relevant scenario, and just write a test to ensure that it keeps outputting that same value in that particular scenario. if you've chosen a bad scenario your test might end up being a bad test (in that it fails all the time even when the changes don't actually break anything), but it will still be a valid regression test.

on the other hand, if you're doing specification testing, that's completely backwards and you'd be right to be suspicious of it. what you're supposed to do to write specification tests is to create a scenario, look at the specification to see what the output should be, and write the test to make sure that's the case. this is a place where the test-driven-development model of writing your tests before you even start writing the actual code can be useful.
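here's a minimal junit sketch of the difference - all the names (Slugger.slugify etc.) are made up, what matters is where the expected values come from:

code:
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class SlugifyTest {
    // regression test: the expected value was obtained by running the current code
    // and pinning whatever it produced, so future changes that alter it get flagged
    @Test
    void keepsProducingTheSameSlug() {
        assertEquals("hello-world-2", Slugger.slugify("Hello, World! (2)"));
    }

    // specification test: the expected value comes from the spec ("lowercase, runs of
    // non-alphanumerics collapse to a single hyphen"), written before the code exists
    @Test
    void collapsesRunsOfPunctuationToOneHyphen() {
        assertEquals("a-b", Slugger.slugify("a --- b"));
    }
}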

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Shaggar posted:

java enums are real good cause they can have additional fields and methods and other stuff and are effectively just static classes with a list of static instances that the compiler understands.

Java enums are really good

C# gave all that up for ... the ability to use them as bitflags, lol (a problem that Java solved with EnumSet)
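e.g. the standard planets example, plus the EnumSet version of "flags":

code:
import java.util.EnumSet;
import java.util.Set;

// enum constants are real objects, so they can carry fields and methods
enum Planet {
    MERCURY(3.30e23, 2.44e6),
    EARTH  (5.97e24, 6.37e6);

    final double massKg, radiusM;
    Planet(double massKg, double radiusM) { this.massKg = massKg; this.radiusM = radiusM; }

    double surfaceGravity() { return 6.674e-11 * massKg / (radiusM * radiusM); }
}

// and a set of flags is just an EnumSet, no bit-twiddling required
class Flags {
    static final Set<Planet> ROCKY = EnumSet.of(Planet.MERCURY, Planet.EARTH);
}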

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
different namespaces for functions vs. other objects?

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Maluco Marinero posted:

you're telling us you were never a bad programmer? you're in the wrong thread friend.

Or maybe he's dunning-krugered his way into the right one

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

MALE SHOEGAZE posted:

so, i'm working on speeding up my memcached clone and I'm currently messing with the way access to the backing store is synchronized. I'm currently testing two different methods:

1) There's a mutex around the backing LRU cache (both reads and writes need the mutex, because the LRU cache uses a linked hash map and getting items rearranges the data). Each request needs to wait for the lock. This takes about 25 micros per request.

2) I've got a single worker with unsynchronized access to the store, running in a loop. When a request comes in, I push the work onto a dequeue. The worker spins until work gets added to the queue, it pops the work off the queue, does the work (adding/removing/getting items from the cache), and then sends a response via a channel. Despite all of the ceremony, this method seems to take about 11-13 micros per request. It's faster!

I guess my question is: what's the best way to handle this (assuming i'm building a memcached-like key value store, with performance being the emphasis)? Memcached uses slabs and I'm still reading up on understanding how slabs would be used for caching.

also sorry this question is stupid, i smoked too much before posting it

3) use a concurrent cache implementation that has a better locking strategy than "lock the whole cache for every read"
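rough java sketch of what i mean (not what memcached itself does - that's slabs plus its own locking): N independent LRU shards, each behind its own lock, so two requests only contend when their keys hash to the same shard

code:
import java.util.LinkedHashMap;
import java.util.Map;

final class ShardedLruCache<K, V> {
    private final LinkedHashMap<K, V>[] shards;

    @SuppressWarnings("unchecked")
    ShardedLruCache(int shardCount, int capacityPerShard) {
        shards = new LinkedHashMap[shardCount];
        for (int i = 0; i < shardCount; i++) {
            shards[i] = new LinkedHashMap<K, V>(16, 0.75f, true) { // access-order = LRU
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    return size() > capacityPerShard;
                }
            };
        }
    }

    private LinkedHashMap<K, V> shardFor(K key) {
        return shards[Math.floorMod(key.hashCode(), shards.length)];
    }

    V get(K key) {
        LinkedHashMap<K, V> shard = shardFor(key);
        synchronized (shard) { return shard.get(key); } // get() reorders the LRU list, so it still needs the shard's lock
    }

    void put(K key, V value) {
        LinkedHashMap<K, V> shard = shardFor(key);
        synchronized (shard) { shard.put(key, value); }
    }
}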

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

cis autodrag posted:

what if i want to crash immediately on a null without having to explicitly null check everywhere?

practically every option type implementation has a "get if exists, crash otherwise" if that's what you really really want to do
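e.g. in java terms:

code:
import java.util.Optional;

class Demo {
    // "get if exists, crash otherwise": throws NoSuchElementException when empty
    static String requireName(String maybeNull) {
        return Optional.ofNullable(maybeNull).orElseThrow();
    }
}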

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Soricidus posted:

not all of it, no

so by default it will allow some reflective access of certain parts of the standard library now. that's all. and it dumps pages of warning messages to stderr telling users that the program is broken.

they have still pressed on with modularisation, and some modules are no longer available by default that used to be, so learn to love those command line flags!

and they have still (at least within javafx) happily made sweeping changes to things everyone was using even though they were technically private. some of these changes have been to make those things public -- but since this involved changing their names or packages, there is basically literally no way to have a non-trivial javafx program that runs on java 8 and java 9 without maintaining two versions of the code

if the problem is you're doing reflective access to technically-private things that got moved around in java 9... can't you just fall back to looking up the java 8 way if your code doesn't find the java 9 version?

put that logic inside a convenience wrapper and that sounds (at least to me) like it makes everything work without maintaining two versions of the code
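something like this - the class names are made-up placeholders for wherever the thing lived on java 8 vs. where it moved to on java 9:

code:
final class CompatLookup {
    static Class<?> findInternalHelper() {
        try {
            return Class.forName("javafx.something.NowPublicHelper");   // hypothetical java 9+ location
        } catch (ClassNotFoundException e) {
            try {
                return Class.forName("com.sun.javafx.InternalHelper");  // hypothetical java 8 location
            } catch (ClassNotFoundException e2) {
                throw new IllegalStateException("helper not found on this runtime", e2);
            }
        }
    }
}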

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Conceptually, understanding what makes something monadic and how you'd create one is really useful.

But in practice yeah, most things you typically come across that make useful monads are fairly common, and someone else has already done the implementation work for you. Either that or, even though it's monadic, you don't actually need it to be a monad for your specific use case.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
It might be conceptually easier to treat it as an exercise in concurrent reference counting and just use a counter to detect missed updates.

For example, have a single shared location, which has a pointer to a buffer object.
Each buffer contains:
- A reference count
- A counter
- The actual data

The thing populating these buffers does the following:
- Sets up the buffer with a reference count of 1, and the counter as 1 + whatever the previous buffer's counter was.
- Reads the pointer to the previous buffer, and writes the new one. There's only a single writer here, so this only needs to be atomic enough to avoid split reads and to ensure that other threads will see everything written into the buffer.
- Releases the previous buffer:
  - If the previous buffer has a reference count of one, compare-and-set it down to zero, then atomically add it to a common pool of buffers to reuse.
  - If the reference count is greater than one, compare-and-set it down by one, then forget about it.
- Pulls a buffer from the common pool for the next write.

The things consuming the buffers do the following:
- Wait until the shared buffer pointer changes.
- Read the shared pointer to the current buffer.
- Read the reference count. If it's zero, they lost a race, so they should go back and re-read the shared pointer.
- Atomically compare-and-set the reference count to the previously-read reference count + 1. If that fails and the reference count is now zero, they again lost a race and should go back and re-read the shared pointer. If it's not zero, they should just retry the increment until it succeeds.
- Release the previous buffer using the same process as above (adding it to the pool if they were the last reference).
- Process the buffer. They can use the counter to check whether they missed any data (if it's the previous counter value + 2 or more, they missed something).

The advantages: no explicit bookkeeping for things that read the buffer - they can show up and start reading whenever they like, and going away again simply requires releasing their reference to the previous buffer. You can also size your pool of buffers dynamically (though the number of buffers in use still has a fixed upper bound, so you can just preallocate one per reader if you prefer).
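A rough Java sketch of the above (names made up, the "wait until the pointer changes" signalling left out, and the release path collapsed into a single atomic decrement instead of the two compare-and-set cases):

code:
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

final class Buffer {
    final AtomicInteger refCount = new AtomicInteger(1);
    long counter;     // sequence number, so consumers can detect missed updates
    byte[] data;      // the actual data
}

final class BufferExchange {
    private final AtomicReference<Buffer> shared = new AtomicReference<>(new Buffer());
    private final ConcurrentLinkedQueue<Buffer> pool = new ConcurrentLinkedQueue<>();

    // Producer (single writer): publish a freshly filled buffer.
    void publish(Buffer next) {
        Buffer prev = shared.get();
        next.refCount.set(1);                  // the shared pointer itself holds one reference
        next.counter = prev.counter + 1;
        shared.set(next);                      // single writer, so a plain atomic store is enough
        release(prev);                         // drop the shared pointer's reference to the old buffer
    }

    Buffer takeFromPool() {
        Buffer b = pool.poll();
        return b != null ? b : new Buffer();   // the pool can grow dynamically if you let it
    }

    // Consumer: grab the current buffer, retrying if it lost a race with recycling.
    Buffer acquire() {
        while (true) {
            Buffer b = shared.get();
            int rc = b.refCount.get();
            while (rc > 0) {                   // rc == 0 means it was already recycled: re-read the pointer
                if (b.refCount.compareAndSet(rc, rc + 1)) return b;
                rc = b.refCount.get();
            }
        }
    }

    // Shared by producer and consumers: drop a reference, pooling the buffer on the last one.
    void release(Buffer b) {
        if (b.refCount.decrementAndGet() == 0) pool.add(b);
    }
}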

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

rjmccall posted:

i was considering this but i couldn't quite convince myself that the use-after-"free" wasn't an insurmountable problem. normally that kind of approach is a non-starter because the memory actually is freed, which invalidates racing consumers' attempts to check the refcount. here the buffer isn't actually freed but by returning it to the buffer pool i'm not sure you can't see similar effects. at the very least, the reader must be aware when preparing a new buffer that there might be consumers with a stale handle to this buffer that just haven't yet checked its refcount, so the act of setting the refcount to 1 actually publishes the buffer even before it is written to the shared reference. i think that might end up being ok as long as you make the refcount checks ordered, but it would be very easy to disturb in ways that will badly break it, and you must literally never free a buffer

yeah, in the extreme case the buffer could be recycled all the way through the pool and be repopulated with new data in between getting the reference and incrementing the refcount. not actually a problem in this specific scenario though, since it's not actually any worse than if the consumer goes into a deep sleep immediately after incrementing the refcount instead of immediately before.

not being able to free buffers sucks. if you add another layer of indirection (or if your heap supports in-place reallocs) you can still free the actual data buffer and just keep the metadata though.

i wonder if you could fix the stale pointer issue (and be able to free buffers) if you had an atomic primitive big enough for the buffer pointer plus a count of consumers? each consumer increments the refcount in the shared ref, and to release the buffer it increments a field for "released references" in the buffer metadata. when the shared reference is replaced, the writer atomically subtracts the consumer count from the released references, and the buffer is disposed of (returned to the pool, freed, whatever) when it rises from -1 to 0. essentially the bigger atomic primitive guarantees each consumer is in one of two states, either it has the pointer and has incremented the refcount, or it doesn't have the pointer and has not incremented the refcount, no divergences.
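rough java sketch of that last idea (names made up), using AtomicStampedReference as the "pointer plus a count" primitive; the stamp is how many consumers have acquired the currently-published buffer, and as a slight variation on the rise-from-minus-one rule, whoever's atomic add lands the per-buffer counter exactly on zero does the recycling (so the case where the writer's subtraction is the last operation is covered too):

code:
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicStampedReference;

final class Slot {
    final byte[] data;                                   // the actual payload
    final AtomicInteger released = new AtomicInteger();  // releases so far, minus acquirers once the writer moves on
    Slot(byte[] data) { this.data = data; }
}

final class SharedSlot {
    // stamp = number of consumers that have acquired the currently-published slot
    private final AtomicStampedReference<Slot> current =
            new AtomicStampedReference<>(new Slot(new byte[0]), 0);

    // Consumer: reading the pointer and bumping the count is one CAS, so a consumer
    // either has both the pointer and a counted acquisition, or it has neither.
    Slot acquire() {
        int[] stamp = new int[1];
        while (true) {
            Slot s = current.get(stamp);
            if (current.compareAndSet(s, s, stamp[0], stamp[0] + 1)) return s;
        }
    }

    void release(Slot s) {
        if (s.released.incrementAndGet() == 0) recycle(s); // last outstanding reference
    }

    // Single writer: swap in the new slot, then charge the old slot with the
    // number of consumers that acquired it before the swap.
    void publish(Slot next) {
        int[] stamp = new int[1];
        Slot old;
        do {
            old = current.get(stamp);
        } while (!current.compareAndSet(old, next, stamp[0], 0));
        if (old.released.addAndGet(-stamp[0]) == 0) recycle(old);
    }

    private void recycle(Slot s) { /* return to a pool, or just let it be freed */ }
}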

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
good luck implementing exactly-once semantics in any meaningful way though

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

JawnV6 posted:

these were fun little games, but the ‘boss fight’ at the end is a bear. everything else was pretty straightforward, get one thread in the right place and cycle the aggressor, but that last one is the kind of failure you’d catch with batches of tests not inspection

actually, you'd catch it by inspection by noting that the critical section isn't actually guarded by the monitor. who gives a poo poo if you can't figure out the exact sequence that breaks it immediately when the underlying flaw is obvious?

if you're relying on tests to catch a nondeterministic failure you're going to have a bad time at some point

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
i quite like protocol buffers

because they're basically impossible to create by hand, so there's always a definition for them somewhere that outlines the boundaries of the api. it's not always a comprehensive description of what is actually expected and returned in various cases, but it is at least a workable baseline.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

FamDav posted:

Collections.singleton(T t) will infer T based on the return value of the method hth

edit: actually i'm fairly confident that java inference was improved in jdk8 to the point where it would infer it based on the parameter to singleton itself. i know they did a ton of work to improve type inference to make lambdas not a huge shitshow

the point at issue is that the type of the collection is deliberately different from the parameter type (presumably to conform to some imperfectly-defined interface)

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Zemyla posted:

So they had to have the width be six bits wide, and they felt they wanted to be able to express widths up to 57344 in it, but to be unable to have a width of 9?

it sounds more like the floating-point multiplier was easier to implement in the hardware (just a shifter and a three-way adder) than a full-on integer multiplier would have been.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Sapozhnik posted:

Hmm? Explain, I'm curious. Always assumed FP alus were horribly complex beasts

fake edit: what happens if you feed it a stray cat denormal or other such weird input

first of all, you'll notice that it's a pretty constrained format. we're not talking ieee 754 here.

computers essentially multiply via long multiplication - to work out A * B, you work out A * 2, A * 4, A * 8 etc., and add together the ones that correspond to set bits in B.

so the size of your multiplier is based on how large B could be (which dictates how many bits you might need to add together). if you wanted to support bitmaps up to 511x511, you'd need 9 bits, which means your multiplier would need to add together 9 different numbers. that's a huge amount of circuitry required! (the other option is to do the adding up over multiple cycles, re-using the same bits of circuitry, which is also complicated and still pretty big.)

with this floating-point scheme, your multiplier only needs to add three different numbers, which is way smaller. then you just need a shifter on the front to apply the exponent, which is also pretty small.
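software version of the same shift-and-add idea (the hardware just does all the rows at once):

code:
class ShiftAddMultiply {
    // long multiplication in binary: add a shifted copy of a for every set bit of b
    static long multiply(long a, long b) {
        long result = 0;
        for (int bit = 0; bit < 64; bit++) {
            if (((b >>> bit) & 1) != 0) {
                result += a << bit;   // a * 2^bit
            }
        }
        return result;
    }
}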

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
they don't fit neatly together if you're using them as a clustering key

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
essentially if you straight multiply them together you get a 32.32 fixed-point number. so you can right-shift it to make it 48.16 (with the exact same value), then cut off the high bits to 16.16.
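in java-ish terms, assuming a 64-bit multiply is available:

code:
class Fixed16x16 {
    // inputs are signed 16.16 fixed point: the full product is 32.32,
    // shifting right by 16 makes it 48.16, truncating to 32 bits gives 16.16
    static int multiply(int a, int b) {
        long product = (long) a * (long) b;   // sign-extended 64-bit multiply -> 32.32
        return (int) (product >> 16);         // keep the middle 32 bits -> 16.16
    }
}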

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Are you sign-extending the negative number when you convert it to 64 bits for the multiplication?

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
dunno about python specifically, but in a lot of languages/runtimes the exception-based version is slightly faster in the normal case (and pays for it by being horrendously slower if the exception is actually thrown).
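the two idioms, sketched in java since that's one of the runtimes where this roughly holds (entering a try block is essentially free when nothing is thrown, while the up-front check costs something on every call):

code:
class ParseStyles {
    // look-before-you-leap: validate first, then parse
    static int parseChecked(String s) {
        return s.matches("-?\\d+") ? Integer.parseInt(s) : -1;
    }

    // exception-based: just try it, and only pay when it actually fails
    static int parseCatching(String s) {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            return -1;
        }
    }
}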

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
So your multiply has 16 bit inputs and a 32 bit output, yes?

Essentially there are four components you add up to make that into a 32 in/64 out multiply:

code:
A.h * B.h
     A.h * B.l
     A.l * B.h
          A.l * B.l

If your numbers are in 16.16 form, you want the middle 32 bits - i.e. A.h*B.l + A.l*B.h + (A.h*B.h << 16) + (A.l*B.l >> 16).
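In java-ish terms (each of those four partial products is one 16x16 -> 32 multiply, with the high halves signed and the low halves unsigned):

code:
class MulBy16BitHalves {
    // 16.16 fixed-point multiply built from 16x16 -> 32 partial products:
    // returns the middle 32 bits of the full 64-bit product
    static int fixedMul(int a, int b) {
        int ah = a >> 16;        // sign-extended high half
        int al = a & 0xFFFF;     // unsigned low half
        int bh = b >> 16;
        int bl = b & 0xFFFF;

        // A.h*B.l + A.l*B.h + (A.h*B.h << 16) + (A.l*B.l >> 16)
        return ah * bl + al * bh + (ah * bh << 16) + ((al * bl) >>> 16);
    }
}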

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Luigi Thirty posted:

if I multiply -2 * 2, the result is correct ($FFFE0000)

umm

(fuller response in a bit, this just leapt out at me)

e: actually I gotta ask, how exactly is stuff like -0.5 represented? Is there a "negative zero" representation in the integer part?

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
so here's what should be happening when you multiply -2.5 x 1:

code:
0xFFFD8000 * 0x00010000
(0xFFFD * 0x0001) << 16 = 0xFFFD0000
 0x8000 * 0x0001        = 0x00008000
 0xFFFD * 0x0000        = 0x00000000
(0x8000 * 0x0000) >> 16 = 0x00000000
for a total of 0xFFFD8000 (as expected). try printing the intermediate values?

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
"my workplace bans the language facilities that make it easy to do X; why does this language make it so hard to do X?"


Jabor
Jul 16, 2010

#1 Loser at SpaceChem
rip akadajet's post, lost forever
