  • Locked thread
aardvaard
Mar 4, 2013

you belong in the bog of eternal stench


so a monad is... mapreduce?


fart simpson
Jul 2, 2005

DEATH TO AMERICA
:xickos:

i find i write monoid instances more often but idk

DONT THREAD ON ME
Oct 1, 2002

by Nyc_Tattoo
Floss Finder
i dont really know or care what a monad is beyond things that are really useful when you define map and flat map on them.

VikingofRock
Aug 24, 2008




CommunistPancake posted:

so a monad is... mapreduce?

Sort of! I'd say it's more that things which can be flatmapped are monadic.

Hunter2 Thompson
Feb 3, 2005

Ramrod XTreme
I'm the terrible programmer who has managed to avoid writing anything concurrent for the last four years out of fear and ignorance but finally need to learn how to do it, kind of.

Maybe you guys can give me some tips for my situation, I think it might be simple. I'm using C++11.

Right now I have this:

reader_thing is waiting for data most of the time from a hardware peripheral (reader_thing is sleeping on a select() or something underneath). When data is available reader_thing unblocks and is given a pointer to that data. After doing some things, reader_thing loops and waits for data again.

What I want to add is the following:

consumer_thing is a thing that waits around until reader_thing has its pointer to new data. After consumer_thing is unblocked it can also do things with that data from reader_thing (read only). Eventually a consumer_thing loops and blocks again waiting for reader_thing to get new data. Hopefully consumer_thing didn't take too much time doing things and miss some data from reader_thing!

There will be several consumer_things looking at this read-only data from reader_thing at the same time

It seems to me like a std::condition_variable (and a mutex???), std::condition_variable::wait(), and std::condition_variable::notify_all() are one way to do this but I know pretty much nothing and am unsure.

help? I appreciate it!


rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

VikingofRock posted:

Thank you both very much for the feedback. Yeah, I thought my design was a little lock heavy, but my thinking was that each lock is only going to be held for the length of a lookup so it's not too bad. I think rjmccall your design with the std::optional is better though (although I am stuck on C++14 so I'll be using boost::optional). I'll give that a shot tomorrow.

you're basically killing parallelism here by acquiring the lock in the first place. if your function really is far more expensive than acquiring a lock, and you really are likely to have multiple concurrent readers in the early phase when you're still evaluating the function a lot instead of returning previously computed results, then temporarily releasing the lock does re-admit some parallelism during that early phase. on the other hand, if you really do have this much concurrency, you really should be looking at using a concurrent map instead, i.e. something designed to allow look-ups without locking, and then you can use something like call_once to safely concurrently initialize the value

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

meatpotato posted:

I'm the terrible programmer who has managed to avoid writing anything concurrent for the last four years out of fear and ignorance but finally need to learn how to do it, kind of.

Maybe you guys can give me some tips for my situation, I think it might be simple. I'm using C++11.

Right now I have this:

reader_thing is waiting for data most of the time from a hardware peripheral (reader_thing is sleeping on a select() or something underneath). When data is available reader_thing unblocks and is given a pointer to that data. After doing some things, reader_thing loops and waits for data again.

What I want to add is the following:

consumer_thing is a thing that waits around until reader_thing has its pointer to new data. After consumer_thing is unblocked it can also do things with that data from reader_thing (read only). Eventually a consumer_thing loops and blocks again waiting for reader_thing to get new data. Hopefully consumer_thing didn't take too much time doing things and miss some data from reader_thing!

so, this is actually really important to the design. we can assume that it's undesirable for a consumer to miss some data. is it unacceptable? if it is acceptable, does the consumer at least need to be told that it's missed something?

also, how important is it to avoid copies? what about allocating memory?

Hunter2 Thompson
Feb 3, 2005

Ramrod XTreme

rjmccall posted:

so, this is actually really important to the design. we can assume that it's undesirable for a consumer to miss some data. is it unacceptable? if it is acceptable, does the consumer at least need to be told that it's missed something?

also, how important is it to avoid copies? what about allocating memory?

Yes, it's acceptable to miss some data but the consumer should somehow know if it missed data.

I'm honestly not sure how important it is to avoid copies, this code is running on a ~500 MHz MIPS SoC with 64 MB RAM, think budget wifi router level memory and speed. The data is coming in at 256 kbps to 384 kbps depending on setup. I don't think copying the data to give it to just one or two consumers would cause problems. It might just work to give each consumer a copy if that's the case, but I anticipate more than a few consumers, maybe.

Edit: I should mention the reader_thing ends up with a regular old std::vector<int32_t> of data, not a raw pointer into some DMA area or something like I described before. This detail is probably important because I can pass around a shared pointer to my consumers, right?

Hunter2 Thompson fucked around with this message at 08:02 on Aug 7, 2017

Luigi Thirty
Apr 30, 2006

Emergency confection port.

well that's odd. an Apple II+ is just a 48K Apple II with Applesoft BASIC in ROM instead of Integer BASIC. but for some reason the Applesoft ROM set crashes trying to boot from disk in my emulator where the Integer ROM set works fine...?

Workaday Wizard
Oct 23, 2009

by Pragmatica

meatpotato posted:

I'm the terrible programmer who has managed to avoid writing anything concurrent for the last four years out of fear and ignorance but finally need to learn how to do it, kind of.
...

use rust op. FEARLESS CONCURRENCY :rice::pcgaming::jp:

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
ok. i think you can do this non-blocking

the state for each consumer is: a consumer-owned buffer pointer C, a reader-owned buffer pointer R, whether there's new data N, and whether the consumer's missed any data M. C is private to the consumer, while R, N, and M are shared with the reader and must be mutually atomic, which you can do easily with a single-word atomic by aligning the buffer to 4 bytes and using the low bits for the flags. the general rule is that C != R and that the consumer can only safely access the data from C. the consumer polls by reading (R,N,M) atomically. if N is false, there's no new data; wait on the condition variable. otherwise the consumer swaps in (C,false,false); if that fails, they start over with the fresh values of (R,N,M), otherwise they set C to R and read the new buffer. M will be true if they missed something

every buffer has a "possible reader" count. each consumer must contribute a buffer. when a consumer is registered, the buffer's count is initialized to 1; C is set to this buffer, R is set to the latest data buffer (whose count is incremented), and N and M are set to false. to be usable by the reader, a buffer's count must be 0.

when the reader has read data into a new data buffer D, D's count is initialized to the current number of consumers. then the reader goes to each consumer, reads (R,N,M), swaps in (D,true,N) (repeating with fresh values on failure), and decrements the count on R; if the count is zero, R becomes available for use in subsequent reads. when the reader has updated all the consumers, it broadcasts on the condition variable

the reader must contribute three buffers in order to ensure that there is always a buffer to read data into: essentially, the last buffer read (handed to the consumers), the current buffer being read (handed to the driver), and the next buffer to give to the driver when claiming the current buffer. i think this can be reduced to two if necessary

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

meatpotato posted:

I'm honestly not sure how important it is to avoid copies, this code is running on a ~500 MHz MIPS SoC with 64 MB RAM, think budget wifi router level memory and speed

or 1997 $10K workstation level of memory and speed

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
It might be conceptually easier to treat it as an exercise in concurrent reference counting and just use a counter to detect missed updates.

For example, have a single shared location, which has a pointer to a buffer object.
Each buffer contains:
A reference count
A counter
The actual data

The thing populating these buffers does the following:
Sets up the buffer with a reference count of 1, and the counter as 1 + whatever the previous buffer was.
Reads the pointer to the previous buffer, and writes the new one. There's only a single writer here, so this only needs to be atomic enough to avoid split reads and ensure that other threads will see everything written into the buffer.
- If the previous buffer has a reference count of one, compare-and-set it down to zero. Then atomically add it to a common pool of buffers to reuse.
- If the reference count is greater than one, compare-and-set it down by one, then forget about it.
Pull a buffer from the common pool for the next wait.

The things consuming the buffers do the following:
Wait until the shared buffer pointer changes.
Read the shared pointer to the current buffer.
Read the reference count - if it's zero, they lost a race, so should go back and re-read the shared pointer.
Atomically set the reference count to the previously-read reference count + 1 - if it fails, and the reference count is now zero, they again lost a race and should go back and re-read the shared pointer. If it's not zero they should just try to increment it again until it succeeds.
Release the previous buffer using the same process as above (adding it to the pool if they were the last reference).
Process the buffer. They can use the counter to check if they missed any data (if it's previous counter value + 2 or more, they missed something).

Advantages are no explicit bookkeeping for things that read the buffer - they can show up and start reading whenever they like, and going away again simply requires releasing their reference to the previous buffer. You can also choose to dynamically size your pool of buffers (though you still have a fixed upper bound and can preallocate one per thing reading if you want).

cinci zoo sniper
Mar 15, 2013




jetbrains officially support rust now

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

Jabor posted:

It might be conceptually easier to treat it as an exercise in concurrent reference counting and just use a counter to detect missed updates.

i was considering this but i couldn't quite convince myself that the use-after-"free" wasn't an insurmountable problem. normally that kind of approach is a non-starter because the memory actually is freed, which invalidates racing consumers' attempts to check the refcount. here the buffer isn't actually freed but by returning it to the buffer pool i'm not sure you can't see similar effects. at the very least, the reader must be aware when preparing a new buffer that there might be consumers with a stale handle to this buffer that just haven't yet checked its refcount, so the act of setting the refcount to 1 actually publishes the buffer even before it is written to the shared reference. i think that might end up being ok as long as you make the refcount checks ordered, but it would be very easy to disturb in ways that will badly break it, and you must literally never free a buffer

Fergus Mac Roich
Nov 5, 2008

Soiled Meat

cinci zoo sniper posted:

jetbrains officially support rust now

Link

It's official support for the intellij plugin. I was hoping you meant they put out an IDE with debugger support or something but this is still good news

DONT THREAD ON ME
Oct 1, 2002

by Nyc_Tattoo
Floss Finder
I've been using the intellij rust plugin for about a month and I can confirm that it's good. definitely more powerful than the current RLS setups.

only weird thing is that it's kinda fucky about text in a way that can only be experienced. i think it might be a conflict with intellij vim but I'm not turning off vim mode to find out

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

rjmccall posted:

i was considering this but i couldn't quite convince myself that the use-after-"free" wasn't an insurmountable problem. normally that kind of approach is a non-starter because the memory actually is freed, which invalidates racing consumers' attempts to check the refcount. here the buffer isn't actually freed but by returning it to the buffer pool i'm not sure you can't see similar effects. at the very least, the reader must be aware when preparing a new buffer that there might be consumers with a stale handle to this buffer that just haven't yet checked its refcount, so the act of setting the refcount to 1 actually publishes the buffer even before it is written to the shared reference. i think that might end up being ok as long as you make the refcount checks ordered, but it would be very easy to disturb in ways that will badly break it, and you must literally never free a buffer

yeah, in the extreme case the buffer could be recycled all the way through the pool and be repopulated with new data in between getting the reference and incrementing the refcount. not actually a problem in this specific scenario though, since it's not actually any worse than if the consumer goes into a deep sleep immediately after incrementing the refcount instead of immediately before.

not being able to free buffers sucks. if you add another layer of indirection (or if your heap supports in-place reallocs) you can still free the actual data buffer and just keep the metadata though.

i wonder if you could fix the stale pointer issue (and be able to free buffers) if you had an atomic primitive big enough for the buffer pointer plus a count of consumers? each consumer increments the refcount in the shared ref, and to release the buffer it increments a field for "released references" in the buffer metadata. when the shared reference is replaced, the reader atomically subtracts the consumer count from the released references, and the buffer is disposed of (returned to the pool, freed, whatever) when it rises from -1 to 0. essentially the bigger atomic primitive guarantees each consumer is in one of two states, either it has the pointer and has incremented the refcount, or it doesn't have the pointer and has not incremented the refcount, no divergences.

FlapYoJacks
Feb 12, 2009

MALE SHOEGAZE posted:

I've been using the intellij rust plugin for about a month and I can confirm that it's good. definitely more powerful than the current RLS setups.

only weird thing is that it's kinda fucky about text in a way that can only be experienced. i think it might be a conflict with intellij vim but I'm not turning off vim mode to find out

How about you stop using a text editor from the 70's?

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

Jabor posted:

yeah, in the extreme case the buffer could be recycled all the way through the pool and be repopulated with new data in between getting the reference and incrementing the refcount. not actually a problem in this specific scenario though, since it's not actually any worse than if the consumer goes into a deep sleep immediately after incrementing the refcount instead of immediately before.

not being able to free buffers sucks. if you add another layer of indirection (or if your heap supports in-place reallocs) you can still free the actual data buffer and just keep the metadata though.

i wonder if you could fix the stale pointer issue (and be able to free buffers) if you had an atomic primitive big enough for the buffer pointer plus a count of consumers? each consumer increments the refcount in the shared ref, and to release the buffer it increments a field for "released references" in the buffer metadata. when the shared reference is replaced, the reader atomically subtracts the consumer count from the released references, and the buffer is disposed of (returned to the pool, freed, whatever) when it rises from -1 to 0. essentially the bigger atomic primitive guarantees each consumer is in one of two states, either it has the pointer and has incremented the refcount, or it doesn't have the pointer and has not incremented the refcount, no divergences.

yeah you can basically do a reader/writer spin lock

Hunter2 Thompson
Feb 3, 2005

Ramrod XTreme
jesus christ concurrency is truly awful

thanks to both of you for the ideas, though I still don't understand either well enough to write it...

I'll keep reading your posts over and over until it clicks

maybe you can tell me what's bad about this, which is what I wrote before you replied:

code:
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <iostream>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>

struct Buffer {
    std::shared_ptr<std::vector<int32_t>> data;

    std::mutex lock;
    std::condition_variable new_data;

    void update(std::shared_ptr<std::vector<int32_t>> samples){
        std::unique_lock<std::mutex> l(lock);
        data = samples;
        new_data.notify_all();
    }

    std::shared_ptr<std::vector<int32_t>> fetch(){
        std::unique_lock<std::mutex> l(lock);
        new_data.wait(l);
        return data;
    }
};

std::vector<int32_t> get_data()
{
    static int32_t iteration = 0;
    return std::vector<int32_t>(2048, iteration++);
}

void prod(Buffer& buf)
{
    while (1) {
        std::this_thread::sleep_for(std::chrono::milliseconds(250));
        std::cout << "Producing data\n";
        auto data = std::make_shared<std::vector<int32_t>>(get_data());
        buf.update(data);
    }
}

void consume(int id, Buffer& buf)
{
    while (1) {
        auto data = buf.fetch();
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        std::cout << "Consumer #" << id << " read data" << '\n';
        std::cout << "data[0] = " << data->at(0) << '\n';
    }
}

int main(int argc, char* argv[])
{
    std::cout << "Hello, world!\n";

    Buffer buf;

    std::thread producer(prod, std::ref(buf));
    std::thread c1(consume, 1, std::ref(buf));
    std::thread c2(consume, 2, std::ref(buf));
    std::thread c3(consume, 3, std::ref(buf));
    std::thread c4(consume, 4, std::ref(buf));

    c4.join();
    c3.join();
    c2.join();
    c1.join();
    producer.join();

    return 0;
}
I ripped a lot of this off some blog, not sure if I actually need to use std::ref or if I'm using mutexes, locks or condition variables correctly. It seems to work but consumers don't know if they missed anything. Also, it eats more CPU than I expect from a glance at 'top' meaning I'm probably doing something wrong.

Like I said, I wrote this before you replied... I'm not trying to be the goon in the well replacing your advice with my own lovely plans

Thanks again for the help!

Hunter2 Thompson fucked around with this message at 19:42 on Aug 7, 2017

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

meatpotato posted:

jesus christ concurrency is truly awful

...

join order for the threads shouldn't need to be in reverse start order. also they'll never join because of the infinite loop.

I also don't see anything guaranteeing that any given thread won't consume all of the data. not sure if you care.

I usually approach these things as a game. one player tries to make things blow up horribly (e.g. by pre-empting threads until very certain timings occur) and another player places locks to limit the actions of the attacker. in development, you get to be the attacker. in production, it's the OS and malicious agents.

akadajet
Sep 14, 2003

ratbert90 posted:

How about you stop using a text editor from the 70's?

this

Sapozhnik
Jan 2, 2005

Nap Ghost
have you considered some form of coroutine-based async io

feedmegin
Jul 30, 2008

ratbert90 posted:

How about you stop using a text editor from the 70's?

Emacs 4 lyfe bro

akadajet
Sep 14, 2003

feedmegin posted:

Emacs 4 lyfe bro

emacs is from the 50's

CRIP EATIN BREAD
Jun 24, 2002

Hey stop worrying bout my acting bitch, and worry about your WACK ass music. In the mean time... Eat a hot bowl of Dicks! Ice T



Soiled Meat

meatpotato posted:

jesus christ concurrency is truly awful

...

you're constantly reassigning the data over and over again instead of just filling a buffer. you're better off reading from hardware and writing into a shared buffer.

suffix
Jul 27, 2013

Wheeee!

leper khan posted:

I usually approach these things as a game. one player tries to make things blow up horribly (e.g. by pre-empting threads until very certain timings occur) and another player places locks to limit the actions of the attacker. in development, you get to be the attacker. in production, it's the OS and malicious agents.

https://deadlockempire.github.io/
this is pretty good to put the fear in you if you've ever thought oh threading isn't so hard just use a mutex

suffix
Jul 27, 2013

Wheeee!
some people when faced with a problem think oh i'll just share the data between threads
now problems you people whenood to put 12q1 Crap > YOSPOS > terrible progr^A^A
segmentation fault

CRIP EATIN BREAD
Jun 24, 2002

Hey stop worrying bout my acting bitch, and worry about your WACK ass music. In the mean time... Eat a hot bowl of Dicks! Ice T



Soiled Meat
ummm why else would it be called a SHARED_PTR???

checkmate, n00b

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

meatpotato posted:

jesus christ concurrency is truly awful

thanks to both of you for the ideas, though I still don't understand either well enough to write it...

yeah, so i was just here to have fun puzzling out a lock-free, copy-free algorithm. i think maybe you're struggling with a lot of things besides concurrency that i'm not going to have the time to explain, sorry

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

suffix posted:

https://deadlockempire.github.io/
this is pretty good to put the fear in you if you've ever thought oh threading isn't so hard just use a mutex

never said it isn't hard. mutexes work fairly well and aren't /that/ hard to work with. getting lock free solutions to work is where the fun really starts. for most things, locks are sufficient though, and there's no need to scare people away from offloading some simple computations onto background threads.

Hunter2 Thompson
Feb 3, 2005

Ramrod XTreme

rjmccall posted:

yeah, so i was just here to have fun puzzling out a lock-free, copy-free algorithm. i think maybe you're struggling with a lot of things besides concurrency that i'm not going to have the time to explain, sorry

lol yes programming is my struggle, I aspire to suck less

Thanks for your help though, it's given me things to ponder

Hunter2 Thompson
Feb 3, 2005

Ramrod XTreme

CRIP EATIN BREAD posted:

you're constantly reassigning the data over and over again instead of just filling a buffer. you're better to read from hardware and write to a shared buffer.

If I did this instead:

code:
void get_data(std::vector<int32_t>& data_out)
{
    static int32_t iteration = 0;
    data_out.clear();
    data_out.assign(2048, iteration++);
}

void prod(Buffer& buf)
{
    std::vector<int32_t> data_from_hw;
    data_from_hw.reserve(2048);

    while (1) {
        std::this_thread::sleep_for(std::chrono::milliseconds(250));
        std::cout << "Producing data\n";
        get_data(data_from_hw);
        buf.update(std::make_shared<std::vector<int32_t>>(data_from_hw));
    }
}
It's better because I'm not allocating a new buffer each time I get data, right?

I know I still need to fix the concurrency parts.

Ediot: lol I don't know the std::vector api well. I don't think assign() is what I want here...

edit1: no it actually is what I want. I'm loving something else up :/
edit2: nope ignore all these edits, I was missing the '&' in the args to get_data when I was messing around

Hunter2 Thompson fucked around with this message at 23:16 on Aug 7, 2017

necrotic
Aug 2, 2005
I owe my brother big time for this!

wrong modal editing is the best

Soricidus
Oct 21, 2010
freedom-hating statist shill

necrotic posted:

wrong modal editing is the best

I too love when i type something in and then realise a couple seconds later that I wasn't in insert mode. keeps life interesting

hobbesmaster
Jan 28, 2008

Soricidus posted:

I too love when i type something in and then realise a couple seconds later that I wasn't in insert mode. keeps life interesting

always hit escape

akadajet
Sep 14, 2003

necrotic posted:

wrong modal editing is the best

yeah, a text editor I can't just blindly type poo poo into. a great idea for the ages.

akadajet
Sep 14, 2003

you know what a good text editor is? visual studio code.

it's intuitive and good.


Corla Plankun
May 8, 2007

improve the lives of everyone
i still dont know how tf to paste into vim even though ive done it successfully like ten or twelve times
